
Ethical Intelligence

Julian is a cognitive scientist who conducts research on ethical intelligence, at the intersection of psychology, ethics, and artificial intelligence. He studies how consumers' ethical intelligence influences their attitudes toward companies, and, in turn, how companies can market in ways that are sensitive to these moral sensibilities. He studies these relationships through specific case studies, such as the ethics of automated machines like autonomous vehicles and companion chatbots, and by studying attitudes toward firms more broadly, as when firms morally deteriorate or engage in brand activism. His work helps managers remove barriers to AI adoption, proactively address risk factors arising from the autonomous nature of their products, and determine when and how to become publicly involved in morally relevant matters.

He has consulted with several companies, including Perceptive Automata, Motional, Swiss Re, May Mobility, Koa Health, Replika AI, and Brynwood Partners.


AI Ethics

How do we create complex autonomous systems that behave in ‘ethically acceptable’ ways, and how do we convince consumers that they do? This work considers the case study of autonomous vehicles—the first truly autonomous systems to operate in populated environments. Ethics is relevant to how they should be programmed, regulated, and marketed.

  • De Freitas, J., Censi, A., Smith, B. W., Di Lillo, L., Anthony, S. E., & Frazzoli, E. (2021). From driverless dilemmas to more practical common-sense tests for automated vehicles. Proceedings of the National Academy of Sciences. [supp. materials]

  • De Freitas, J., Anthony, S. E., Censi, A., & Alvarez, G. A. (2020). Doubting driverless dilemmas. Perspectives on Psychological Science.

  • De Freitas, J., & Cikara, M. (2021). Deliberately prejudiced self-driving vehicles elicit the most outrage. Cognition. [supp. materials]

  • De Freitas, J., Uğuralp, K., & Oguz, Z. Ethical risks of autonomous products: The case of mental health crises on AI companion applications. Under review.

  • De Freitas, J. Unselfish alibis increase choices of selfish autonomous vehicles. Under review.

AI Liability

  • De Freitas, J., Zhou, X., Atzei, M., Boardman, S., Di Lillo, L. Public perception and autonomous vehicle liability. Under review.

  • De Freitas, J. Will we blame self-driving cars? The Wall Street Journal.

Corporate Identity and Moral Essentialism

Do ethical considerations pervade even the everyday concepts that people use to keep track of and evaluate companies and brands? This work studies how the concepts that people normally use to think about or advise individual people—such as authenticity and meaningfulness—are also used to understand non-human entities like companies and brands. In particular, these studies uncover a default tendency for people to believe that deep inside every individual and organization there is a “good true self” calling them to behave in a morally virtuous manner. We propose that this belief arises from a general cognitive tendency known as moral essentialism. 

Moral Thin Slicing

How does the human perceptual system contribute to our ability to make moral judgments about what we see? One story is that it is involved only in a boring way (e.g., extracting color, object identities). My collaborators and I suggest that, instead, the visual system directly extracts the kinds of high-level information on which moral judgment depends, such as role, harm, and causation. This contribution allows us to make rapid moral judgments, as when rapidly browsing content on social media or in other digital contexts.  

  • De Freitas, J., & Hafri, A. Moral thin-slicing: How snap judgments affect online sharing of moral content. SSRN.

  • De Freitas, J., & Alvarez, G. A. (2018). Your visual system provides all the information you need to make moral judgments about generic visual events. Cognition, 178, 133–146.

  • Tarhan, L., De Freitas, J., & Konkle, T. Behavioral and neural representations en route to intuitive action understanding. Neuropsychologia.

Common Knowledge and Recursive Mentalizing

Most work in psychology has studied the representation of others' beliefs about the world, also known as theory of mind. My collaborators and I have investigated how representations of knowledge—including knowledge that others have about our own beliefs (e.g., you know X, I know that you know X) and common knowledge (you know X, I know that you know X, you know that I know that you know X, ad infinitum)—affect diverse social phenomena such as the bystander effect and perceptions of charitability. We propose that, rather than being represented as an explicit, multiply nested proposition, common knowledge may be a distinctive cognitive state, corresponding to the sense that something is public or "out there".
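The contrast above, between finitely nested propositions and true common knowledge, can be made concrete with a small, purely illustrative sketch (my own formalization, not taken from the papers): each level of shared knowledge adds one layer of "X knows that..." nesting, whereas common knowledge corresponds to the limit of this sequence holding at every depth.

```python
def knowledge_level(fact, agents, depth):
    """Build the depth-th level of recursive knowledge between two agents:
    'A knows that B knows that ... fact', alternating agents per level.
    A purely illustrative formalization (hypothetical, not from the papers)."""
    prop = fact
    for i in range(depth):
        prop = f"{agents[i % 2]} knows that {prop}"
    return prop

# Finite shared knowledge stops at some depth; common knowledge would
# require every level of this sequence to hold, ad infinitum.
print(knowledge_level("X", ["A", "B"], 3))
# -> A knows that B knows that A knows that X
```

The explicit-proposition view would require representing ever-longer strings of this kind; the alternative proposed above is that common knowledge is instead a single cognitive state marking a fact as public.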



Curiosity-Driven Social Learning

This work explores the idea that complex behaviors, like animate attention, can be explained by the interaction of a world model (which predicts future states of the world) and an intrinsically motivated self model (which motivates the agent to spend time predicting parts of the world with certain features). In particular, we find that paying attention to aspects of the environment where one is continuing to learn new information (which we term progress curiosity) is a particularly powerful way to give rise to human-like behaviors like animate attention, without the need for built-in modules or hand-written rules. This work uses a photorealistic 3D environment that we created for measuring artificial and human agents—either while they wear mobile eye trackers or virtual reality goggles. We also run setups in which the stimuli are presented by real robots.


Other Papers
