Ethical Intelligence

What is the cognitive toolkit that determines how consumers view companies? My research program approaches this question by focusing on what I call ethical intelligence: the cognitive tools that people need in order to coordinate in social life, as when using the same products as others, buying beer that others at a party will enjoy, or conforming to community standards in order to be viewed as popular. Broadly, I study:

 

1. How ethical intelligence influences consumer attitudes towards companies.

2. In turn, how companies can market in ways that are sensitive to these moral buttons.

I study this relationship through specific case studies, such as the ethics of autonomous machines, ascriptions of charitability, and corporate essentialism. Studying ethical intelligence can demystify the high-level thoughts and intuitions (aka 'common sense') that make humans unique, engender stable institutions in which people coordinate for the greater good, and help us live better lives. I approach these questions using various methods from experimental psychology and machine learning, while drawing on theoretical insights from game theory, evolutionary biology, and philosophy.

I have consulted with several companies, including: Perceptive Automata, Motional, Swiss Re, May Mobility, Koa Health, Replika AI, and Brynwood Partners. 

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Ethics of Autonomous Machines

How do we create complex autonomous systems that have 'ethically acceptable' behavior? The current work considers the case study of autonomous vehicles, the first truly autonomous systems to operate in populated environments. Ethics is relevant to how they should be programmed, regulated, and perceived.

Common Knowledge and Recursive Mentalizing

Most work in psychology has studied the representation of others' beliefs about the world, aka theory of mind. My collaborators and I have investigated how representations of knowledge, including knowledge that others have about our own beliefs (e.g., you know X, I know that you know X) and common knowledge (you know X, I know that you know X, you know that I know that you know X, ad infinitum), affect diverse social phenomena such as the bystander effect and perceptions of charitability. We propose that, rather than being represented as an explicit, multiply nested proposition, common knowledge may be a distinctive cognitive state, corresponding to the sense that something is public or "out there".
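To make the representational contrast concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not taken from these papers; all class and field names are hypothetical). It contrasts an explicitly nested proposition, which grows with every added level of "I know that you know that...", with a single "this is public" state whose size does not depend on the depth of mentalizing.

```python
# Toy illustration (hypothetical names, not from the papers): two ways an
# agent might represent shared knowledge about a fact X.

from dataclasses import dataclass


@dataclass
class NestedBelief:
    """Explicit recursion: each extra level wraps another 'knows that ...'."""
    fact: str
    depth: int  # 1 = "you know X", 2 = "I know that you know X", ...

    def describe(self) -> str:
        clause = self.fact
        for level in range(self.depth):
            speaker = "I know that" if level % 2 else "you know that"
            clause = f"{speaker} {clause}"
        return clause


@dataclass
class PublicFact:
    """Alternative: common knowledge as a single 'this is public' state,
    with no explicit nesting stored at all."""
    fact: str
    is_public: bool = True


if __name__ == "__main__":
    # The nested representation grows with every extra level of mentalizing...
    print(NestedBelief(fact="the alarm went off", depth=3).describe())
    # ...whereas the 'public' representation stays the same size at any depth.
    print(PublicFact(fact="the alarm went off"))
```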

The True Self and Moral Essentialism

Representations of and beliefs about the concept of “a self” vary across cultures, perspectives (first vs. third), and individuals. Yet my collaborators and I have found evidence suggesting that people exhibit a robust, invariant tendency to believe that deep inside every individual there is a “good true self” calling them to behave in a morally virtuous manner. We propose that this belief arises from a general cognitive tendency known as psychological essentialism.

Perceptual Precursors of Moral Judgments

How does the human perceptual system contribute to our ability to make moral judgments about what we see? One story is that it is involved only in a boring way (e.g., extracting color and object identities). My collaborators and I suggest that, instead, the visual system directly extracts the kinds of high-level information on which moral judgment depends, such as role, harm, and causation. This contribution allows us to make rapid moral judgments, as when quickly browsing content on social media or in other digital contexts.

  • De Freitas, J., & Alvarez, G. A. (2018). Your visual system provides all the information you need to make moral judgments about generic visual events. Cognition, 178, 133–146.

  • Tarhan, L., De Freitas, J., & Konkle, T. Behavioral and neural representations en route to intuitive action understanding. Neuropsychologia.

  • De Freitas, J.*, Hafri, A.*, & Alvarez, G. A. Moral thin-slicing.


Curiosity-Driven Social Learning

This work explores the idea that complex behaviors, like animate attention, can be explained by the interaction of a world model (which predicts future states of the world) and an intrinsically motivated self model (which motivates the agent to spend time predicting parts of the world with certain features). In particular, we find that attending to aspects of the environment where one is still learning new information (which we term progress curiosity) is an especially powerful way to give rise to human-like behaviors such as animate attention, without the need for built-in modules or hand-written rules. This work uses a 3D, photorealistic environment that we created for measuring both artificial agents and human participants, the latter wearing mobile eye trackers or virtual reality goggles. We also run setups in which the stimuli are presented by real robots.
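As a rough, self-contained sketch of the progress-curiosity signal (my own simplification, not the actual model used in this work; the function name and window parameter are hypothetical), the intrinsic reward below is the recent drop in the world model's prediction error, so it is highest for parts of the environment that are learnable but not yet learned, and near zero for both static scenes and unlearnable noise.

```python
import numpy as np


def progress_curiosity_reward(error_history, window=10):
    """Learning progress: drop in mean world-model prediction error between
    the older and newer halves of a sliding window of recent errors."""
    if len(error_history) < 2 * window:
        return 0.0  # not enough history to estimate progress yet
    older = float(np.mean(error_history[-2 * window:-window]))
    newer = float(np.mean(error_history[-window:]))
    return max(0.0, older - newer)  # positive only while error is still falling


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy prediction-error histories for three "regions" of the environment:
    learnable = list(np.linspace(1.0, 0.2, 40))        # error steadily drops
    already_learned = [0.05] * 40                      # static; nothing left to learn
    unlearnable = list(rng.uniform(0.9, 1.1, 40))      # noise; error never drops
    for name, history in [("learnable", learnable),
                          ("already learned", already_learned),
                          ("unlearnable noise", unlearnable)]:
        print(f"{name}: {progress_curiosity_reward(history):.3f}")
```

In a full agent, attention would then be allocated toward whichever region currently yields the highest progress reward, which is what draws it toward learnable, animate things like other agents rather than toward static objects or pure noise.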


Other Papers