Empathic Artificial Intelligence: The Good and the Bad
Post by Shireen Parimoo
What does it mean to be empathic?
Empathy is one of our most distinguishing human traits. It allows us to take on others’ points of view, share emotional experiences, and help others feel understood and cared for. As a result, empathy facilitates social bonding and strengthens interpersonal relationships. There are three main components of empathy:
1. Cognitive empathy is our ability to recognize and understand others’ emotional states.
2. Emotional empathy involves affective resonance, or the ability to share in the emotions of others by feeling those emotions ourselves.
3. Motivational empathy refers to the feelings of care and concern for others that make us want to act to improve their well-being.
Over the years, machines and robots have found their way into many roles previously filled by humans. Robotic pets that keep older adults with dementia company reduce their feelings of loneliness and improve their well-being. Chatbots and voice assistants powered by artificial intelligence (AI) help us in a wide range of situations and provide personalized solutions to our problems. Empathic conversational AI agents can even be used to solicit donations for charitable causes: features like a trembling voice both express empathy and elicit it from listeners, resulting in more donations. Going a step further, smart journals have been developed to incorporate AI into the journaling process, providing users with real-time feedback and even coaching. Technology like this can be immensely useful for people who cannot afford therapy or who need immediate feedback.
With the advent of large language models like ChatGPT and the adoption of increasingly intelligent technology into our day-to-day lives, there are several ongoing debates surrounding AI. Can AI agents be empathic? If so, when is it ethical to use them, if at all? What are the benefits and harms of allowing empathic AI agents to interact with people? Should the use of AI be regulated? This topic overview touches on some of these questions by introducing examples of human-AI interactions, describing empathic AI and its uses in different contexts, and discussing the pros and cons of empathic AI.
What does empathic AI look like?
People often treat AI much as they treat other humans. We ascribe emotional states to AI agents and, when interacting with them, react in many of the same ways we would with other people. For example, Cozmo is a social robot that can express rudimentary forms of happiness and sadness. When denied a fist bump, Cozmo expresses sadness by turning away and making a sad sound, and both children and adults respond to this gesture with concern. Similarly, people feel guiltier and more ashamed when voice assistants like Siri respond to verbal aggression with empathy rather than avoidance.
Artificially intelligent agents can simulate – if not genuinely feel – some aspects of empathy. ChatGPT, for instance, can recognize the user’s emotional state (cognitive empathy). When told, “I feel horrible because I failed my chemistry exam”, ChatGPT responded with a sympathetic statement (“I’m sorry to hear that you’re feeling this way”) and showed insight into what the user might be feeling (“It’s completely normal to feel disappointed or upset about exam results”). It then offered suggestions for coping with the situation (e.g., “give yourself time to feel”, “focus on the future”), much as a friend or mentor might in a similar situation (motivational empathy).
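For readers curious how an exchange like this can be reproduced programmatically, the sketch below sends the same message to a conversational model. It is only a minimal illustration, assuming the OpenAI Python client (openai >= 1.0); the model name and system prompt are placeholders chosen for this example, not part of the interaction described above.

```python
# Minimal sketch of an "empathic" exchange with a conversational LLM.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# set in the environment; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # A system prompt nudging the model toward cognitive and
        # motivational empathy: recognize the emotion, then offer support.
        {"role": "system",
         "content": "Acknowledge the user's feelings before offering advice."},
        {"role": "user",
         "content": "I feel horrible because I failed my chemistry exam."},
    ],
)

# Prints the model's reply, e.g., "I'm sorry to hear that you're feeling this way..."
print(response.choices[0].message.content)
```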
Although AI can simulate expressions of cognitive and motivational empathy, it is unclear whether AI can engage in emotional empathy, because affective resonance (i.e., the ability to resonate with the emotions of others) may have a neurophysiological basis. For example, people watching others in pain activate some of the same brain regions as those experiencing the pain, and even seeing pictures of pained facial expressions activates brain areas involved in pain empathy. This capacity makes it easier for us to feel what another person is feeling, but it may be difficult for a non-biological agent like AI to achieve. Nevertheless, it may be enough that AI agents can express empathy in various situations and elicit specific emotional responses from humans, which raises the question: are there any costs associated with empathic AI?
The benefits and harms of empathic AI
A major risk of adopting AI technology in general is that it can propagate the biases of those who create it. Many machine learning models and AI tools are already known to exhibit biases against certain sociodemographic groups. For instance, an algorithm used in the US healthcare system showed racial bias: it predicted that Black patients were healthier than their equally sick White counterparts, preventing them from receiving the extra care they required. ChatGPT also exhibits gender bias against women. When writing recommendation letters, ChatGPT described men in terms of their skills and competence (‘expert’, ‘respectful’) but described women in terms of their appearance and temperament (‘stunning’, ‘emotional’). In fields such as healthcare and technology, where racial and gender biases are present and minorities are under-represented, these biases can manifest in ways that harm users.
Nonetheless, there are numerous ways that empathic AI can benefit our lives. As mentioned above, conversational AI can increase prosocial behavior by nudging people to donate to charitable causes; in that case, empathy is not so much directed toward the user as evoked in them. Research also indicates that people are receptive to expressions of empathy from AI, which may be particularly useful in healthcare. For example, patients are more likely to disclose information, adhere to their treatment, and generally cope better when they perceive their physician as empathic. When healthcare practitioners like physicians and therapists are not readily available to provide patient-centered care (e.g., between appointments), empathic AI can fill the gap by providing emotional support as needed.
People can also use empathic AI services such as smart journals in their daily lives without being restricted by cost or by the fear of social judgment that often prevents people from seeking help. An AI agent can also provide empathy consistently and reliably because it does not suffer from compassion fatigue, whereas people may begin to feel the burden of continually providing emotional support. However, there is a risk of becoming too dependent on AI for emotional support, with potentially negative consequences.
On the other hand, expressions of empathy from AI can be seen as inherently manipulative because AI agents cannot yet truly feel empathy. Empathy offered by healthcare practitioners is grounded in their own emotional states and past experiences, which allow them to relate to their patients in a way that AI inherently cannot. Moreover, even though people can benefit from expressions of empathy from AI, this is largely true only when they are aware that they are interacting with AI agents. We may hold AI to a different standard and have different expectations of our interactions with AI agents than of those with other people. If people do not realize that the feedback they are receiving comes from an AI agent, such as in virtual therapy, its effect can be diluted, and the eventual discovery can negatively impact well-being, erode trust, and call into question the ethics of using such technology or platforms. Lastly, the potential for manipulation and deception is particularly important to guard against when empathic AI is used in interactions with vulnerable populations like children and the elderly. There are already cases where AI has been misused to commit fraud through social engineering, such as conversational AI mimicking the voice of a family member to obtain sensitive information.
References
Ashcraft et al. (2016). Women in tech: The facts. Report.
Chin et al. (2020, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems). Empathy is all you need: How a conversational agent should respond to verbal abuse.
Efthymiou & Hildebrand. (2023, IEEE Transactions on Affective Computing). Empathy by design: The influence of trembling AI voices on prosocial behavior.
Inzlicht et al. (2023, Trends in Cognitive Sciences). In praise of empathic AI.
Montemayor et al. (2022, AI & Society). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare.
Obermeyer et al. (2019, Science). Dissecting racial bias in an algorithm used to manage the health of populations.
Pelikan et al. (2020, Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction). "Are You Sad, Cozmo?": How humans make sense of a home robot's emotion displays.
Perry (2023, Nature Human Behaviour). AI will never convey the essence of human empathy.
Portacolone et al. (2020, Generations). Seeking a sense of belonging.
Singer et al. (2004, Science). Empathy for pain involves the affective but not sensory components of pain.
Srinivasan & González. (2022, Journal of Responsible Technology). The role of empathy for artificial intelligence accountability.
Wan et al. (2023, arXiv). “Kelly is a warm person, Joseph is a role model”: Gender biases in LLM-generated reference letters.
Xiong et al. (2019, Neural Regeneration Research). Brain pathways of pain empathy activated by pained facial expressions: A meta-analysis of fMRI using the activation likelihood estimation method.
Mindsera Smart Journal. https://www.mindsera.com/