Artificial empathy in healthcare chatbots: Does it feel authentic?

Research output: Journal contributions › Journal articles › Research › peer-review

Implementing empathy in healthcare chatbots is considered a promising way to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leaving an insufficient understanding of whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences, as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.
Original language: English
Article number: 100067
Journal: Computers in Human Behavior: Artificial Humans
Volume: 2
Issue number: 1
Number of pages: 17
ISSN: 2949-8821
Publication status: Published - 07.2024
