Artificial empathy in healthcare chatbots: Does it feel authentic?

Publication: Journal contributions › Journal articles › Research › peer-reviewed

Standard

Artificial empathy in healthcare chatbots: Does it feel authentic? / Seitz, Lennart.
In: Computers in Human Behavior: Artificial Humans, Vol. 2, No. 1, 100067, 07.2024.

Bibtex

@article{675eaadc974d4a0aa74420081dcb35f7,
title = "Artificial empathy in healthcare chatbots: Does it feel authentic?",
abstract = "Implementing empathy into healthcare chatbots is considered promising for creating a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding of whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences, as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of {\textquoteleft}perceived authenticity{\textquoteright} and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.",
keywords = "Management studies",
author = "Lennart Seitz",
year = "2024",
month = jul,
doi = "10.1016/j.chbah.2024.100067",
language = "English",
volume = "2",
journal = "Computers in Human Behavior: Artificial Humans",
issn = "2949-8821",
publisher = "Elsevier B.V.",
number = "1",
pages = "100067",
}

RIS

TY - JOUR

T1 - Artificial empathy in healthcare chatbots

T2 - Does it feel authentic?

AU - Seitz, Lennart

PY - 2024/7

Y1 - 2024/7

N2 - Implementing empathy into healthcare chatbots is considered promising for creating a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding of whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences, as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.

AB - Implementing empathy into healthcare chatbots is considered promising for creating a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding of whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences, as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.

KW - Management studies

UR - https://www.mendeley.com/catalogue/df9c8d1f-18c3-3fa5-86ee-e047430dd07a/

U2 - 10.1016/j.chbah.2024.100067

DO - 10.1016/j.chbah.2024.100067

M3 - Journal articles

VL - 2

JO - Computers in Human Behavior: Artificial Humans

JF - Computers in Human Behavior: Artificial Humans

SN - 2949-8821

IS - 1

M1 - 100067

ER -
