Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots

Publication: Contributions to journals › Journal articles › Research › Peer-reviewed

Standard

Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots. / Seitz, Lennart; Bekmeier-Feuerhahn, Sigrid; Gohil, Krutika.

In: International Journal of Human-Computer Studies, Vol. 165, 102848, 01.09.2022.



BibTeX

@article{044076f7dad2421a9a165fef1271ce43,
title = "Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots",
abstract = "Technological advancements in the virtual assistants' domain pave the way to implement complex autonomous agents like diagnostic chatbots. Drawing on the assumption that chatbots are perceived as both technological tools and social actors, we aim to create a deep understanding of trust-building processes towards diagnostic chatbots compared to trust in medical professionals. We conducted a laboratory experiment in which participants interacted either with a diagnostic chatbot only or with an additional telemedicine professional before we interviewed them primarily on trust-building factors. We identified numerous software-related, user-related, and environment-related factors and derived a model of the initial trust-building process. The results support our assumption that it is equally essential to consider dimensions of physician and technology trust. One significant finding is that trust in a chatbot arises cognitively, while trusting a human agent is affect-based. We argue that the lack of affect-based trust inhibits the willingness to rely on diagnostic chatbots and facilitates the user's desire to keep control. Considering dimensions from doctor-patient trust, we found evidence that a chatbot's communication competencies are more important than empathic reactions as the latter may evoke incredibility feelings. To verify our findings, we applied the derived code system in a larger online survey.",
keywords = "Business psychology, trust, chatbot, conversational agent, mhealth, anthroporphism, telemedicine, Digital media",
author = "Lennart Seitz and Sigrid Bekmeier-Feuerhahn and Krutika Gohil",
note = "Publisher Copyright: {\textcopyright} 2022",
year = "2022",
month = sep,
day = "1",
doi = "10.1016/j.ijhcs.2022.102848",
language = "English",
volume = "165",
journal = "International Journal of Human Computer Studies",
issn = "1071-5819",
publisher = "Elsevier Ltd",

}

RIS

TY - JOUR

T1 - Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots

AU - Seitz, Lennart

AU - Bekmeier-Feuerhahn, Sigrid

AU - Gohil, Krutika

N1 - Publisher Copyright: © 2022

PY - 2022/9/1

Y1 - 2022/9/1

N2 - Technological advancements in the domain of virtual assistants pave the way for implementing complex autonomous agents such as diagnostic chatbots. Drawing on the assumption that chatbots are perceived as both technological tools and social actors, we aim to develop a deep understanding of trust-building processes toward diagnostic chatbots compared to trust in medical professionals. We conducted a laboratory experiment in which participants interacted either with a diagnostic chatbot only or with an additional telemedicine professional before we interviewed them, primarily about trust-building factors. We identified numerous software-related, user-related, and environment-related factors and derived a model of the initial trust-building process. The results support our assumption that it is equally essential to consider dimensions of both physician trust and technology trust. One significant finding is that trust in a chatbot arises cognitively, while trust in a human agent is affect-based. We argue that the lack of affect-based trust inhibits the willingness to rely on diagnostic chatbots and strengthens the user's desire to remain in control. Considering dimensions from doctor-patient trust, we found evidence that a chatbot's communication competencies are more important than empathic reactions, as the latter may evoke feelings of incredibility. To verify our findings, we applied the derived code system in a larger online survey.

AB - Technological advancements in the domain of virtual assistants pave the way for implementing complex autonomous agents such as diagnostic chatbots. Drawing on the assumption that chatbots are perceived as both technological tools and social actors, we aim to develop a deep understanding of trust-building processes toward diagnostic chatbots compared to trust in medical professionals. We conducted a laboratory experiment in which participants interacted either with a diagnostic chatbot only or with an additional telemedicine professional before we interviewed them, primarily about trust-building factors. We identified numerous software-related, user-related, and environment-related factors and derived a model of the initial trust-building process. The results support our assumption that it is equally essential to consider dimensions of both physician trust and technology trust. One significant finding is that trust in a chatbot arises cognitively, while trust in a human agent is affect-based. We argue that the lack of affect-based trust inhibits the willingness to rely on diagnostic chatbots and strengthens the user's desire to remain in control. Considering dimensions from doctor-patient trust, we found evidence that a chatbot's communication competencies are more important than empathic reactions, as the latter may evoke feelings of incredibility. To verify our findings, we applied the derived code system in a larger online survey.

KW - Business psychology

KW - trust

KW - chatbot

KW - conversational agent

KW - mhealth

KW - anthropomorphism

KW - telemedicine

KW - Digital media

UR - http://www.scopus.com/inward/record.url?scp=85129540627&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/f03b4a52-88ff-311b-bc4e-a4297809fc3e/

U2 - 10.1016/j.ijhcs.2022.102848

DO - 10.1016/j.ijhcs.2022.102848

M3 - Journal articles

VL - 165

JO - International Journal of Human-Computer Studies

JF - International Journal of Human-Computer Studies

SN - 1071-5819

M1 - 102848

ER -

DOI: https://doi.org/10.1016/j.ijhcs.2022.102848