Rethinking Trust Formation in AI Diagnostics: Contrasting Human-like and Machine-like Perceptions in User Responses

Abstract

AI-driven medical chatbots allow patients to seek consultations without the constraints of time and space. Understanding how patients' perceptions of the agent (AI versus human physician) influence the trust-building process is crucial for the broader adoption of this technology. This study explores how users with different perceptions of the agent (machine-like versus human-like) build trust in an AI medical chatbot, and examines the moderating role of privacy concern on the relationship between trust in technology and trust in AI. PLS-SEM, t tests, and multigroup analysis were applied to data collected from 1547 participants, both online and offline. Model comparisons showed that when the AI was perceived as a human-like agent, internal factors (e.g., propensity to trust, perceived health status) had no significant effect on trust. When the AI was viewed as a machine-like agent, however, both internal factors (propensity to trust, perceived health status) and external factors (perceived usefulness, ease of use, perceived risk, and brand reputation) significantly influenced trust in technology. Under both perceptions, trust in technology remained a strong predictor of trust in AI, and privacy concern significantly moderated this relationship in both models. These results challenge the conventional belief that human-like AI agents elicit more trust. Instead, users who perceived the AI agent as a machine exhibited a more rational trust-building mechanism, with trust shaped by internal factors such as perceived health status. The findings offer a novel perspective for AI healthcare product design and lay a foundation for more personalized diagnostic systems.
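The moderation effect described above (privacy concern moderating the path from trust in technology to trust in AI) can be illustrated with a minimal regression sketch. This is a hypothetical example on simulated data, not the study's actual PLS-SEM analysis; the variable names, coefficients, and sample values are assumptions chosen only to mirror the abstract's design (N = 1547, an interaction between predictor and moderator).

```python
import numpy as np

# Simulated data standing in for survey measures (all names illustrative).
rng = np.random.default_rng(0)
n = 1547                                  # matches the study's sample size
trust_tech = rng.normal(size=n)           # trust in technology (predictor)
privacy = rng.normal(size=n)              # privacy concern (moderator)

# Assume higher privacy concern weakens the trust_tech -> trust_ai link
# (negative interaction), plus random noise.
trust_ai = (0.6 * trust_tech + 0.1 * privacy
            - 0.2 * trust_tech * privacy
            + rng.normal(scale=0.5, size=n))

# Moderated regression: intercept, both main effects, interaction term.
X = np.column_stack([np.ones(n), trust_tech, privacy, trust_tech * privacy])
beta, *_ = np.linalg.lstsq(X, trust_ai, rcond=None)
print(beta)  # a significantly nonzero beta[3] indicates moderation
```

A nonzero coefficient on the interaction term is the regression analogue of the moderation the study tested; the actual work estimated such paths within a structural equation model rather than ordinary least squares.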
