Instilling (Dis-)Trust in AI Products: Recommendations for the Design of Data Security and Data Privacy Labels


Abstract

Rarely are we fully informed about the data security and data privacy (DSDP) of the artificial intelligence (AI) products and services we use. Providing DSDP information on AI products in an easily accessible and quick-to-process format could, however, help instill appropriate levels of (dis-)trust in (potential) users. Here, participants were presented with hypothetical AI products paired with different labels (graphical vs. text-based) conveying low to high DSDP levels. As expected, trust increased and anxiety decreased when an AI product reached a higher DSDP level; that is, labels effectively communicated DSDP differences. Text-based labels were associated with higher trust and lower anxiety than graphical labels. Interestingly, when no DSDP information was provided via a label, participants attributed an intermediate level of (dis-)trust to AI products. These findings illustrate the importance and potential of introducing easy-to-process labels to convey information about AI products, for instance, DSDP information.