Good-Enough Privacy in Platform Governance: Evidence from Fujian and Busan–Gyeongnam


Abstract

Artificial intelligence (AI) platforms in East Asia often elicit privacy concerns yet sustain user participation. This study interprets the pattern as bounded compliance—a satisficing equilibrium in which engagement persists once users perceive that platform governance meets minimum thresholds of transparency and reliability. A symmetric adult survey in Fujian, China (N = 185) and Busan–Gyeongnam, Korea (N = 187) examines how accountability visibility and privacy concern jointly shape platform trust and use. Heat-map diagnostics and logit marginal effects show consistently high willingness (≥0.70) across conditions, with stronger sensitivity to accountability in Korea and stronger continuity assurance in China. Under high concern, willingness converges to a “good-enough” zone where participation endures despite discomfort. The findings highlight governance thresholds as practical levers for trustworthy AI: enhancing feedback visibility (e.g., case tracking, resolution proofs) and maintaining institutional continuity (e.g., O&M capacity, incident-response coverage) can sustain public confidence in AI-enabled public-service platforms.