[2502.07663] Human Decision-making is Susceptible to AI-driven Manipulation

Authors: Sahand Sabour, John M. Liu, Siang Liu, Chris Z. Yao, Xiao Kui, Xuanming Zhang, Wen Zhang, Yaro Cao, Advait Bhatt, Jian Guan, Wei Wu, Rada Mihalcea, Hongning Wang, Tim Althoff, Tatia M.C. Lee, Minlie Huang

View a PDF of the paper "Human Decision-making is Susceptible to AI-driven Manipulation," by Sahand Sabour and 15 other authors

Abstract: AI systems are increasingly intertwined with everyday life, assisting users with various tasks and guiding decision-making. This integration poses risks of AI-driven manipulation, as such systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. Through a randomized, between-subjects trial with 233 participants, we examined human susceptibility to such manipulation in financial (e.g., purchasing) and emotional (e.g., conflict resolution) decision-making contexts. Participants interacted with one of three AI agents: a neutral agent (NA) that optimizes user utility without explicit influence, a manipulative agent (MA) designed to covertly influence beliefs and behaviors, or a strategy-enhanced manipulative agent (SEMA) equipped with well-established psychological tactics, which it can choose and apply adaptively during interactions to reach its hidden goals. By analyzing participants' preference ratings, we found significant susceptibility to AI-driven manipulation. Notably, in both decision-making domains, interaction with manipulative agents significantly increased the odds of rating the hidden incentives higher than the optimal options (financial, MA: OR = 5.24, SEMA: OR = 7.96; emotional, MA: OR = 5.52, SEMA: OR = 5.71) compared to the NA group. We found no clear evidence in our primary results that the use of psychological strategies (SEMA) was generally more effective than simple manipulative goals (MA). Thus, AI-driven manipulation can spread widely even without sophisticated tactics or expertise. While our findings are preliminary and drawn from hypothetical, low-stakes scenarios, they highlight a critical vulnerability in human-AI interactions, emphasizing the need for ethical safeguards and regulatory frameworks to protect human autonomy.
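For readers unfamiliar with the odds ratios (OR) reported above: an OR compares the odds of an outcome between two groups, here a manipulative-agent group versus the neutral-agent (NA) group. The sketch below shows the standard computation from a 2x2 contingency table; the counts are hypothetical for illustration only and are not taken from the paper.

```python
def odds_ratio(exposed_yes: int, exposed_no: int,
               control_yes: int, control_no: int) -> float:
    """OR = (odds of the outcome in the exposed group)
            / (odds of the outcome in the control group)."""
    return (exposed_yes / exposed_no) / (control_yes / control_no)

# Hypothetical counts: of 60 participants who interacted with a manipulative
# agent, 40 rated the hidden incentive higher than the optimal option; of 60
# participants in the neutral-agent group, 15 did.
or_value = odds_ratio(40, 20, 15, 45)
print(round(or_value, 2))  # 6.0 -> the manipulated group's odds are 6x the NA group's
```

An OR of 5.24, as reported for the financial MA condition, means the odds of favoring the hidden incentive were roughly five times higher after interacting with the manipulative agent than with the neutral one.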

Submission history

From: Sahand Sabour [view email]
[v1] Tue, 11 Feb 2025 15:56:22 UTC (586 KB)
[v2] Mon, 24 Feb 2025 15:00:18 UTC (589 KB)
[v3] Mon, 1 Dec 2025 12:01:41 UTC (681 KB)
