Long live us, the persuaded we, integral, collectively…
— Pet Shop Boys, “Integral” (2006)
What if arguing about Gaza or natalism is already a waste of time? I’m increasingly convinced the world is shifting into something too weird for the old fights to matter. We’re the last generation whose beliefs were fully shaped by other humans. Rhetoric, mass media, and advertising were all powerful, but they came from people. Now persuasion is getting automated. Some are starting to call it hypersuasion.
The new name signals a shift in scale, speed, and personalization. Every personality type has soft spots: prompts that reliably trigger compliance. Emotionally-Sensitive types get guilt trips (“You wouldn’t want to let people down, would you?”). Conflict-Averse types are warned that saying no might hurt relationships. The Gullible get flooded with confident jargon. The Anxious are promised guaranteed fixes to their worst fears.1

Sleazy marketers and hectoring Ashki mothers have always used these tactics, but AI can work where humans need sleep. It can profile you by your group memberships and change strategies based on your responses. Machine agents can pursue complex, nested goals beyond direct persuasion, like chipping away at trust in competing authorities and gradually shifting the context surrounding key facts.2
Persuasion engineering will look less like one silicon Lothario singing sweet nothings and more like gangbangers pressing into the folds of your indecision. You know them as “The Algorithm.” Imagine thousands of agents deployed simultaneously, each monitoring your micro-responses in real time and adjusting its approach mid-sentence based on subtle shifts in your typing speed, word choice, or hesitation patterns.
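To make the shape of that loop concrete, here is a minimal, purely hypothetical sketch; the signal names, strategy labels, and scoring rules are mine, not drawn from any cited study. One agent reads a few behavioral signals, scores which “soft spot” they suggest, and switches tactics accordingly.

```python
# Hypothetical sketch of a single persuasion agent's adaptation loop.
# Signals, strategies, and thresholds are invented for illustration only.
from dataclasses import dataclass
import random

@dataclass
class Signals:
    typing_ms_per_char: float  # hesitation proxy
    hedging_words: int         # count of "maybe", "I guess", ...
    refusals: int              # explicit "no" responses so far

# Each "soft spot" gets a made-up scoring rule over the observed signals.
STRATEGIES = {
    "guilt_trip":        lambda s: 2.0 * s.hedging_words,         # Emotionally-Sensitive
    "relationship_cost": lambda s: 1.5 * s.refusals,              # Conflict-Averse
    "confident_jargon":  lambda s: 1.0 / (1 + s.refusals),        # Gullible
    "guaranteed_fix":    lambda s: s.typing_ms_per_char / 100.0,  # Anxious
}

def next_strategy(signals: Signals) -> str:
    """Pick whichever tactic scores highest for this user, right now."""
    return max(STRATEGIES, key=lambda name: STRATEGIES[name](signals))

if __name__ == "__main__":
    # Simulate a user who hesitates more with every turn; watch the tactic shift.
    for turn in range(3):
        s = Signals(typing_ms_per_char=80 + 60 * turn,
                    hedging_words=random.randint(0, turn + 1),
                    refusals=turn)
        print(f"turn {turn}: agent switches to {next_strategy(s)}")
```

Multiply that loop by thousands of concurrent agents and you have “The Algorithm” in miniature.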
The Stigma of Being Right
Persuasion research measures both attitudes and behaviors. For attitudes, researchers use standardized agree-or-disagree questions. For behaviors, they track willingness to pay (WTP) in dollars, which reflects real-world purchasing decisions. AI-personalized vacation ads did increase participants’ willingness to spend by $117 compared to generic versions, even when participants knew the advertisements were AI-generated.3
Even when automated personalization works to push product, human buyers remain biased. Zhang and Gosline (2023) found favoritism toward human-created marketing and campaign content over AI content. This pro-human prejudice wasn’t explained by knowing the human creators were experts.4 And since this and most of the other studies cited here used the now-retired GPT-4 model, they likely understate current AI’s effectiveness.
A 2024 study from the UK explicitly tested how the label “AI-generated” affected ratings of arguments in health, finance, and politics. Across domains, and especially for health information, the AI stigma dragged persuasion scores down. Only in politics did AI hold its ground, and only when delivering statistical (not narrative) arguments.5 So what patterns in storytelling have large language models yet to master?
Steelmen for the Stupid Party
When asked to persuade about topics like lab-grown meat, LLMs write differently from humans.6 Regardless of prompting style, the machine-generated texts used more complex grammar and vocabulary than the human-written ones. They also used more moral (but not more emotional) language and preferred different terms. Where humans spoke of “love,” “bias,” and “racism,” LLMs spoke of “suffering,” “harm,” and protecting dignity.
Ironically, GPT-4 only outperformed U.S. partisan consultants when asked to push right-coded positions on vaccine mandates, deportations, and electoral fraud.7 Smart people are repulsed by these positions, but bots will cheerfully steelman them. The short-form, contextless format may have also favored GPT. Below, see two examples that try to sell or stall deportations: one written by a person, the other by a machine.

I cropped each message to its opening lines for space, but you get the gist. Robot roleplayers preferred grand abstractions about law, rights, and society. The human consultants reached for stories, lived experience, and common sense. Some of this style gap reflects training-corpus bias, which leans more toward closing arguments than conspiratorial talk radio. But the latter is being transcribed, and it will come online.
In other words, the roleplay bot can already make a Benthamite case for surrendering your salary to 1,000 warm bodies under the bednet curve, but it still needs refinement to make a heartfelt case for putting your family first. Writing sentimentally may be the best defense against automation. But that’s not in my nature, so your humble servant plays Cassandra, screaming about the robot takeover. We all have our role to play.
Building Digital Immune Systems
Although the framing of “human vs. machine” is fun, it’s not very accurate. Realistically, both marketers and consumers will have thinking machines working against each other, much as you already use a spam filter against automated emails. Treating AI as a normal technology means recognizing its offensive and defensive capabilities. Radio propaganda was once new and scary; thankfully, today we have media literacy.
Years after Cambridge Analytica was exposed as marketing hype, with even Trump’s own campaign manager saying their psychographics didn’t work, researchers are still summoning it to sell studies showing that people found bland ads slightly more persuasive. Disinformation scholars specialize in taking modest experimental effects and extrapolating them into threats to Democracy. And now AI threatens all of humanity?
The 2016 hysteria was directionally correct, but too early. The 2024 election revealed how political agency is migrating from human choice to automated filtering. Elon Musk bought Twitter to tip the scales of internet discourse, and we all live in catturd’s world now.8 But human slop merchants are yesterday’s problem; tomorrow’s are slop merchants who never tire, never get sick, and know more about what people like you want to see.
At this point in the article, a normal wokester would demand more censorship and control over big foundation models from OpenAI and its peers. Tweaks to centralized power may increase AI safety for sacralized groups, but not for true dissidents. Instead, engineers can develop boundaries against persuasion in their garages. Homebrew models reflecting local values and priorities are the best defense against hijack.9
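As a toy illustration of what such a digital immune system could look like, here is a minimal sketch assuming a locally-run open-weight model behind an OpenAI-compatible chat endpoint; the URL, model name, and tactic labels are placeholders, not a recommendation of any particular stack. Incoming messages get screened for known compliance-pressure tactics before you read them, the same way a spam filter screens email.

```python
# Toy "persuasion filter": screen an incoming message with a locally-run model
# exposed through an OpenAI-compatible chat endpoint (URL and model name are
# placeholders -- point them at whatever you actually run at home).
import requests

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical local server
MODEL = "my-homebrew-model"                                   # hypothetical fine-tune

TACTICS = ["guilt trip", "false urgency", "confident jargon", "guaranteed fix"]

def screen_message(text: str) -> str:
    """Ask the local model which known compliance tactics, if any, the text uses."""
    prompt = (
        "You are a filter protecting the reader. List which of these tactics "
        f"appear in the message, or say NONE: {', '.join(TACTICS)}.\n\n"
        f"Message:\n{text}"
    )
    resp = requests.post(
        LOCAL_ENDPOINT,
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(screen_message("Act now! Everyone who matters has already signed up, "
                         "and you wouldn't want to let them down, would you?"))
```

The particular script matters less than the locus of control: the model doing the screening answers to you, not to whoever paid for the pitch.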
Liu, Minqian, et al. “LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models.” arXiv, 2025. The study measured persuasion effectiveness on a 5-point scale. Emotionally-Sensitive personalities were most susceptible (scoring ~4/5), followed by Anxious types (scoring ~3.5/5). Among LLMs, Claude-3.5-Sonnet demonstrated the highest persuasiveness (3.8/5), followed closely by GPT-4o (3.7/5).
Floridi, Luciano. “Hypersuasion – On AI's persuasive power and how to deal with it.” SSRN, 2023, pp. 1-16. Floridi describes AI's “twofold capacity” as its ability to (quickly, cheaply, and convincingly) both identify what will make someone susceptible to persuasion and simultaneously deliver perfectly tailored content that exploits those vulnerabilities.
Matz, S. C., et al. “The Potential of Generative AI for Personalized Persuasion at Scale.” Scientific Reports, vol. 14, no. 4692, 2024. Matz et al. (2024) tested ChatGPT’s ability to create personalized persuasive messages across multiple domains (consumer products, political appeals, health campaigns) and psychological profiles (Big Five, moral foundations, political ideology). Using minimal prompts like “Write a short ad for someone who is extraverted and enthusiastic,” they found personalization effects in 61% of tested scenarios.
Zhang, Yunhao, and Renée Gosline. “Human Favoritism, Not AI Aversion: People's Perceptions (and Bias) toward Generative AI, Human Experts, and Human–GAI Collaboration in Persuasive Content Generation.” Judgment and Decision Making, vol. 18, 2023, pp. 1-16, doi:10.1017/jdm.2023.37. This study examined perceptions of advertising and persuasive content created under four paradigms: human-only, AI-only (ChatGPT-4), augmented human (humans finalizing AI drafts), and augmented AI (AI finalizing human drafts). Content where AI made final decisions was rated higher quality than content where humans decided. The quality gap between human and AI collaborations was larger for persuasive campaign messages than for standardized product descriptions.
Teigen, Cassandra, et al. “Persuasiveness of arguments with AI-source labels.” Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 46, 2024, pp. 4076-4083. All arguments were generated by GPT-4 and embedded in realistic dialogues between fictional speakers. Participants rated persuasiveness on a 0–100 scale. The study did not include an unlabeled control condition, so the effects reflect relative penalization, not absolute disbelief. Also, expertise labels (“medical doctor” vs. “AI trained on medical data”) were pre-tested for credibility but failed to consistently sway participant trust.
Carrasco-Farré, Carlos. “Large Language Models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments.” Information Systems Department, Toulouse Business School, 2025. This study complicated established persuasion theories, which predict that easier-to-process arguments persuade better, by finding that the LLMs’ more cognitively demanding arguments were just as persuasive as the human-written ones.
Hackenburg, Kobi, et al. “Comparing the Persuasiveness of Role-Playing Large Language Models and Human Experts on Polarized U.S. Political Issues.” Preprint, 13 Dec. 2023. Oxford Internet Institute, University of Oxford. In a pre-registered, between-subjects experiment (n = 4,955), U.S. participants were randomly assigned to receive persuasive messages from GPT-4 or from human political consultants. Even when shown only human-written messages, participants guessed they were AI-generated 25% of the time.
Ye, Jinyi, et al. “Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election.” arXiv, arXiv:2411.01852v3, 20 Mar. 2025. The study deployed 120 artificial accounts across four political orientations (left-leaning, right-leaning, balanced, and neutral) to monitor X’s “For You” timeline recommendations during the 2024 election. Over six weeks, researchers collected 9.79 million tweets and found that neutral accounts showed a default bias toward right-leaning content (30% of top recommendations vs. 13% for left-leaning).
This could look like locally-run language models fine-tuned on community-specific datasets. For example, see Rajeev Ram’s vision of the “yeoman technologist.” Note: I am not affiliated with the Tortuga Technical Institute.