
State media control influences large language models


Palmer, A. & Spirling, A. Large language models can argue in convincing ways about politics, but humans dislike AI authors: implications for governance. Polit. Sci. 75, 281–291 (2023).

Bai, H. et al. LLM-generated messages can persuade humans on policy issues. Nat. Commun. 16, 6037 (2025).

Hackenburg, K. & Margetts, H. Evaluating the persuasive influence of political microtargeting with large language models. Proc. Natl Acad. Sci. USA 121, e2403116121 (2024).

Salvi, F. et al. On the conversational persuasiveness of GPT-4. Nat. Hum. Behav. 9, 1645–1653 (2025).

Costello, T. H., Pennycook, G. & Rand, D. G. Durably reducing conspiracy beliefs through dialogues with AI. Science 385, eadq1814 (2024).

Carrasco-Farré, C. Large language models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments. Preprint at https://arxiv.org/abs/2404.09329 (2024).

Tessler, M. H. et al. AI can help humans find common ground in democratic deliberation. Science 386, eadq2852 (2024).

Goldstein, J. A. et al. How persuasive is AI-generated propaganda? PNAS Nexus 3, pgae034 (2024).

Fisher, J. et al. Biased LLMs can influence political decision-making. In Proc. 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (eds Che, W. et al.) 6559–6607 (Association for Computational Linguistics, 2025).

Saenger, T. R. et al. AutoPersuade: a framework for evaluating and explaining persuasive arguments. In Proc. 2024 Conference on Empirical Methods in Natural Language Processing (eds Al-Onaizan, Y., Bansal, M. & Chen, Y.-N.) 16325–16342 (Association for Computational Linguistics, 2024).
