it would be corrupted with MAGAts.
Can artificial intelligence technology help fix difficult problems? It’s a generic question that you see more of these days, and the answer is typically no. But a new study published in the journal Science is hopeful that large language models may actually be a useful tool in changing the minds of conspiracy theorists who believe incredibly stupid things.
If you’ve ever talked with someone who believes ridiculous conspiracy theories—from a belief that the Earth is flat to the idea humans never actually landed on the Moon—you know they can be pretty set in their ways. They’re often resistant to changing their minds, getting more and more mentally dug in as they insist something about the world is actually explained by some very implausible theory.
A new paper, titled “Durably Reducing Conspiracy Beliefs Through Dialogues With AI,” tested AI’s ability to communicate with people who believed conspiracy theories and convince them to reconsider their worldview on a particular topic.
The study involved two experiments with 2,190 Americans who used their own words to describe a conspiracy theory they earnestly believed. The participants were encouraged to explain the evidence they thought supported their theory, and then they engaged in a conversation with a bot built on the large language model GPT-4 Turbo, which responded to the evidence the human participants provided. In the control condition, people talked with the AI chatbot about a topic unrelated to conspiracy theories.
gizmodo.com

New Study Suggests AI Could Convince Conspiracy Theorists They're Wrong
Could a Debunkbot change your mind?
