Humans Are More Likely to Believe Disinformation Generated By AI - Decrypt

Decrypt

29 Jun 2023 4:40 AM

Even before GPT-4's launch, a new study found that disinformation created with GPT-3 was harder to detect than human-made falsehoods.

  • A new report suggests that disinformation generated by OpenAI's GPT-3 language model is more convincing than falsehoods written by humans.
  • The study surveyed participants to determine if they could distinguish between AI-generated and human tweets.
  • The average detection score was 0.5, meaning participants could not reliably tell AI-generated tweets from human-written ones.
  • The accuracy of the information in the tweets did not affect participants' ability to identify AI-generated content.
  • The report highlights the potential impact of advanced AI text generators on the dissemination of information and calls for monitoring and regulation.

The article reports a concerning finding: text generators like GPT-3 can spread disinformation effectively, and readers struggle to detect it. It emphasizes the need for monitoring and regulation to address the potential harms of advanced AI text generators. Overall, the sentiment is negative, given the implications of AI's role in spreading false information.

