Humans Are More Likely to Believe Disinformation Generated By AI - Decrypt
Even before GPT-4's launch, a new study finds disinformation created with GPT-3 was harder to detect than human-made falsehoods.
- A new report suggests that disinformation generated by OpenAI's GPT-3 language model can be more convincing, and harder to detect, than disinformation written by humans.
- The study surveyed participants to determine whether they could distinguish AI-generated tweets from human-written ones.
- Participants' average score was 0.5, indicating that, on average, they could not reliably tell the two apart.
- The accuracy of the information in the tweets did not affect participants' ability to identify AI-generated content.
- The report highlights the potential impact of advanced AI text generators on the dissemination of information and calls for monitoring and regulation.
The article highlights a concerning finding: AI text generators such as GPT-3 can produce disinformation that people struggle to detect. It underscores the study's call for monitoring and regulation to address the potential negative impact of advanced AI text generators. Overall, the sentiment is negative, given the implications of AI's role in spreading false information.