The threat of AI 'super-spreading' false information


According to NewsGuard, a New York-based organization that specializes in tracking false information, AI is taking the production and spread of fake news to a new level: it can create false content about elections, wars, and natural disasters quickly and in a form that is difficult to distinguish from real information. Since May, websites containing false articles generated by AI have increased by more than 1,000%, from 49 sites to more than 600.

A user operates on ChatGPT's web interface. Photo: Reuters

In the past, spam or misleading content was usually created by individuals or organizations for specific purposes. Now, by simply entering a few prompts into generative AI models like ChatGPT, anyone can produce content that the average reader will struggle to distinguish from real news.

NewsGuard cited the example of AI fabricating a story that the doctor of Israeli Prime Minister Benjamin Netanyahu had died, leaving a note "suggesting Netanyahu's involvement". The doctor is a fictional character, but the story was reported by an Iranian television program, then republished by several media sites and spread on TikTok, Reddit and Instagram.

“Some websites use artificial intelligence to generate hundreds, even thousands of articles every day. This is why we call AI the next super-spreader of false information,” Jack Brewster, head of NewsGuard’s research team, told the Washington Post.

According to Brewster, websites posting AI-generated content use generic names, like iBusiness Day or Ireland Top News. To appear more persuasive, they often intersperse fake news with real news taken from reputable sources.

“This interleaving makes fraudulent stories more believable,” said Jeffrey Blevins, a professor of journalism who studies misinformation at the University of Cincinnati (USA). “Some people are not knowledgeable enough to recognize false or misleading information.”

Blevins said there are many motivations for creating this type of website, such as attracting views and increasing advertising revenue, but some pages are also created to spread election disinformation or attack political opponents. “This is a big concern,” he said. “The danger lies in the scope and scale of AI, especially as these systems become more capable, with content-generation algorithms whose output is complex and hard to distinguish from the real thing. It is an information war on a scale we have never seen before.”

Users can sometimes detect AI-generated content by its writing style, especially “odd grammar” or errors in sentence structure. The most effective defense, however, is improving media literacy and building the habit of verifying information against official sources instead of trusting a single one.

In April, law professor Jonathan Turley was falsely named by ChatGPT on a list of scholars accused of harassment. It started when law professor Eugene Volokh of the University of California asked the chatbot about sexual harassment by lecturers. The chatbot listed five cases, but when Volokh checked, three of them were fabricated, citing bogus articles from the Washington Post, Miami Herald and Los Angeles Times. Professor Turley’s name appeared on that list.

Earlier this year, Brian Hood, mayor of Hepburn Shire in the Australian state of Victoria, also threatened to sue OpenAI after ChatGPT falsely reported that he had been in prison due to a bribery conviction.

According to the WSJ, AI chatbots currently work by absorbing large amounts of content from the Internet, drawn from sources like Wikipedia and Reddit, to produce seemingly trustworthy responses to nearly any question. They are also trained to recognize sentence patterns in order to produce complete sentences and passages. However, current language models lack a sufficiently reliable mechanism to verify the content they produce.

Bao Lam
