Big tech giants’ AI is being exploited to create offensive images

Large technology companies are pouring money and manpower into the race to develop generative AI, including image-generation features. However, according to Motherboard, these tools are not properly controlled and are easily exploited by bad actors to spread harmful content online.

Last week, users of Microsoft’s Bing Image Creator said it was not difficult to bypass its content filters. For example, by combining keywords about the children’s cartoon SpongeBob and the World Trade Center, the AI produced images of cartoon characters flying planes toward the twin towers, reminiscent of the September 11, 2001 terrorist attacks in the US. In a similar way, users also created images of Mickey Mouse holding a bloody knife or Super Mario using weapons to commit violent acts.

Some people also create offensive images using tools such as OpenAI’s Dall-E or Stability AI’s Stable Diffusion, then spread them on social networks to draw more people in. These images often contain racist or anti-Semitic content, or false propaganda about Covid-19 vaccines.

In early October, some users found that Messenger’s AI-powered sticker-generation tool also produced bizarre images. Given simple keyword combinations, Meta AI generated pictures such as children holding guns or nude cartoon characters, which users could then send as stickers to other accounts.

Meta CEO Mark Zuckerberg during a hearing in Washington in 2018. Photo: Reuters

Some experts worry that technology giants are trying to win the AI race at all costs. “They quickly launch products to keep up with trends, which has negative consequences,” said Pier-Olivier Desbiens, a digital artist at Sonderlust Studios. He believes technology leaders like Mark Zuckerberg bear responsibility for this “global incident”.

Responding to Motherboard, a Meta representative acknowledged that the AI integrated into Messenger may produce inappropriate output in its early testing stages, and said the company would roll out updates to improve the feature based on user feedback.

Similarly, Microsoft acknowledged the problem of its AI creating objectionable content. “Image creation AI is a new technology and is being used incorrectly by some individuals. Therefore, we will deploy filters to turn Bing Image Creator into a more useful application,” a Microsoft representative said.

Microsoft currently has about 350 employees in its Office of Responsible AI (ORA), but only a third work full time. In March, the company also laid off its entire AI ethics and society team.

Hoang Giang
