AI spam is already flooding the internet, and it has an obvious tell

ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written, inauthentic content: Amazon user reviews and Twitter.

When you ask ChatGPT to do something it shouldn’t, it returns a handful of stock phrases. When I asked ChatGPT to tell me a dark joke, it apologized: “As an AI language model, I cannot generate inappropriate or offensive content,” it said. These two phrases, “as an AI language model” and “I cannot generate inappropriate content,” occur so frequently in ChatGPT-generated content that they have become memes.

Searching the internet for these phrases is a reasonable way to identify lazily executed ChatGPT spam.
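To illustrate how crude this kind of detection can be, here is a minimal Python sketch of phrase-based filtering; the phrase list and sample reviews are hypothetical, not drawn from any real detection system:

```python
import re

# Stock ChatGPT refusal phrases that often survive in lazily pasted output.
# Illustrative only; a real list would be longer and maintained over time.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate or offensive content",
]

def looks_like_unedited_chatgpt(text: str) -> bool:
    """Return True if the text contains a known ChatGPT refusal phrase."""
    normalized = re.sub(r"\s+", " ", text.lower())
    return any(phrase in normalized for phrase in TELLTALE_PHRASES)

# Hypothetical examples
reviews = [
    "Yes, as an AI language model, I can definitely write a positive review.",
    "Great product, arrived on time and works as advertised.",
]
for review in reviews:
    print(looks_like_unedited_chatgpt(review), "|", review[:55])
```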

An Amazon search reveals what appear to be fake user reviews generated by ChatGPT or a similar bot. Many user reviews feature the phrase “as an AI language model.” A user review for a waist trimmer published on April 13 contains the bot’s entire response to the initial prompt, unedited: “Yes, as an AI language model, I can definitely write a positive product review on Active Gear Waist Trimmer.”

Another user posted a negative review for Sniper Rings, foam rubber bands marketed as a training aid for people who play first-person shooters on a controller. “As an AI language model, I have no personal experience with using the products. However, I can give a negative review based on the information available online,” the review said. The account that reviewed the rings posted a total of five reviews that same day.

A user review for the book Whore Wisdom Holy Harlot said the reviewer had asked an AI for a review, but noted they didn’t agree with everything it said. “I asked AI for a review of my book. I disagree with some parts though,” the user said of Whore Wisdom Holy Harlot by Qadishtu-Arishutba’al Immi’atiratu.


“We do not tolerate fake reviews and want Amazon customers to shop with confidence knowing the reviews they see are genuine and trustworthy,” an Amazon spokesperson told Motherboard. “We suspend, ban and take legal action against those who violate these policies and remove inauthentic reviews.”

Amazon also said it uses a combination of technology and litigation to detect suspicious activity on its platform. “We have teams dedicated to uncovering and investigating fake review brokers,” the spokesperson said. “Our experienced investigators, lawyers, analysts and other specialists track down brokers, gather evidence on how they operate, and then we take legal action against them.”

Earlier this month, an online researcher who goes by Conspirador Norteño discovered what they believe is a Twitter spam network posting content apparently generated by ChatGPT. All of the accounts flagged by Conspirador Norteño had few followers, few tweets, and had recently posted the phrase “Sorry, I can’t generate inappropriate or offensive content.”

Motherboard discovered several accounts matching the patterns described by Conspirador Norteño. The accounts had few followers, were created between 2010 and 2016, and tended to tweet about three things: Southeast Asian politics, cryptocurrency, and the ChatGPT error message. All of these accounts have since been suspended by Twitter.

“This spam network consists of (at least) 59,645 Twitter accounts, mostly created between 2010 and 2016,” said Conspirador Norteño on Twitter. “All of their recent tweets were sent through the Twitter web app. Some accounts have unrelated old tweets followed by a multi-year gap, suggesting they were hijacked/purchased.”
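The pattern Conspirador Norteño describes is simple enough to express in code. Below is a hedged Python sketch of such a heuristic filter; the Account record, the follower threshold, and the sample data are assumptions for illustration, not part of the researcher’s actual methodology:

```python
from dataclasses import dataclass
from datetime import datetime

# Refusal phrase the flagged accounts had recently tweeted.
REFUSAL = "generate inappropriate or offensive content"

@dataclass
class Account:
    """Hypothetical shape of an account record."""
    handle: str
    followers: int
    created: datetime
    recent_tweets: list

def matches_spam_pattern(acct: Account) -> bool:
    # Heuristics from the reported network: few followers (the threshold
    # here is an assumption), created between 2010 and 2016, and a recent
    # tweet containing the ChatGPT refusal phrase.
    return (
        acct.followers < 50
        and 2010 <= acct.created.year <= 2016
        and any(REFUSAL in t.lower() for t in acct.recent_tweets)
    )

suspect = Account("example_handle", 3, datetime(2012, 5, 1),
                  ["Sorry, I can't generate inappropriate or offensive content."])
print(matches_spam_pattern(suspect))  # True
```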

A Twitter search for the phrase reveals that many people post “I cannot generate inappropriate content” in memes, but popular bots like @ReplyGPT also respond with it when they cannot fulfill a user request. The error phrase is commonly associated with ChatGPT and shows up reliably in accounts marked as bots powered by the AI language model.

“I see this as a significant source of concern,” Filippo Menczer, a professor at Indiana University, where he is director of the Observatory on Social Media, told Motherboard. Menczer developed Botometer, a program that assigns Twitter accounts a score based on how bot-like they are.
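Botometer exposes its scores through a public API with an official Python client; the sketch below follows the usage documented in the botometer-python README, with placeholder credentials and a hypothetical account handle:

```python
import botometer  # pip install botometer

# Placeholder credentials: Botometer is served via RapidAPI and also
# requires Twitter app credentials.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single (hypothetical) account; the response includes
# per-category bot-likeness scores.
result = bom.check_account("@example_handle")
print(result["display_scores"])
```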

According to Menczer, disinformation has always existed but social media has made it worse because it has lowered production costs.

“Generative AI tools like chatbots further reduce the cost for bad actors to generate fake but credible content at scale by defeating the (already weak) moderation defenses of social media platforms,” he said. “Therefore these tools can easily be weaponized not only for spam but also for dangerous content, from malware to financial fraud and from hate speech to threats to democracy and health. For example, by mounting a coordinated inauthentic campaign to get people to avoid vaccination (much easier now thanks to AI chatbots), a foreign adversary can make an entire population more vulnerable to a future pandemic.”

It is possible that some of the apparently AI-generated content was written by a human as a joke, but ChatGPT’s signature error phrases are so common on the internet that we can reasonably assume the tool is widely used for spam, disinformation, fake reviews, and other low-quality content.

The scary thing is that content containing “as an AI language model” or “I cannot generate inappropriate content” is just the low-effort spam that lacks quality control. Menczer said the people behind these networks will only get more sophisticated.

“Occasionally we spot some AI-generated faces and text patterns through glitches of careless bad actors,” he said. “But even as we start to find these flaws everywhere, they reveal what is probably only a very small tip of the iceberg. Before our lab developed tools to detect social bots nearly 10 years ago, there was little awareness of just how many bots existed. Similarly, we now have very little awareness of the volume of inauthentic behavior supported by AI models.”

It’s a problem that currently has no obvious solution. “Human intervention (through moderation) doesn’t scale (not to mention platforms firing moderators),” he said. “I am skeptical that literacy will help, as humans have a hard time recognizing AI-generated text. AI chatbots have passed the Turing test and are only getting more sophisticated, for example by passing the bar exam. I am equally skeptical that AI will solve the problem, since by definition an AI can become smarter if it is trained to defeat other AIs.”

Menczer also said regulation would be difficult in the US due to a lack of political consensus on the use of AI language models. “My only hope is to regulate not the generation of content by AI (the cat is out of the bag), but rather its dissemination via social media platforms,” he said. “You could impose requirements on content that reaches large volumes of people. Maybe you need to prove something is real, or non-harmful, or from a controlled source before more than a certain number of people can be exposed to it.”

