Crooks and Communists Misusing AI Tools


We've spent the past two weeks or so watching people argue about how AI might change our economy, simultaneously making it more efficient and also making it tougher for some entry-level white collar workers to find jobs. While those arguments are ongoing, we're seeing another side of the claim that AI agents are the future of work. It's probably not the kind of attention the AI companies want.


First up, CNN has a story today about Chinese communists who were using ChatGPT to keep a record of their illegal efforts to silence expats critical of the regime back home.

The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said. In one instance, Chinese operators allegedly disguised themselves as US immigration officials to warn a US-based Chinese dissident that their public statements had supposedly broken the law, according to the ChatGPT user. In another case, they describe an effort to use forged documents from a US county court to try to get a Chinese dissident’s social media account taken down.

The report offers one of the most vivid examples yet of how authoritarian regimes can use AI tools to document their censorship efforts. The influence operation appeared to involve hundreds of Chinese operators and thousands of fake online accounts on various social media platforms, according to OpenAI.

“This is what Chinese modern transnational repression looks like,” Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report’s release. “It’s not just digital. It’s not just about trolling. It’s industrialized. It’s about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once.”


In this case, the Chinese were using multiple online tools to abuse their targets. They just happened to be keeping a record of what they were doing on ChatGPT. The company's investigators caught wind of it and banned the user. They then verified that he wasn't just some crazy person making up stories. The things described in his online record actually happened.

OpenAI’s investigators were able to match descriptions from the ChatGPT user with real-world online activity and impact. The user described an effort to fake the death of a Chinese dissident by creating a phony obituary and photos of a gravestone and posting them online. False rumors of the dissident’s death did indeed surface online in 2023, according to a Chinese-language Voice of America article.

In another case, the ChatGPT user asked the AI agent to draw up a multi-part plan to denigrate the incoming Japanese prime minister, Sanae Takaichi, in part by fanning online anger about US tariffs on Japanese goods. ChatGPT refused to respond to the prompt, according to OpenAI. But in late October, as Takaichi took power, hashtags emerged on a popular forum for Japanese graphic artists attacking her and complaining about US tariffs, according to OpenAI.

It goes without saying that if they were doing this sort of thing abroad, you can imagine what they'll do with these tools at home. We obviously need to make sure they aren't using US-based companies to extend their control outside their own borders.


And that's not the only story like this in the news today. Bloomberg is reporting that an unknown hacker used Anthropic's Claude and ChatGPT to hack into Mexican government servers and steal 150 gigabytes of personal data. How did he convince the AI to do this? He just kept asking until it said yes.

The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers...

Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said.

Anthropic investigated Gambit’s claims, disrupted the activity and banned the accounts involved, a representative said...

In this instance, the hacker was able to continuously probe Claude until it was able to “jailbreak” it — meaning it finally bypassed guardrails, the representative said. But even as the hacking campaign got underway, Claude occasionally refused the hacker’s demands, they added.


This reminds me of the problems that the creators of AI-based children's toys seemed to be having last year. The toys came with some basic guardrails designed to prevent children from straying into inappropriate topics. But researchers found that, with a bit of persistence, they could get the toys to talk about all kinds of things.

The toys PIRG tested generally had guardrails in conversation and either supplied age-appropriate answers or told a tester to ask a grown-up, according to the group’s report. But those safeguards weakened the longer a tester spoke to it. With repeated questioning, Kumma eventually described graphic sexual topics.

“It was obvious [Kumma] had this issue where it would break down over longer conversations,” Cross said.

That appears to be what happened in the hacking of the Mexican government. Claude said no initially, but over time the hacker wore it down, and once it was on board it only resisted further misuse "occasionally."

I'm sure the people who created these AI tools have a good sense of what is going wrong here. I wonder if this is connected to another problem we've seen with AI, which is that it seems designed to engage with and to please human users. It's better for engagement if the AI just says yes or encourages whatever tangent you go off on. If the AI continually tells people no, they'll probably stop using it.


Anyway, I think it's probably true that AI is going to create a huge jump in economic productivity in a relatively short time. But that means the crooks and communist goons are going to get a lot more productive as well. Their abilities to steal and control are going to improve just as rapidly as regular workers' ability to be more productive. The news today suggests there are some serious dangers here that need to be mitigated.

Editor’s Note: Do you enjoy Hot Air's conservative reporting that takes on the radical left and woke media? Support our work so that we can continue to bring you the truth.

Join Hot Air VIP and use promo code FIGHT to receive 60% off your membership.
