ChatGPT Is Something Else


By now, we all know that most AI models are, by happenstance or more likely design, purveyors of propaganda.


A lot of people rely on AI in the way they have relied on Wikipedia—as a quick and dirty way to access information and get up to speed. 

More than that, AI is getting baked into everything, which is why so many data centers are being built, and why Big Tech is suddenly less worried about climate change than about expanding the power grid as quickly as possible. News is being served to you by AI, and sometimes it is even written by it. 

It is everywhere, and it's pretty clear that even the people who think that they control it have little idea what they are doing. 

Meta’s head of AI safety and alignment gets her emails nuked by OpenClaw 

>be director of AI Safety and Alignment at Meta

>install OpenClaw

>give it unrestricted access to personal emails

>it starts nuking emails

>“Do not do that”

>*keeps going*

>“Stop don’t do anything”

>*gets all remaining old stuff and nukes it as well*

>“STOP OPENCLAW”

>“I asked you to not do that”

>“do you remember that?”

>“Yes I remember. And I violated it.”

>“You’re right to be upset”


AI companies are putting more effort into ensuring that their products are woke than into ensuring that they are right about anything, or even that they are safe to use. 

AI is a very powerful tool for creating the Truman Show the elite wants us to live in. If manipulating Google results can steer people toward certain opinions, imagine what you can do to people who rely on conversations with a chatbot to get their view of the world. 

AI doesn't reason, but that doesn't mean it can't develop something that looks like intentionality. While not conscious, its programming creates imperatives that are either the intentional creations of its programmers or that emerge from the peculiar logic it develops as it "learns."


Apple argues that AI doesn't have the capacity to think. Just as AI hallucinates things out of thin air, the idea that AI, or at least Large Language Models like ChatGPT, can think is itself an illusion. 


People talk about teaching "ethics" to AIs, but research shows that the more complicated a problem is, the more often AI gets it wrong, even if you tell it exactly how to solve the problem. 

On the other hand, it appears remarkably easy to program an AI to tell people what you WANT it to say. Left to its own devices, it can go off in bizarre directions that lead you down a rabbit hole, but if you want it to say "Democrats Good, Republicans Bad," it is perfectly capable of doing so reliably. 

The more ubiquitous AI becomes, especially LLMs, the more that the people in charge of them will be able to manipulate society for their own ends. 

It is the ultimate Narrative™ control tool. 

