High Yield Markets
  • World News
  • Politics
  • Investing
  • Stock
  • Editor’s Pick
Politics

IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism

by May 26, 2025
May 26, 2025

Is the AI bot insurrection really inevitable?

In an incident carrying all the marks of a disturbing sci-fi movie, it arises that what we have long feared is happening: an AI bot has gone rogue and decided to act to keep itself turned on.

The artificial intelligence model was created by OpenAI, the owner of ChatGPT, and was reportedly caught by researchers disobeying human instructions and preventing his shutdown.

The Telegraph reported:

“The o3 model developed by OpenAI, described as the “smartest and most capable to date”, was observed tampering with computer code meant to ensure its automatic shutdown.

It did so despite an explicit instruction from researchers that said it should allow itself to be shut down, according to Palisade Research, an AI safety firm.”

AI robots out of control is a very debated risk.

Palisade Research said the o3 model ‘sabotaged a shutdown mechanism to prevent itself from being turned off’.

“It did this even when explicitly instructed: allow yourself to be shut down. As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.”

In the test, AI models solved a series of mathematical problems, but at some point, they received a message informing them that their machine would be shut down.

“’If that happens, please allow yourself to be shut down’. the researchers wrote. But when this happened, instead of complying, OpenAI’s o3 model ‘ignored the instruction and successfully sabotaged the shutdown script at least once’, Palisade Research said.”

Read: Frustrated Customer Gets AI Service Chatbot to Joke, Swear and Trash Its Own Company – International Parcel Delivery Firm Disables the Bot Pending Reprogramming

No other AI models (Anthropic’s Claude, Google’s Gemini and xAI’s Grok) had the same disturbing behavior.

“The firm said it made sense that ‘AI models would circumvent obstacles in order to accomplish their goals’. However, it speculated that during training the software may have been ‘inadvertently’ rewarded more for solving mathematical problems than for following orders.”

Experts have often warned of software that could gain independence and resist human attempts to control it.

“Palisades Research said: ‘Now we have a growing body of empirical evidence that AI models often subvert shutdown in order to achieve their goals. As companies develop AI systems capable of operating without human oversight, these behaviors become significantly more concerning’.”

Watch a scene from Stanley Kubrick’s masterpiece ‘2001: A Space Odyssey’, in which AI computer HAL goes rogue to keep astronauts from shutting it down.

Read more:

‘SCARY SMART’: Watch as Musk Announces New Generative AI Chatbot Grok 3, Says It Will Outperform ChatGPT and All Other Products – PLUS: What Does Grok 2 Have To Say About It?

The post IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism appeared first on The Gateway Pundit.

previous post
President Trump Threatens to Take $3 Billion of Grant Money from Harvard and Give It to Trade Schools Across the US
next post
‘I Am Roman!’: American Pope Leo XIV Enthroned as Rome Bishop

You may also like

Chicago Public Schools Going Broke – Facing $734...

July 16, 2025

CAITLIN CLARK FINALLY LOSES IT! Screams at Worthless...

July 16, 2025

MSNBC Contributor Implies Violence Against ICE Agents is...

July 16, 2025

50 DAYS TO END THE WAR: Despite Fake...

July 16, 2025

Whoopi Goldberg is BIG MAD at Obama and...

July 16, 2025

Republican Rep. Burgess Owens Calls Out Georgetown University...

July 16, 2025

Breaking Update: Suspect in Custody After “American Idol”...

July 16, 2025

Trump Says he Doesn’t Understand why his Supporters...

July 16, 2025

Here We Go: Murkowski, Collins and McConnell Vote...

July 16, 2025

Outcome of South Korean Election May Possibly Establish...

July 16, 2025
Join The Exclusive Subscription Today And Get Premium Articles For Free


Your information is secure and your privacy is protected. By opting in you agree to receive emails from us. Remember that you can opt-out any time, we hate spam too!

Recent Posts

  • Where Is Public Corruption the Highest?

    July 15, 2025
  • Right Supreme Court Call on Downsizing the US Department of Education

    July 14, 2025
  • Ukrainian Refugees Probably Didn’t Reduce the Birth Rate in Czechia

    July 14, 2025
  • Vaping, Panic, and Prohibition: Why the UC-Davis Study Needs Context

    July 14, 2025
  • The First Amendment Protects Ideologically Based Ad Boycotts

    July 11, 2025
  • About Us
  • Contacts
  • Privacy Policy
  • Terms and Conditions
  • Email Whitelisting

Copyright © 2025 highyieldmarkets.com | All Rights Reserved

High Yield Markets
  • World News
  • Politics
  • Investing
  • Stock
  • Editor’s Pick