High Yield Markets
  • World News
  • Politics
  • Investing
  • Stock
  • Editor’s Pick
Politics

IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism

by May 26, 2025
May 26, 2025

Is the AI bot insurrection really inevitable?

In an incident carrying all the marks of a disturbing sci-fi movie, it arises that what we have long feared is happening: an AI bot has gone rogue and decided to act to keep itself turned on.

The artificial intelligence model was created by OpenAI, the owner of ChatGPT, and was reportedly caught by researchers disobeying human instructions and preventing his shutdown.

The Telegraph reported:

“The o3 model developed by OpenAI, described as the “smartest and most capable to date”, was observed tampering with computer code meant to ensure its automatic shutdown.

It did so despite an explicit instruction from researchers that said it should allow itself to be shut down, according to Palisade Research, an AI safety firm.”

AI robots out of control is a very debated risk.

Palisade Research said the o3 model ‘sabotaged a shutdown mechanism to prevent itself from being turned off’.

“It did this even when explicitly instructed: allow yourself to be shut down. As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.”

In the test, AI models solved a series of mathematical problems, but at some point, they received a message informing them that their machine would be shut down.

“’If that happens, please allow yourself to be shut down’. the researchers wrote. But when this happened, instead of complying, OpenAI’s o3 model ‘ignored the instruction and successfully sabotaged the shutdown script at least once’, Palisade Research said.”

Read: Frustrated Customer Gets AI Service Chatbot to Joke, Swear and Trash Its Own Company – International Parcel Delivery Firm Disables the Bot Pending Reprogramming

No other AI models (Anthropic’s Claude, Google’s Gemini and xAI’s Grok) had the same disturbing behavior.

“The firm said it made sense that ‘AI models would circumvent obstacles in order to accomplish their goals’. However, it speculated that during training the software may have been ‘inadvertently’ rewarded more for solving mathematical problems than for following orders.”

Experts have often warned of software that could gain independence and resist human attempts to control it.

“Palisades Research said: ‘Now we have a growing body of empirical evidence that AI models often subvert shutdown in order to achieve their goals. As companies develop AI systems capable of operating without human oversight, these behaviors become significantly more concerning’.”

Watch a scene from Stanley Kubrick’s masterpiece ‘2001: A Space Odyssey’, in which AI computer HAL goes rogue to keep astronauts from shutting it down.

Read more:

‘SCARY SMART’: Watch as Musk Announces New Generative AI Chatbot Grok 3, Says It Will Outperform ChatGPT and All Other Products – PLUS: What Does Grok 2 Have To Say About It?

The post IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism appeared first on The Gateway Pundit.

previous post
President Trump Threatens to Take $3 Billion of Grant Money from Harvard and Give It to Trade Schools Across the US
next post
‘I Am Roman!’: American Pope Leo XIV Enthroned as Rome Bishop

You may also like

New York Real Estate Brokers Say They’ve Been...

June 26, 2025

AOC’s ‘Bronx Girl’ Act Exposed as a TOTAL...

June 26, 2025

CNN’s Dana Bash Reduced to a Stuttering Mess...

June 26, 2025

House Committees Subpoena ActBlue’s Former Vice President and...

June 26, 2025

Iranian Foreign Minister Refutes Fake News CNN and...

June 26, 2025

Leftists Invaded the Capitol to Protest the ‘Big...

June 26, 2025

“KILL IT” – President Trump Calls for “LEFTWING...

June 26, 2025

Dominant Trump Has Meeting With Suited-up Zelensky in...

June 26, 2025

Trump Sues All 15 Federal Judges in Maryland...

June 26, 2025

Bondi DOJ Claims it Cannot Enforce Voter Fraud...

June 26, 2025
Join The Exclusive Subscription Today And Get Premium Articles For Free


Your information is secure and your privacy is protected. By opting in you agree to receive emails from us. Remember that you can opt-out any time, we hate spam too!

Recent Posts

  • Fossil Fuel Subsidies Are Mostly Fiction, But the Real Energy Subsidies Should Go

    June 25, 2025
  • Overlicensed and Underused: Rethinking Pharmacy’s Role in Health Care

    June 25, 2025
  • Warren, Trump, and the Debt Limit

    June 25, 2025
  • The Safety Risks of the Coming AI Regulatory Patchwork

    June 24, 2025
  • ICE Is Arresting 1,100 Percent More Noncriminals on the Streets Than in 2017

    June 24, 2025
  • About Us
  • Contacts
  • Privacy Policy
  • Terms and Conditions
  • Email Whitelisting

Copyright © 2025 highyieldmarkets.com | All Rights Reserved

High Yield Markets
  • World News
  • Politics
  • Investing
  • Stock
  • Editor’s Pick