OpenAI disputes watchdog’s allegation that it violated California’s new AI law with the release of GPT-5.3-Codex



OpenAI may have violated California’s new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.

A violation could expose the company to millions of dollars in fines, and the case could become the first test of the new law’s provisions.

An OpenAI spokesperson disputed the watchdog’s position, saying the company is “confident in our compliance with frontier safety laws, including SB 53.”

The controversy centers on GPT-5.3-Codex, OpenAI’s latest coding model, released last week. The model is part of OpenAI’s effort to regain its lead in AI-powered coding and, according to benchmark data released by OpenAI, it outperforms both the company’s previous models and those of competitors such as Anthropic on coding tasks. However, the model also raises unprecedented cybersecurity concerns.

CEO Sam Altman said the model is the first to hit the “high” risk category for cybersecurity in the company’s Preparedness Framework, an internal risk-classification system that OpenAI uses for model releases. This means OpenAI essentially classifies the model as capable enough at coding to potentially facilitate significant cyber damage, especially when automated or used at scale.

The AI watchdog group the Midas Project claims that OpenAI failed to live up to its own safety commitments, which are now legally binding under California law, with the launch of the new high-risk model.

California’s SB 53, which took effect in January, requires major AI companies to publish and maintain their own safety frameworks detailing how they will prevent catastrophic risks, defined as incidents that cause more than 50 deaths or $1 billion in property damage, from their models. It also prohibits companies from making misleading statements about their compliance.

OpenAI’s safety framework requires special safeguards for models that pose high cybersecurity risk, designed to prevent AI from going rogue and doing things like acting fraudulently, sabotaging safety research, or hiding its true capabilities. However, the company did not implement these safeguards before launching GPT-5.3-Codex, despite declaring the model “high risk,” according to the Midas Project.

OpenAI says the Midas Project has misread the wording of its Preparedness Framework, though it also calls that wording “ambiguous” and says it sought to clarify the framework’s intent with a statement in the safety report it released for GPT-5.3-Codex. In that report, OpenAI said additional safeguards are only necessary when high cyber risk occurs “in conjunction with” high autonomy, the ability to operate independently for long periods of time. Because the company believes GPT-5.3-Codex lacks this autonomy, it says the safeguards are not needed.

“GPT-5.3-Codex completed our full testing and management process, as detailed in the publicly released system card, and did not demonstrate high autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group,” the spokesperson said. The company also said, however, that it lacks a definitive way to assess a model’s autonomy and therefore relies on tests it believes can serve as proxies for that capability while it works to develop better evaluation methods.

However, some safety researchers dispute OpenAI’s interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, said in a post on X: “Instead of admitting that they didn’t follow their plan or update it before the release, OpenAI seems to say that the criteria are not clear. From reading the related documents…

The Midas Project also claimed that OpenAI cannot show the model lacks the autonomy that would trigger the additional safeguards, noting that the company’s previous, less advanced model topped global benchmarks for autonomous task completion. The group argued that even if the framework’s wording were unclear, OpenAI should have clarified it before releasing the model.

Tyler Johnston, founder of the Midas Project, called the potential violation “especially embarrassing given how low a floor SB 53 sets: basically just adopt a voluntary safety plan of your choice and communicate honestly about it; change it if necessary, but don’t violate or lie about it.”

If an investigation is opened and the allegations are proven accurate, SB 53 allows for substantial penalties, which can reach millions of dollars depending on the severity and duration of the non-compliance. A representative of the California Attorney General’s Office said the department is “committed to enforcing our state’s laws, including those enacted to increase transparency and safety in the emerging AI space.” However, they said the department could not comment on, confirm, or deny potential or ongoing investigations.

Updated, February 10: This story has been updated to reflect, earlier in the story, OpenAI’s statement that it believes it is in compliance with the California AI law. The headline was also changed to make clear that OpenAI disputes the watchdog group’s allegations. In addition, the story has been updated to clarify that OpenAI’s statement in the GPT-5.3-Codex safety report was intended to clarify what the company said was unclear language in its Preparedness Framework.


