An artificial intelligence researcher left his job at US company Anthropic this week with a cryptic warning about the state of the world, marking the latest resignation in a wave of departures over safety risks and ethical dilemmas.
In a letter published on X, Mrinank Sharma wrote that he had achieved everything he hoped for during his time at the AI safety firm and was proud of its efforts, but warned that “the world is at risk,” not just from artificial intelligence, but from “a whole series of interrelated crises,” ranging from bioterrorism to concerns about AI “sycophancy.”
He said he felt called to write, to earn a degree in poetry and to devote himself to “the practice of speaking boldly.”
“During my time here, I’ve seen time and time again how difficult it is to truly let our values guide our actions,” he continued.
Anthropic was founded in 2021 by a splinter group of former OpenAI employees who promised a more safety-focused approach to AI development than its competitors.
Sharma led the company’s AI safeguards research team.
Anthropic has published reports touting the safety of its own products, including Claude, its hybrid-reasoning large language model, and describes itself as a company committed to building reliable and interpretable AI systems.
The company faced criticism last year after it agreed to pay US$1.5 billion to settle a class-action lawsuit by a group of authors who claimed the company used pirated versions of their work to train its AI models.
Sharma’s resignation comes the same week that OpenAI researcher Zoë Hitzig announced her own departure in an essay in The New York Times, citing concerns about the company’s advertising strategy, including placing ads in ChatGPT.
“I used to believe I could help people building AI to face the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I joined to help answer,” she wrote.
“People are talking to chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates the potential to manipulate users in ways we don’t have the tools to understand, let alone prevent.”
Anthropic and OpenAI recently got into a public spat after Anthropic aired a Super Bowl commercial criticizing OpenAI’s decision to display ads in ChatGPT.
In 2024, OpenAI CEO Sam Altman said he was not a fan of using ads and would use them as a “last resort.”
Last week, he responded with a lengthy post criticizing Anthropic and disputing the ad’s suggestion that embedding ads in ChatGPT is deceptive.
“I guess it’s on-brand Anthropic doublespeak to use a deceptive ad to criticize theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I’d expect it,” he wrote, adding that ads will still allow free access, which he said creates “agency.”
Hitzig and Sharma, though employed by rival companies, both expressed serious concerns about the erosion of guiding principles established to preserve the integrity of artificial intelligence and protect its users from manipulation.
Hitzig wrote that an erosion of “OpenAI’s own principles” in pursuit of maximizing engagement may already be happening at the company.
Sharma said he was concerned about AI’s ability to “distort humanity”.