
Until recently, Mrinank Sharma was the head of the safeguards research team at Anthropic, the company behind the popular AI chatbot Claude. He resigned on Monday and publicly published the letter he shared with his team before leaving, in which he expressed deep concern about… something.
In the letter, Sharma speaks at length about his contributions to AI safety, citing accomplishments in his role including “understanding AI sycophancy and its causes; developing defenses to reduce risks from AI-assisted bioterrorism; actually putting defenses into production; and writing one of the first AI safety cases.” His final project, he writes, is to “understand how AI assistants can make us less human or distort our humanity.”
He also says the job is more important than ever because of a vague and often undefined threat looming on the horizon. “I keep finding myself contemplating our situation. The world is in danger. And not just from AI, or bioweapons, but from a whole series of interconnected crises that are unfolding at this very moment,” he wrote.
Although he doesn’t name those dangers, he does leave a, let’s say, helpful footnote to explain further: “Some call it a ‘poly-crisis,’ backed by a ‘meta-crisis.’” Okay! To better understand that idea, he cites the book “First Principles and First Values,” by David J. Temple, and this is where things get hard to follow.
A few things about that text: First, the subtitle is “Forty-Two Propositions on CosmoErotic Humanism, the Meta-Crisis, and the World to Come.” CosmoErotic Humanism, according to the Center for World Philosophy and Religion, is a “global philosophical movement that aims to reconstruct the collapse of value at the core of global culture,” and “is not just a theory but a movement that changes the very condition of Reality.” Those sure are words, arranged in an order.
To attempt a simplification: CosmoErotic Humanism posits that we are experiencing a fundamental crisis of humanity due to the loss of a shared understanding of the world, and it seeks to create a new shared story that frames the evolution of humanity as the “Love Story of the Universe.”
Second, the author of the book, David J. Temple, is not a person at all. It’s a pseudonym for a collection of writers associated with the Center for World Philosophy and Religion, with Marc Gafni and Zak Stein as the primary authors. Gafni, it should be noted, has been accused of sexually assaulting multiple victims, including two women who were minors at the time of the alleged misconduct, and was previously banned from the Jewish Renewal movement he was once a part of. Maybe not the kind of person you want at the center of society’s “love story.”
Back to Sharma: he says that while he’s proud of the work he’s done to make AI safer, he knows it’s time to move on, and he seems to suggest that Anthropic isn’t quite living up to its carefully curated public image as the “good” AI company. “Throughout my time here, I have seen time and time again how difficult it is to truly allow our values to guide our actions,” he wrote. “I have seen it in myself, within the organization, where we constantly face pressures to set aside what matters most, and in society at large as well.”
Gizmodo reached out to Anthropic for comment on Sharma’s departure but did not receive a response by press time.
Understandably, Sharma says he wants to do work that lets him act according to his principles and keep his integrity intact. “It is by holding this situation, and listening as best I can, that what I need to do becomes clear,” he wrote. So what clarity did he find, you may be wondering? Will he become a whistleblower, sharing what he has seen from inside the industry, and Anthropic in particular? Perhaps he’ll become a prominent voice against AI being used in dangerous and careless ways?
“I hope to pursue a degree in poetry and devote myself to the practice of bold speech,” he wrote. Good luck, man.