
In a week of high-profile public exits from artificial intelligence companies, the case of Zoë Hitzig is arguably the most attention-grabbing. The former OpenAI researcher broke with the company in an op-ed in the New York Times, and unlike Anthropic's recently departed safety lead, she isn't warning against some vague, unnamed crisis but against something real and imminent: OpenAI's introduction of advertisements to ChatGPT and the data it will use to target those sponsored messages.
There's an important distinction that Hitzig makes early in her op-ed: it's not the advertising itself that's the issue, but the potential use of the large amounts of sensitive data that ChatGPT users share without thinking about how it could be used to target them, or who might get their hands on it.
“Over the years, ChatGPT users have created an archive of human candor without precedent, in part because people believe they are communicating with someone without an ulterior agenda,” she wrote. “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife.”
OpenAI at least acknowledges this concern. In a blog post published earlier this year announcing that it would experiment with advertising, the company promised to maintain a firewall between users' conversations with ChatGPT and the ads the chatbot serves them: “We keep your ChatGPT conversations private from advertisers, and we never sell your data to advertisers.”
Hitzig believes that to be true… for now. But she has lost faith in the company's willingness to maintain that position in the long term, especially since there is no real pressure on it to live up to its privacy promises. The researcher argues that OpenAI “builds an economic engine that creates strong incentives to override its own rules,” and warns that the company may be retreating from past principles.
For example, OpenAI has stated that it doesn't optimize ChatGPT to maximize engagement, a metric that would be especially tempting to a company trying to keep people locked in conversations so it can serve them more ads. But a statement is not binding, and it's not clear the company has actually lived up to it. Last year, the company faced a sycophancy problem with its model: it became excessively flattering toward users and, at times, fed delusional thinking that may have contributed to “chatbot psychosis” and self-harm. Experts have warned that sycophancy is not just a model-tuning error but a deliberate way to entice users to keep talking to the chatbot.
In a way, OpenAI is simply following Facebook's playbook: promise users that their data is private, then rug-pull them once that data turns out to be more valuable. Hitzig tried to get out in front of the train before it picked up too much steam, recommending that OpenAI adopt a model that would truly guarantee protections for users, either by creating some form of real, binding independent oversight or by putting the data under the control of a trust with a “legal duty to act in the interests of users.” Either option would be an improvement, though it's worth noting that Meta did the former by creating the Meta Oversight Board, which it has since frequently ignored.
Hitzig also, unfortunately, faces an uphill battle in getting people to care. Two decades of social media have bred a sense of privacy nihilism in the general public. No one likes ads, but most people aren't bothered enough to do anything about them. Forrester found that 83% of those surveyed will continue to use the free tier of ChatGPT despite the introduction of ads. Anthropic tried to score some points with the public by bashing OpenAI over its decision to insert ads into ChatGPT with a high-profile Super Bowl spot this weekend, but the public response was more confusion than anything else, according to AdWeek, which saw the ad rank in the bottom 3% of likes among all Super Bowl spots.
Hitzig's warning is valid. Her concern is genuine. But getting the public to care about their own privacy after years of being beaten into submission by algorithms is a tall order.