
OpenAI, the company best known for AI applications such as ChatGPT and Sora, is reportedly working on a social media network designed to be free of AI bots. The catch is that users would have to scan their irises to get in.
Forbes reported Wednesday, citing unnamed sources familiar with the project, that the platform is still in its early stages and is being developed by a small team of fewer than 10 people. The goal is to create a humans-only social platform that requires users to prove they are real people. To do that, the team is reportedly considering implementing identity verification through Apple’s Face ID or through the Orb, an Orwellian eye-scanning device made by a company also founded by OpenAI CEO Sam Altman.
The new social media platform appears to be Altman’s latest attempt to solve a problem that he and his fellow “AI architects” helped create.
Altman first tried to tackle the bot problem in 2019, when he founded Tools for Humanity, the company behind the World app, formerly known as Worldcoin. The project aims to create a global ID and a crypto-based currency that can only be used by verified humans. It has since grown into a “super app” called World with messaging and payment features. But verification requires people to scan their eyes with the soccer ball-sized Orb device in exchange for a unique digital ID code stored on their phone. In theory, that could help filter annoying AI bots out of gaming, social media platforms, or even transactions like concert ticket sales.
So far, almost 17 million people have been verified using the Orb, far from the company’s stated goal of one billion users. Part of the adoption problem is logistical: people must physically travel to one of the 674 verification locations around the world to have their eyes scanned. In the US, there are only 32 locations, most of them in Florida. More broadly, having your eyes scanned by a company founded by one of Silicon Valley’s most controversial figures is not an easy sell.
Not surprisingly, many countries have already temporarily banned the company’s biometric technology or launched investigations into it, citing concerns about data privacy and security.
Now, that technology looks like it might be making its way to a new social media network. Sources told Forbes that the new social platform will allow users to create and share AI-generated content such as images and videos. And while OpenAI has proven it can create popular apps, it’s unclear whether a new social network can meaningfully draw people away from existing platforms, especially when you add biometric verification as a barrier.
ChatGPT alone now draws approximately 700 million weekly users, and the company’s AI video app racked up about one million downloads within five days of its launch. By comparison, Meta reported in September that its platforms, which include Facebook, WhatsApp, and Instagram, have reached 3.5 billion daily active users combined. All of them already let users create and share AI-generated content.
OpenAI seems to be hoping that its promise of a bot-free environment will be enough to attract users.
Altman himself has repeatedly expressed frustration with bots online. In September, he responded to a post featuring comments on the ClaudeCode subreddit praising OpenAI’s coding agent Codex. “I had the strangest experience reading this: I assumed it was all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real,” he wrote in a post on X.
He went on to theorize about why this might be happening, pointing to real people picking up “LLM-speak” quirks and also “maybe some bots.” “But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didn’t a year or two ago,” Altman wrote.
A few days later, Altman wrote in another post that he had never taken the dead internet theory that seriously, “but it seems like there are really a lot of LLM-run twitter accounts now.”
The dead internet theory holds that, since around 2016, most of the internet has been dominated by bots and AI-generated content rather than real human activity. But maybe someone other than Altman can be trusted to find a solution.







