Bluesky released its first transparency report this week, documenting the actions taken by its Trust & Safety team and the results of other initiatives, such as age safety compliance, influence operations monitoring, automated labeling, and more.
The social media startup – a rival to X and Threads – grew almost 60% in 2025, from 25.9 million users to 41.2 million. That figure includes accounts hosted on Bluesky’s own infrastructure as well as those running their own infrastructure as part of the decentralized social network built on Bluesky’s AT Protocol.
Last year, users made 1.41 billion posts on the platform, representing 61% of all posts ever made on Bluesky. Of these, 235 million posts contained media, accounting for 62% of all media posts shared on Bluesky to date.
The company also reported a fivefold increase in legal requests from law enforcement agencies, government regulators, and legal representatives in 2025, with 1,470 requests, up from 238 requests in 2024.
While the company previously shared moderation reports covering 2023 and 2024, this is the first time it has put together a comprehensive transparency report. The new report covers areas beyond moderation, such as regulatory compliance and account verification, among other things.
User reports jumped 54%
Compared with 2024, when Bluesky saw a 17x increase in moderation reports, the company this year reported a 54% increase, from 6.48 million user reports in 2024 to 9.97 million in 2025.
Although the number jumped, Bluesky noted that growth “closely tracked” the 57% user growth that occurred over the same period.
Approximately 3% of the user base, or 1.24 million users, submitted reports in 2025, with the top categories being “misleading” (which includes spam) at 43.73% of the total, “harassment” at 19.93%, and sexual content at 13.54%.
A catch-all “other” category accounts for 22.14% of reports that don’t fall under those headings, while the remaining categories, such as violence, child safety, site rule violations, and self-harm, each make up a smaller percentage.
Out of 4.36 million reports in the “misleading” category, spam accounted for 2.49 million reports.
Meanwhile, hate speech accounted for the largest share of the 1.99 million “harassment” reports, at about 55,400 reports. Other areas that saw activity included targeted harassment (about 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports).
However, Bluesky said the majority of “harassment” reports fall into the gray area of anti-social behavior, content that can include rudeness but doesn’t fit into other categories, such as hate speech.

Most sexual content reports (1.52 million) involved mislabeling, Bluesky says, meaning adult content wasn’t correctly marked with metadata, the tags that allow users to control their own moderation experience using Bluesky’s tools.
A smaller number of reports focused on non-consensual intimate imagery (about 7,520), abusive content (about 6,120), and deepfakes (more than 2,000).
Reports focused on violence (24,670 in total) were divided into sub-categories such as threats or incitement (about 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).
In addition to user reports, Bluesky’s automated systems flagged 2.54 million potential violations.
One area where Bluesky reports success is a reduction in daily reports of anti-social behavior on the site, which fell 79% after the company implemented a system that identifies toxic replies and reduces their visibility by placing them behind an additional click, similar to what X does.
Bluesky also saw a drop in user reports month over month, with reports per 1,000 monthly active users declining 50.9% from January to December.

In addition to moderation, Bluesky noted that it removed 3,619 accounts for suspected influence operations, likely operating from Russia.
Takedowns and legal requests
The company said last fall that it was becoming more aggressive about moderation and enforcement, and that appears to be true.
In 2025, Bluesky took down 2.44 million items, including accounts and content. Its moderators took down 66,308 accounts, and its automated tooling took down another 35,842.
Moderators also removed 6,334 records, and automated systems deleted 282.

Bluesky also issued 3,192 temporary suspensions in 2025 and 14,659 permanent bans. Most permanent suspensions targeted accounts engaged in fraudulent behavior, spam networks, and impersonation.
However, the report suggests Bluesky prefers labeling content over booting users. Last year, it applied 16.49 million content labels, up 200% year-over-year, while account takedowns grew 104%, from 1.02 million to 2.08 million. Most of the labeling involved mature or suggestive content, or nudity.