Amazon discovered a ‘high volume’ of CSAM in its AI training data but didn’t say where it came from


The National Center for Missing and Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "majority" of that content was reported by Amazon, which found the material in its AI training data, according to an investigation by Bloomberg. Amazon said only that the offending content came from outside sources used to train its AI services, and admitted that it could not provide further details on where the CSAM came from.

"This is an outlier," Fallon McNulty, executive director of NCMEC's CyberTipline, told Bloomberg. The CyberTipline is where US-based companies are legally required to report suspected CSAM. "Having such a high volume coming in throughout the year begs a lot of questions about where the data comes from, and what safeguards are put in place." McNulty added that, aside from Amazon's, the AI-related reports the organization received from other companies last year included actionable data that could be forwarded to law enforcement for next steps. Because Amazon did not disclose its sources, McNulty said, its reports were effectively "unfounded."

"We have developed deliberate and careful methods for scanning the data used to train foundation models, including data from the public web, in order to identify and remove known [child sexual abuse material] and protect our customers," an Amazon representative said in a statement to Bloomberg. The spokesperson also said that Amazon aims to over-report to NCMEC rather than risk missing any cases, and that the company removed suspected CSAM before using the data to train its AI models.

Questions about the safety of minors have emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM reports are breaking NCMEC records: the more than 1 million reports the organization received last year compare with 67,000 in 2024 and just 4,700 in 2023.

In addition to abusive content being used to train models, AI chatbots have been involved in a number of dangerous or tragic cases involving young users. OpenAI and Character.AI were both sued after teenagers discussed their suicide plans on the companies' platforms. Meta has also been sued over alleged failures to protect teenage users from sexually explicit chatbots.


