Amazon discovered a ‘high volume’ of CSAM in its AI training data but didn’t say where it came from


The National Center for Missing and Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The “majority” of that content was reported by Amazon, which found the material in its training data, according to an investigation by Bloomberg. Amazon said only that the inappropriate content came from outside sources used to train its AI services, and admitted it could not provide further details on where the CSAM originated.

“This is an outlier,” Fallon McNulty, executive director of NCMEC’s CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. “Having such a high volume coming in throughout the year begs a lot of questions about where the data comes from, and what safeguards are put in place.” She added that, unlike Amazon’s, the AI-related reports the organization received from other companies last year included actionable data that could be forwarded to law enforcement for next steps. Because Amazon did not disclose its sources, McNulty said, its reports proved “unfounded.”

“We have developed deliberate and careful methods of scanning the data used in training foundation models, including data from the public web, in order to identify and remove known [child sexual abuse material] and protect our customers,” an Amazon representative said in a statement to Bloomberg. The spokesperson also said that Amazon aims to over-report to NCMEC rather than risk missing any cases, and that the company removed suspected CSAM before using the data to train its AI models.

Questions of safety for minors have emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM is breaking NCMEC records: the more than 1 million reports the organization received last year compare with 67,000 reports in 2024 and just 4,700 in 2023.

In addition to issues such as abusive content used to train models, AI chatbots have also been involved in dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers discussed their suicide plans with chatbots on the companies’ platforms. Meta has also been sued for allegedly failing to protect teenage users from sexually explicit chatbots.


