A senior lawyer in Australia has apologized to a judge for filing submissions in a murder case that included fake quotes and non-existent judgments generated by artificial intelligence.
The blunder in the Supreme Court of the state of Victoria is another in a litany of mishaps artificial intelligence has caused in justice systems around the world.
Defense attorney Rishi Nathwani, who holds the prestigious legal title of king’s counsel, has taken “full responsibility” for submitting incorrect information in filings in the case of a teenager accused of murder, according to court documents seen by The Associated Press on Friday.
“We are deeply sorry and embarrassed by what happened,” Nathwani told Judge James Elliott on Wednesday, on behalf of the defense team.
The errors caused a 24-hour delay in resolving the case, which Elliott had hoped would be concluded on Wednesday. On Thursday, Elliott ruled that Nathwani’s client, who cannot be identified because he is a minor, was not guilty of manslaughter by reason of mental impairment.
“At the risk of understatement, the manner in which these events unfolded is unsatisfactory,” Elliott told lawyers on Thursday.
“The court’s ability to rely on the accuracy of pleadings made by counsel is essential to the proper administration of justice,” Elliott added.
The false submissions included fabricated quotes from speeches to the state legislature and non-existent case citations purportedly from the Supreme Court.
The AI-generated errors were discovered by Elliott’s associates, who could not find the cases cited and requested that defense lawyers provide copies, the Australian Broadcasting Corporation previously reported.
The attorneys admitted that the quotes “do not exist” and that the filing contained “fictitious quotes,” court documents said.
The attorneys explained that they had checked that the initial citations were accurate and wrongly assumed the others would be correct as well.
The submissions were also sent to prosecutor Daniel Porceddu, who did not verify their accuracy.
The judge noted that the Supreme Court last year issued guidelines on how lawyers use AI.
“It is not acceptable for artificial intelligence to be used unless the product of that use has been independently and thoroughly verified,” Elliott said.
Court documents do not identify the generative artificial intelligence system used by the lawyers.
In a similar case in the United States in 2023, a federal judge fined two lawyers and a law firm $5,000 after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.
Judge P. Kevin Castel said they had acted in bad faith, but credited their apologies and the remedial steps they took in explaining why tougher sanctions were not necessary to ensure they or others would not again let AI tools prompt them to produce false legal history in their arguments.
Later that year, more fictitious AI-invented court rulings were cited in legal papers filed by attorneys for Michael Cohen, a former personal lawyer to US President Donald Trump. Cohen took the blame, saying he didn’t realize the Google tool he used for legal research was also capable of so-called AI hallucinations.
Britain’s High Court judge Victoria Sharp warned in June that passing off fake material as genuine could be considered contempt of court or, in “the most outrageous cases”, perverting the course of justice, which carries a maximum sentence of life in prison.
Artificial intelligence has entered American courtrooms in other ways as well. In April 2025, a man named Jerome Dewald appeared before a court in New York and submitted a video in which an avatar generated by artificial intelligence presented an argument on his behalf.
A month later, a man killed in a road rage incident in Arizona “spoke” during the sentencing of his killer after his family used artificial intelligence to create a video of him reading a victim impact statement.





