Catenaa, Tuesday, October 14, 202-

Deloitte Australia has come under fire after admitting that artificial intelligence was used to generate large sections of a $440,000 government-commissioned report that contained fabricated citations, misquotes, and fictitious references.
A Deloitte spokesperson said the matter has been resolved directly with the client. The department confirmed that a partial refund is underway and indicated that future consultancy contracts could include stricter rules on AI-generated content.
The revelation followed a review by University of Sydney law lecturer Chris Rudge, who uncovered more than 20 errors, including false book titles, incorrect judicial quotes, and non-existent legal cases.
The report, produced for the Department of Employment and Workplace Relations, was part of the Albanese government’s post-Robodebt welfare compliance review.
Deloitte confirmed the errors in the report and has since refunded $97,000 to the federal government.
Officials criticized the firm for breaching quality standards and failing to declare the use of generative AI tools.
In Senate estimates, senior government representatives described the errors as “unacceptable,” pledging to tighten oversight and require consultants to disclose when AI contributes to official documents.
Deloitte, which secured nearly $58 million in federal contracts this year, has declined to reveal how much of the report was machine-generated.
The controversy has intensified scrutiny of government reliance on private consultancies and of generative AI in official research, with analysts warning that “AI hallucinations” could undermine policymaking if left unchecked.
The government said it would review consultancy protocols to ensure all future submissions meet transparency and verification standards.
Recent US cases reveal similar risks from generative AI in court documents, including fabricated citations and false quotations.
In Mata v. Avianca (2023), lawyers who cited non-existent authorities were fined and sanctioned, and subsequent cases in New York prompted courts to condemn AI-generated fabrications as bad faith.
Judges, too, have faced scrutiny for issuing error-laden orders drafted with AI, prompting inquiries from Senate Judiciary Committee Chairman Chuck Grassley.
