The bitter irony of artificial intelligence undermining the very policy meant to govern it
In a deeply embarrassing turn of events that underscores the perils of unchecked AI use in government, South Africa has withdrawn its draft National Artificial Intelligence Policy after discovering that at least six of its 67 academic citations were AI-generated hallucinations: fake articles attributed to real journals.
The scandal broke after an investigation by News24 found that the citations either did not exist or could not be located in recognised academic databases. It has left the country without a regulatory framework for AI and raised fundamental questions about institutional capacity to govern emerging technologies.
The Discovery
Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages. The journals were real. The articles were not. Authors credited with foundational research on AI governance had never written the papers attributed to them.
The policy justified its categorisation of high-risk AI, its data sovereignty frameworks and its regulatory sandboxes by citing "Müller Schmidt 2024" in the European Journal of Law and Technology. While both the journal and scholars with the surname Schmidt exist, the large language model used to compile that section of the policy conflated real authors with real journals, creating a synthetic paper that fit the narrative requirements perfectly.
A Policy in Ruins
The document itself was ambitious. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. The policy had been approved by Cabinet on March 25 and published in the Government Gazette on April 10 for public comment, with submissions open until June 10.
Now, it’s gone.
Communications and Digital Technologies Minister Solly Malatsi announced the withdrawal late Sunday evening, saying the lapse was "not a mere technical issue" but one that "has compromised the integrity and credibility of the draft policy".
“It’s a major embarrassment,” Malatsi admitted, acknowledging that the department failed to spot the fabricated references before releasing the draft.
How It Happened
The most plausible explanation, according to Malatsi, is that the drafters used a generative AI tool and published the output without verifying a single reference.
The irony cuts deep. The South African case is distinctive not because the technology hallucinated, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references.
The drafting process included civil servants, subject matter consultations, and ministerial review—yet somehow, no one checked whether the academic papers cited actually existed.
Dumisani Sondlo, the department’s AI policy lead, had previously described the policy development as “an act of acknowledging that we don’t know enough.” That acknowledgment did not extend to acknowledging that the tool being used to help draft the policy was itself unreliable.
The Fallout
Malatsi has promised consequences. “South Africans deserve better. The Department of Communications and Digital Technologies did not deliver on the standard that is acceptable for an institution entrusted with the role to lead South Africa’s digital policy environment,” he said. He committed to “consequence management for those responsible for drafting and quality assurance.”
The scandal triggered a swift cross-party political backlash. Parliament’s communications committee chair, Khusela Diko, urged Malatsi to withdraw the draft amid credibility concerns, underscoring the political pressure surrounding the policy process.
Opposition politicians rejected any suggestion that responsibility could be pinned on junior officials, arguing that the failure of due diligence sat squarely with both the department and the ministry.
A Global Problem
South Africa is far from alone in being tripped up by AI-generated content slipping past human supervision. The scandal has echoes in other jurisdictions—from lawyers sanctioned in New York for submitting ChatGPT-generated fake legal opinions to Deloitte refunding the Australian government over an AI-assisted report containing fabricated citations.
A study published in the journal Nature showed that over 2.5% of scientific papers published in 2025 contained at least one potentially fabricated reference, compared to 0.3% in 2024. This amounts to over 110,000 papers published in 2025 containing hallucinated citations—a staggering increase that suggests the problem is accelerating.
The Unknown Extent
Perhaps most troubling is what remains unknown. The six fake citations that News24 identified are the ones that were caught. Whether additional citations in the document’s 67 references are genuine has not been publicly confirmed.
No systematic audit of the remaining 61 citations has been announced. The document could contain more fabrications that simply haven’t been discovered yet.
What Happens Next
The withdrawal leaves South Africa without a formal AI governance framework at a time when the technology is rapidly reshaping economies and societies across the continent. A new, properly vetted policy will now have to be drafted from scratch, delaying any near-term regulatory clarity.
Malatsi has confirmed that a revised draft will undergo a more rigorous verification process before resubmission, but no timeline has been provided.
South Africa had been one of the few African countries to develop an AI policy, even as adoption of the technology spreads rapidly across the continent. The withdrawal means substantial portions of the consultation process must now be restarted.
The Lesson
The scandal offers a stark lesson about the limits of artificial intelligence and the necessity of human oversight—particularly when deploying AI to create policy about AI itself.
Malatsi called the episode an “unacceptable lapse” and a clear demonstration of why vigilant human oversight over AI remains critical.
In trying to use AI to help regulate AI, South Africa’s government learned the hard way that the technology is not yet ready to be trusted without verification. The tools that promise efficiency and insight also produce convincing fiction. The difference between the two requires something AI cannot yet provide: the human ability to fact-check, to doubt, and to verify before publishing.
For a government document meant to establish trust in how South Africa would handle artificial intelligence, the hallucinated citations achieved the opposite—a loss of credibility that will take far longer to restore than it took to generate the fake references in the first place.
This article will be updated as more information becomes available.
