- A South African legal team submitted AI-generated, fictional case citations in court.
- Judges and legal bodies condemned the lawyers’ failure to verify information.
- Experts call for clearer AI usage guidelines and ethical responsibility in legal research.
In a troubling courtroom episode in KwaZulu-Natal, South African lawyers representing former mayor Philani Godfrey Mavundla submitted a legal brief citing fabricated case law generated by ChatGPT.
This incident is part of a growing pattern in South Africa, with at least three recent cases involving AI-generated misinformation in court documents.
Blind Trust in Bots: How AI Misuse Shook South Africa’s Legal System
The misuse of AI in the legal sector is drawing the attention of regulatory bodies such as the Legal Practice Council (LPC), which is investigating the case and warning practitioners against relying blindly on generative AI tools. The LPC has not introduced new ethical rules, maintaining that existing codes of conduct are sufficient provided legal professionals verify their sources. To support compliance, it offers free access to its legal library and runs webinars promoting responsible use of technology.
Lawyers have long embraced technology, from digital research databases to grammar checkers, but generative AI introduces a new kind of risk. Unlike traditional tools, AI platforms can produce convincing yet entirely fictitious outputs, known as "hallucinations." Without human verification, these hallucinations become professional liabilities, damaging credibility and inviting judicial penalties.
Human rights lawyer Mbekezeli Benjamin warns that unverified AI-generated content weakens judicial integrity. Judges often rely heavily on legal submissions, and AI errors can introduce doubt and mistrust into court proceedings. He argues for stricter professional regulations and suggests that relying on unverified AI outputs should be grounds for disciplinary action, including fines or disbarment.
This unfolding situation places pressure on legal institutions to adapt swiftly. While some experts resist specific AI legislation, there is consensus that the legal profession must evolve to meet the risks posed by new technologies. Building digital literacy, fostering ethical awareness, and reinforcing the responsibility lawyers have to the truth are key steps toward preserving trust in the legal system.
The South African courtroom incident underscores a vital truth: AI is a tool, not a substitute for professional judgment. As its influence grows, so must legal responsibility.
As the legal profession confronts the ethical minefield of AI integration, one maxim rings true: "The advancement of technology is not a license for the abandonment of responsibility."