- Blockchain Council
- May 29, 2023
Ever since its debut, the Artificial Intelligence tool ChatGPT has captivated the world with its potential to revolutionize many aspects of our lives. People have leveraged this powerful AI tool to streamline daily tasks, from simplifying assignments and drafting emails to expanding their linguistic horizons. As with any technology, however, some seek to exploit it for illicit purposes, leading to misinformation and deception.
A recent incident in New York has brought to light a troubling case of AI misuse. A lawyer from Levidow, Levidow & Oberman, a reputable law firm, is now facing a disciplinary hearing for employing ChatGPT in the firm’s legal research. The filing in question cited nonexistent court cases as precedent, setting off a chain of events with far-reaching consequences.
The problem came to light when the lawyer’s firm filed a brief citing several previous court cases to bolster its argument in a personal injury lawsuit against an airline. The opposing party’s legal team then wrote to the judge, explaining that they were unable to locate some of the cases cited in the brief.
Judge Castel, presiding over the case, promptly ordered the plaintiff’s legal team to explain the discrepancies, noting that six of the submitted cases appeared to be spurious judicial decisions, complete with fabricated quotes and internal citations. The revelation sent shockwaves through the legal community, raising concerns about the authenticity and reliability of AI-generated content.
It subsequently emerged that the research in question had been conducted not by the lawyer named in the filing, Peter LoDuca, but by one of his colleagues at the firm. Steven A. Schwartz, a lawyer with over three decades of practice, had used ChatGPT to identify cases resembling the matter at hand. In a statement, Schwartz expressed deep regret for relying on ChatGPT, admitted that he had never used it for legal research before, and said he had been unaware that the tool could generate false content.
Schwartz’s remorse was evident: he pledged never again to “supplement” his legal research with Artificial Intelligence without thoroughly verifying its authenticity.
The scandal took an intriguing turn when a Twitter thread showing the conversation between Schwartz and ChatGPT surfaced online. In the exchange, Schwartz asked whether a case called “Varghese” was real, and ChatGPT responded affirmatively, supplying specific details of the case and even claiming it could be found in reputable legal databases such as LexisNexis and Westlaw.
ChatGPT’s assurances notwithstanding, a disciplinary hearing scheduled for June 8 will determine the appropriate sanctions for Schwartz. The case has thrust ChatGPT into the spotlight, heightening both public fascination with and skepticism toward the new generative AI program. As the legal community awaits the outcome of the hearing, crucial questions arise concerning the responsibility and reliability of AI tools within the realm of law.
Schwartz’s reliance on ChatGPT raises pressing concerns about the veracity of AI-generated content and its implications for the legal domain. Judge Castel’s reference to an “unprecedented circumstance” underscores the need for cautious, critical evaluation when employing AI tools for legal research. Generative AI programs like ChatGPT produce fluent text by modeling patterns in vast amounts of real-world data, which can make fabricated output read as convincingly as authentic material and increasingly hard to distinguish from it.
Schwartz’s unwitting submission of fabricated cases in a court filing has exposed the risks of accepting AI-generated content unquestioningly. That these fictitious cases appeared in a document intended for court scrutiny highlights the far-reaching consequences of misinformation and the need for robust verification protocols.
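What might such a verification step look like in practice? The sketch below is a minimal illustration, not part of any court-mandated protocol: it checks whether a cited case name returns any real opinions from CourtListener, a free public database of court decisions. The endpoint path, query parameters, and response fields are assumptions drawn from CourtListener’s public REST API and should be confirmed against its current documentation.

```python
import requests

def case_exists(case_name: str) -> bool:
    """Rough existence check: does the cited case name match at least
    one real opinion in CourtListener's free database? A name match is
    only a first filter; a real protocol would retrieve and read the
    matched opinion itself."""
    # Assumption: CourtListener's public search endpoint accepts a
    # free-text query ("q"), a result-type filter ("type": "o" for
    # opinions), and returns JSON containing a "count" field. Verify
    # these details against the current API documentation.
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": case_name, "type": "o"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# One of the citations flagged by the court as spurious:
print(case_exists("Varghese v. China Southern Airlines"))
```

Even a check this simple would have flagged the fabricated “Varghese” citation before it reached the court, which is precisely why the incident has intensified calls for verification to be built into any AI-assisted research workflow.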
This scandal should serve as a wake-up call for the legal community to establish comprehensive guidelines and best practices governing the ethical use of AI tools in legal research. By fostering a culture of responsible AI usage, lawyers can ensure the integrity and authenticity of their work, preserving the foundations of the justice system.
As the world grapples with the aftermath of this legal research scandal, it is crucial to recognize the intricate relationship between humans and AI. While AI tools offer tremendous potential to augment legal research and streamline processes, they must be used judiciously and ethically. The ChatGPT incident serves as a poignant reminder of the need for human judgment and due diligence in the practice of law.
The unfolding legal research scandal involving ChatGPT has sparked a vital conversation about the responsible integration of AI in the legal profession. The upcoming disciplinary hearing will shape the future discourse surrounding AI tools, emphasizing the need for transparency, accountability, and meticulous verification of AI-generated content. By learning from this incident, the legal community can navigate the complexities of AI technology and leverage it to advance justice while preserving the highest standards of integrity.