The High Court of England and Wales Issues Warning on Misuse of Artificial Intelligence in Legal Work
In a recent ruling linking two cases, Judge Victoria Sharp of the High Court of England and Wales said lawyers must take stronger measures to prevent the misuse of artificial intelligence in their work.
Judge Sharp specifically highlighted the limitations of generative AI tools such as ChatGPT, stating that they are not reliable for conducting legal research: while they can produce coherent and plausible responses to prompts, those responses may be inaccurate or entirely false.
That does not mean lawyers cannot use AI in their research. Judge Sharp stressed, however, that they have a professional duty to verify AI-generated research against authoritative sources before relying on it in their professional work.
The judge raised concerns about instances in which lawyers, including some representing major AI platforms, have cited AI-generated falsehoods in court, and underlined the importance of ensuring that lawyers adhere to professional guidance and fulfill their obligations to the court.
One of the cases addressed in the ruling involved a lawyer who submitted a filing with numerous citations, many of which were either non-existent or irrelevant to the matter. In the other, a lawyer cited non-existent authorities in a court filing concerning an eviction.
Judge Sharp warned that lawyers who fail to meet their professional obligations when relying on AI-generated research risk severe sanctions; both lawyers in the linked cases were referred to professional regulators for further investigation.
She noted that the court has the authority to impose a range of penalties on lawyers who fall short of their duties, from public admonition to contempt proceedings or referral to law enforcement authorities.