This is what has just happened in the United States, before P. Kevin Castel, Judge of the District Court, S.D. New York, in the case of Mata v. Avianca, Inc (1:22-cv-01461). The plaintiff is represented by Steven A. Schwartz, an attorney with 30 years of experience, who used the artificial intelligence program ChatGPT to conduct his legal research, the results of which he included in his affidavit. In the documents that he filed in support of his affidavit, he cites eight court decisions that he believes are relevant to his case: #1 Exhibit Varghese v. China Southern Airlines, #2 Shaboon v. Egypt Air, #3 Peterson v. Iran Air, #4 Martinez v. Delta Airlines, #5 Estate of Durden v. KLM Royal Dutch Airlines, #6 Ehrlich v. American Airlines, #7 Miller v. United Airlines, Inc, #8 Ttivelloni-Lorenzi v. Pan American World Airways, Inc (LoDuca, Peter) (Entered: 04/25/2023).
However, neither opposing counsel nor the judge himself was able to locate the decisions and citations cited and summarized in the document, so Steven A. Schwartz was asked to produce the cited decisions.
To do this, he again turned to ChatGPT, which provided him with the full decisions, along with their contents and the details of the cases cited.
However, after painstaking research, it emerged that none of the decisions had ever existed: ChatGPT had made everything up, from the names of the cases to the wording of the decisions!
The case will be heard on 8 June; Steven A. Schwartz has until 6 June to explain why he ought not be sanctioned for the use of a false and fraudulent notarization in his affidavit, and for the submission to the Court of copies of non-existent judicial opinions annexed to the affidavit.
This hearing is likely to cost him dearly for citing references without having checked them, with all the consequences that can be imagined.
There are two lessons to be learned from this case:
1) We are faced with the dilemma of having to choose between speed and security, or between ease and security. This is proof that this choice does not exist only in the field of IT security, but in all aspects of life, both private and professional: security must never be sacrificed for ease or speed.
2) ChatGPT is an excellent tool which can be of invaluable assistance but which, like us, can make mistakes, or can even imagine things without malicious intent, which is still a difference between us and AI. In this case, ChatGPT may well have interpreted the request as an instruction to create cases similar to the one submitted to it. It remains to be seen what the question was and how it was formulated before ChatGPT can be blamed, especially as in this case it created the content of the decisions in response to a second request.
Once again, the lessons to be learnt from this misadventure are that we must apply the ZERO TRUST rule in everything we do, not just in the field of IT security, and above all that we must always check our sources before quoting them.
Source: https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/?order_by=desc