A recent case in New York has highlighted the pitfalls of relying on artificial intelligence (AI) for legal research. The case involved a passenger who sued the airline Avianca after a metal food and drink trolley allegedly injured his knee during a flight from El Salvador to New York. The airline asked for the case to be dismissed on the grounds that the statute of limitations had expired, but the passenger’s lawyers argued that the lawsuit should continue, citing several previous court cases as precedents. It later emerged, however, that the cases cited did not exist: they had been fabricated by ChatGPT, the AI chatbot the lawyers had used to conduct their research.
The lawyer responsible for the brief, Steven A Schwartz of the firm Levidow, Levidow & Oberman, admitted in an affidavit that he had used the program to do his legal research but had been unaware that its output could be false. Schwartz, who has practised law in New York for three decades, had never used ChatGPT before and had even asked it to verify that the cases were real, to which it had replied “yes”.
The incident illustrates the current fascination with AI and the risks of over-relying on it. Many people have been taken in by “chatbots”: programs that work by making statistical predictions of the most likely word to append to the sentence they are composing. The situation is even more surreal elsewhere in the AI industry. A large number of tech luminaries recently signed a declaration calling for mitigating the risk of extinction from AI to be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Many of the signatories are researchers in machine learning, including employees of large tech companies. Some were even invited to the White House to share their fears about the dangers of AI with the president and vice-president, and one made a pitch to the US Senate calling for governments to intervene with regulation to mitigate the risks of increasingly powerful models.
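The idea that a chatbot is, at bottom, a statistical next-word predictor can be illustrated with a toy sketch. This is my own illustration, not how ChatGPT actually works: real systems use neural networks trained on vast corpora, whereas this example simply counts which word most often follows which in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real model would be trained
# on billions of words, but the principle is the same: predict the word
# most likely to follow what has been written so far.
corpus = "the court found the claim valid and the court dismissed the appeal".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → court
```

The crucial point the column makes follows directly: the predictor has no notion of truth, only of frequency, so a fluent continuation and a fabricated court citation are produced by exactly the same mechanism.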
The question that never seems to be asked, however, is why these individuals continue to build technology they believe might pose an existential threat to humanity. If it is so dangerous, why not stop and do something else? The answer is that they cannot stop, because they are all servants of machines even more powerful than any chatbot: the corporations for which they work. These superintelligent machines exist to achieve only one objective: the maximisation of shareholder value. If humanistic scruples get in the way of that objective, then so much the worse for humanity.
The ChatGPT episode is, in the end, a cautionary tale about over-reliance on AI. Whatever the technology’s benefits, it is only as reliable as the data it is fed, and there is no substitute for human judgment in verifying the accuracy of what it produces. The affair also underlines the need for greater transparency and accountability in the AI industry, particularly around the risks posed by increasingly powerful models. As the industry continues to grow, responsibility for the safe and ethical development of the technology rests with all stakeholders: developers, regulators, and users alike.