Georgia radio host Mark Walters has filed a defamation lawsuit against OpenAI, the company behind the widely used generative AI chatbot ChatGPT. The lawsuit claims that the chatbot spread false information about Walters, accusing him of embezzling money. As reported by Bloomberg Law, it is the first defamation lawsuit filed against OpenAI.
According to the lawsuit, the misinformation began when Fred Riehl, editor-in-chief of the gun publication AmmoLand, asked ChatGPT for a summary of the Second Amendment Foundation v. Ferguson case as background for a story he was reporting. ChatGPT provided Riehl with a summary stating that Alan Gottlieb, founder of the Second Amendment Foundation (SAF), had accused Walters of “defrauding and embezzling funds from the SAF.”
Furthermore, the chatbot claimed that the complaint in the case alleged that Walters “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership” while serving as the SAF’s CFO and treasurer. Walters’s lawsuit, however, asserts that every fact in that summary is false.
According to the lawsuit, Walters was never a party to the case, was never accused of defrauding or embezzling funds from the SAF, and never served as its treasurer or CFO. He is seeking general and punitive damages from OpenAI, as well as reimbursement of the expenses incurred in bringing the suit.
Generative AI models such as ChatGPT are known to produce errors, or “hallucinations,” and they generally carry prominently displayed disclaimers acknowledging the problem. But what recourse do you have if, despite those warnings, a chatbot spreads misinformation about you? The case raises questions about who should be held liable, and whether a website’s disclaimers about hallucinations are enough to shield it from liability even when someone is harmed.
The outcome of the case could set an important precedent in the emerging field of generative AI. OpenAI has been at the forefront of developing AI models and promoting responsible AI practices, and in a recent blog post the company announced a new training approach intended to improve its models’ reasoning and reduce hallucinations, a step toward making AI models more reliable and trustworthy.
As AI models become more prevalent, clear guidelines and regulations will be needed to ensure they are used ethically and responsibly. OpenAI has called for a collaborative effort to tackle AI regulation, and the lawsuit underscores the need for accountability and transparency in how AI models are developed and deployed.
In conclusion, Mark Walters’s defamation lawsuit against OpenAI raises important questions about liability for AI-generated falsehoods, and its outcome may help establish a standard for accountability in the fast-growing field of generative AI.