OpenAI, the company behind the wildly popular ChatGPT AI model, is facing a new lawsuit over the data used to train its products. Filed on June 28, the class action lawsuit names OpenAI and its partner Microsoft as defendants and alleges that OpenAI used “stolen data” to develop ChatGPT (including the GPT-3.5 and GPT-4 models), DALL-E, and VALL-E. The plaintiffs claim that OpenAI collected data from “millions of unsuspecting consumers worldwide,” including children, without their consent.
According to the lawsuit, OpenAI is accused of harvesting vast amounts of personal data from the internet, including private conversations and medical information, without obtaining users’ permission. The plaintiffs argue that OpenAI’s chatbot can replicate human language only because of this allegedly stolen data. The lawsuit also enumerates the categories of private information that OpenAI allegedly collects, stores, tracks, and shares, including social media data, cookies, keystrokes, payment information, and more. It further claims that OpenAI collects data from applications integrated with GPT-4, such as Snapchat, Spotify, and Stripe, to gather image-related data, music preferences, and financial information, respectively.
The plaintiffs are demanding transparency from OpenAI about its data collection practices, asking the company to disclose what data it collects, where it comes from, and how it is used. They also seek compensation for the allegedly stolen data on behalf of themselves and other affected individuals. In addition, the lawsuit calls for OpenAI to give users an option to opt out of data collection and to cease the “illegal” scraping of internet data.
This is not the first legal action taken against OpenAI. Earlier this month, the company was sued for defamation over false information ChatGPT generated about an individual. These lawsuits highlight growing concerns about the use of AI and the need for companies to ensure ethical practices and user consent in their AI development and deployment.
As AI technology continues to advance, it is crucial for companies like OpenAI to prioritize human oversight and transparency to maintain the trust of users. The responsible development and use of AI can bring immense benefits, but it must be accompanied by safeguards and accountability to protect individuals’ privacy and prevent misuse.