Apple Joins the List of Companies Restricting Usage of ChatGPT and Other AI Tools
Generative AI models such as ChatGPT have been making headlines recently due to concerns over data privacy. These models use interactions with users to refine their algorithms and improve their performance. Because even confidential information included in prompts could be used to further train a model, there are fears that private information could be unintentionally exposed.
OpenAI, the company behind ChatGPT, has attempted to address these concerns by releasing a feature that lets users turn off their chat history. This gives users more control over their own data, allowing them to choose which chats, if any, can be used to train OpenAI's models.
Despite these efforts, many companies have restricted usage of ChatGPT by their employees. Verizon, JPMorgan Chase, and Amazon are just a few examples. Now, Apple has joined the list.
According to documents reviewed by the Wall Street Journal, ChatGPT and other external AI tools, such as Microsoft-owned GitHub Copilot, have been restricted for some Apple employees. The worry stems from the potential for private information to be unintentionally released when using these models, something that has already happened.
The most recent example of such a data leak occurred on March 20th, when an outage on the ChatGPT platform allowed some users to see titles from other users' chat histories. The incident prompted Italy to temporarily ban ChatGPT.
Despite these concerns, OpenAI has continued to roll out new features for ChatGPT. In fact, the company recently released a free ChatGPT app for iPhones. The question remains, however, whether the app lives up to the hype.
A recent poll found that most Americans believe AI threatens humanity. This sentiment is likely fueled by concerns over data privacy, as well as fears over job displacement and other potential negative consequences of AI.
As generative AI models continue to improve and become more widespread, it is clear that data privacy will remain a major challenge. Companies like OpenAI will need to keep addressing these concerns and give users more control over their own data in order to build trust in these powerful tools.