Alphabet, the parent company of Google, has warned its employees to be cautious when sharing confidential information with AI chatbots, including its own Google Bard. The reason is that information shared with these bots is stored by the companies behind the technology. Although this is not a new revelation, it is a timely reminder that sharing private or confidential information anywhere online is inadvisable. AI chatbots are built on large language models (LLMs) that are continually trained, and user conversations can be fed back into that training to improve the technology. The companies behind the chatbots also store this data, which could be visible to their employees.
Google Bard is an AI chatbot developed by Google. When users interact with Bard, Google collects conversations, location, feedback, and usage information. This data is used to provide, improve, and develop Google products, services, and machine-learning technologies. Google selects a subset of conversations as samples to be reviewed by trained reviewers and retained for up to three years. Google says, however, that this data does not include information that could be used to identify the individuals involved in the Bard conversations.
OpenAI likewise reviews ChatGPT conversations to help improve its systems and to ensure the content complies with its policies and safety requirements. As AI chatbots continue to evolve, it is essential to avoid sharing sensitive or confidential data with them.
Google has been at the forefront of AI development for years, from acquiring and later selling Boston Dynamics to notching scientific achievements through DeepMind. Recently, the company made AI the main event at this year’s Google I/O, following the launch of its AI chatbot, Google Bard. Google has been investing heavily in AI technology, so it is no surprise that it is now advising its employees to be careful about what they say to these bots.
AI chatbots have become increasingly popular in recent years, with companies using them to improve customer service, automate repetitive tasks, and provide personalized recommendations. Their use has also raised concerns about privacy and security. Because these bots are based on large language models that are constantly learning and evolving, data shared with them could potentially be used for purposes other than those intended.
In conclusion, Alphabet’s warning to its employees is a timely reminder that sharing confidential information with AI chatbots is unwise. The companies behind these chatbots must, for their part, ensure that the data is stored securely and is not unnecessarily visible to their employees. As AI chatbots become more prevalent in our daily lives, it is important to remain vigilant about the information we share with them.