Staff in the home affairs department have been unable to recall the prompts they entered into ChatGPT during experiments with the AI chatbot, and documents indicate that no real-time records were kept. The department had previously stated that it was using the tool for “experimentation and learning purposes” in four divisions, with use being “coordinated and monitored.” However, records obtained under freedom of information law suggest that there were no contemporaneous records of all questions or prompts entered into ChatGPT or other tools during the tests. This lack of record-keeping raises concerns about security and the reliability of the responses provided.
Guardian Australia obtained a questionnaire that the department provided in response to a request for all prompts used between November 2022 and May 2023. The questionnaire asked staff members to recall the queries they had entered and for what purpose. Many respondents said they could not remember the prompts exactly or that it was too long ago to recall. One staff member said the prompts were generic and did not involve any details about departmental infrastructure or data. These responses cast doubt on the department's claim that ChatGPT use was being monitored, and raise further questions about why no record-keeping measures were in place.
David Shoebridge, a Greens senator, expressed concern about the lack of information being recorded. He argued that the absence of a system to record the prompts entered into ChatGPT raises further concerns about the department’s failure to establish clear safeguards and protections. A spokesperson for the Home Affairs department acknowledged that staff were not restricted from accessing ChatGPT for experimentation until mid-May 2023 but emphasized that unauthorized disclosure of official information was prohibited. The spokesperson also stated that access to ChatGPT from departmental systems remained suspended, with no exceptions granted to date.
No instances of ChatGPT being used for departmental decision making have been identified. Many of the prompts staff members recalled related to computer programming, such as debugging code errors and requesting scripts for work tasks. Some staff used ChatGPT to find and correct errors in scripts, but noted that its answers were not always accurate. In one division, the tool was used for business-related topics and for troubleshooting client-side and server-side scripts. In another, it was used for technical research, including questions about the UK government's supply chain security policies. In the refugee, humanitarian and settlement division, it was used to generate discussion questions for a briefing about a non-profit organization from another country. The fourth division where experiments took place was the data and economic analysis centre.
Security concerns surrounding chatbots built on large language models include the risk that sensitive information entered as prompts could be incorporated into the model's training data, potentially exposing it to other users in future responses. In May, the home affairs secretary, Michael Pezzullo, said the use of ChatGPT or Bard was "barred and suspended" and called for a whole-of-government approach to the technology. He emphasized the need for officers to be aware of the risks and for a centralized approach to ensure responsible use.
The absence of records of the prompts entered into ChatGPT during the home affairs department's experiments undermines its claim that the tool's use was monitored, and leaves open questions about security. While staff members described using the tool for a range of purposes, no instances of it informing departmental decision making have been identified. The potential for sensitive information to be exposed through prompts underscores the need for clear safeguards and a whole-of-government approach to AI chatbots.