The UK government’s use of artificial intelligence (AI) tools, specifically a proprietary chatbot called Redbox, has raised concerns about transparency and accuracy in decision-making processes. Thousands of civil servants, including those working directly with Prime Minister Keir Starmer, have been using this AI tool to carry out their work. However, the government has been reluctant to disclose the specifics of how Redbox is being utilized, leading to questions about the quality of information being used in government operations.
New Scientist obtained chat logs under freedom of information (FOI) legislation, revealing that Redbox, a generative AI tool developed in-house, is being used by government staff to interrogate documents and produce first drafts of briefings. The tool appears to cut dramatically the time needed to synthesize information, with one civil servant claiming to have processed 50 documents in a matter of seconds.
Despite requests for transparency, most government departments either denied using Redbox or refused to provide transcripts of their interactions with it, dismissing the FOI requests as “vexatious.” Only the Cabinet Office and the Department for Business and Trade provided limited information about their use of Redbox, with the former stating that 3000 employees had engaged in 30,000 chats with the tool.
Concerns about the use of generative AI tools such as Redbox stem from issues of bias and accuracy. The black-box nature of these tools makes it difficult to assess how they arrive at particular outputs, raising questions about the reliability of the information they generate. Experts such as Catherine Flick at the University of Staffordshire have voiced concern about the lack of transparency surrounding the government’s use of AI tools, emphasizing the importance of being able to understand how decisions are reached.
The Treasury’s response to the FOI request further highlights the lack of transparency: the department stated that staff do not have access to Redbox and that prompt history is not retained for the GPT tools available internally. Without such records, it is difficult to replicate decision-making processes or establish the basis for government decisions.
Overall, the government’s use of AI tools like Redbox underscores the need for greater transparency and accountability in decision-making. As the UK aims to position itself as a world leader in artificial intelligence, the responsible and ethical use of these technologies will be paramount to maintaining public trust. It is striking that the government claims it cannot retrieve prompts entered into its internal GPT systems. While courts have ruled that public bodies are not required to keep records prior to archiving, experts argue that maintaining such records is crucial for good information governance, especially when they are used to develop or inform policy.
Data protection expert Tim Turner acknowledges that the Treasury is within its rights not to retain AI prompts under Freedom of Information (FOI) laws. He explains, “I think that unless there’s a specific legal or civil service rule about the nature of the data, they can do this.”
Despite this legal position, there is growing concern about the lack of transparency and accountability in the government’s handling of data. By failing to retain records of AI prompts, the government risks losing insight into how important policies were shaped, raising questions about its commitment to good governance practices.
Data plays a crucial role in shaping public policy and ensuring accountability. Without proper record-keeping, information can be lost or misrepresented, leaving gaps in the public’s understanding of how decisions were made. Public bodies should therefore prioritize information governance and retain records that bear on policy development.
The government may be within its rights not to keep AI prompts under FOI law, but the broader implications of that choice matter. Transparency and accountability are essential pillars of a functioning democracy, and retaining records of how AI informs policy is a basic step towards upholding them. Until public bodies treat such records as part of good governance practice, questions about the basis of their decisions, and about the policies shaped by these tools, will remain.