The US Space Force has temporarily halted the use of web-based generative artificial intelligence (AI) tools, such as ChatGPT, due to data security concerns. A memo sent in September to Guardians (the Space Force workforce) states that personnel are prohibited from using these AI tools on government computers until formal approval is granted by the Chief Technology and Innovation Office.
Reports also suggest that the temporary ban was prompted by data aggregation risks.
Space Force’s Chief Technology and Innovation Officer Lisa Costa reportedly wrote in the memo, “Generative AI will undoubtedly revolutionize our workforce and enhance Guardians’ ability to operate at speed, but the concerns over cybersecurity and data handling are too great to ignore.”
According to available sources, Costa’s memo also mentions the formation of a generative AI task force with other Pentagon offices. This task force aims to explore responsible and strategic ways to use the technology. Further guidance on Space Force’s use of generative AI is expected in the coming month.
Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life.
Despite the potential benefits of using AI in cybersecurity, its use also carries several challenges and risks. Since the public launch of ChatGPT, built on the GPT-3.5 large language model (LLM), in November 2022, researchers have been actively investigating the potentially negative aspects of generative AI.
These concerns include data breaches and privacy risks. In light of these risk factors, many institutions and companies are now planning appropriate regulation to reduce their dependency on AI.