Space Force Bans Guardians From Using AI

By Kevin C. Neece | Published


Futurism reports that Space Force personnel, known as Guardians, are now forbidden to use AI. As the branch of the United States military focused on defending Earth from the vantage point of outer space, many of the threats it is designed to protect us against are surprisingly terrestrial.

For example, spy satellites and nuclear missiles that might be launched to very high altitudes fall under the particular purview of this branch of the military.


Still, it’s hard to get the idea out of our heads that the Space Force is primarily designed to protect us from an alien attack. But the latest threat the newest branch of our military has identified is generative AI. ChatGPT and similar tools are seen by the Space Force leadership as a potential security risk, in part because they are web-based.

Lisa Costa, chief technology and innovation officer for the Space Force, said in an internal memo obtained by Bloomberg that, while AI offers advantages in increasing operational speed, it must be handled carefully because of the potential risks it presents.

For now, the caution she prescribes takes the form of an outright ban on the technology's use, driven primarily by concerns about data handling and cybersecurity.

Space Force Guardians of the Galaxy

This new Space Force rule is rooted in very valid concerns, as ChatGPT and similar AI tools aggressively mine the data put into them, storing that information to further train their pattern recognition systems.

So, were any piece of classified information entered into ChatGPT or other AI software, that information would essentially become part of the software. OpenAI, the company behind ChatGPT, would have that information, and it would be beyond the control of the Space Force.

But branches of the United States military are not alone in moving to ban the use of AI within their ranks. Private companies have also disallowed use of the emerging technology on their computer systems, especially after Samsung employees leaked sensitive internal information by entering it into a chatbot. Verizon, Apple, and others have put measures in place to keep AI from intruding into their corporate networks.

Not all are happy with the Space Force decision around artificial intelligence

But Nicolas Chaillan, the Defense Department’s former chief software officer, is not happy about the Space Force’s decision to ban AI, at least for the time being. That might have something to do with his role as founder and CEO of Ask Sage, a chatbot company.

He believes his software is well within the bounds set by the Space Force's security requirements, and says that the Defense Department accounts for some 10,000 of his customers.

Despite Chaillan’s disapproval, however, it does seem that the Space Force is embracing a prudent policy of caution regarding the emerging technology of generative AI. While the technology could be helpful in writing reports or streamlining workflows, the current form this type of software takes does seem to pose possible security risks, if not guaranteed ones.

If we are already seeing problems emerge in the private sector, it logically follows that governmental bodies like branches of the US military could also find themselves at risk.