Meta, the American company behind artificial intelligence models like Llama, has revealed that Chinese researchers linked to the People’s Liberation Army (PLA) have used its technology without authorization to develop an AI system aimed at military applications.
This case, which involves the use of a Meta Llama model in a defense context, has raised concerns in both the business and national security spheres of the United States.
What is ChatBIT?
The project, known as “ChatBIT”, was developed by researchers from various Chinese institutions, including some directly linked to the PLA. According to an academic paper reviewed in June 2023, six Chinese researchers from three institutions, including the Academy of Military Sciences (AMS) of the PLA, used an early version of Meta’s Llama 13B model. They modified the model to meet the requirements of intelligence gathering and decision making in the military field.
The research details that ChatBIT was designed to improve the collection and analysis of military information. The researchers explained that they optimized the model for dialogue and question-answering tasks, focusing on delivering accurate and reliable information to support operational decision making in combat situations.
In addition, the results indicated that ChatBIT outperformed other artificial intelligence models in military-domain tasks, reaching approximately 90% of the capability of OpenAI’s ChatGPT-4.
Meta responds to misuse controversy
Meta has historically defended open access to its AI models in an attempt to foster innovation and global technological advancement. However, it imposes restrictions on the use of these models, explicitly prohibiting their application in areas such as military operations, nuclear defense, espionage, and other sensitive domains subject to export controls by the United States. Upon discovering the use of the Llama model in Chinese military applications, Meta emphasized that this activity is “unauthorized and contrary to its acceptable use policy.”
Molly Montgomery, director of public policy at Meta, stressed that any use of its models by the People’s Liberation Army is contrary to its terms and conditions.
The company argues that while it promotes open access, it is essential that the US government implement regulations limiting the use of advanced technologies in sectors where security risks exist.
US government reaction to China’s advances in AI
Faced with growing competition in the field of artificial intelligence, the United States government has intensified its efforts to monitor the advancement of AI technologies in countries considered competitors.
In October 2023, President Joe Biden signed an executive order aimed at managing the development of AI in the country, warning of the risks associated with uncontrolled innovation.
The Pentagon has also expressed concerns about the use of open-source AI models by other nations and has confirmed that it will continue to monitor the capabilities of its competitors. John Supple, a spokesman for the Department of Defense, acknowledged that open-source models have benefits but also pose certain risks to national security.
Limitations of the open access policy in AI
While Meta has adopted an open access approach to drive innovation, the use of its models in military applications raises serious questions. Because these are open-source models, Meta lacks effective mechanisms to prevent them from being used in prohibited contexts, as the ChatBIT case shows. Accordingly, some critics claim that allowing global access to these technologies puts national security interests at risk.
William Hannas, an analyst at the Center for Security and Emerging Technology at Georgetown University, has noted that international collaboration between scientists from the United States and China has facilitated Chinese researchers’ access to advanced technologies.
Hannas suggested that it is almost impossible to prevent China from accessing these “shared technological resources”, and highlighted that China’s national strategy is clearly aimed at becoming a world leader in AI by 2030.
The ChatBIT case highlights the complexity of regulating AI amid global competition. As Meta and other technology companies seek to foster open access to accelerate innovation, the risks associated with its misuse, especially in military applications, are becoming increasingly evident.