OpenAI, Anthropic, and Google Urge Action as US AI Lead Diminishes

Leading American artificial intelligence companies OpenAI, Anthropic, and Google have warned the federal government that the United States' technological lead in AI is "not wide and is narrowing" as Chinese models such as DeepSeek R1 demonstrate increasing capabilities, according to documents submitted to the US government in response to a request for information on the development of an AI Action Plan.
The March 2025 submissions highlight urgent concerns about national security risks, economic competitiveness, and the need for strategic regulatory frameworks to preserve US leadership in AI development amid growing global competition and China's state-backed advances in the field. Anthropic and Google filed their responses on March 6, 2025, while OpenAI's followed on March 13, 2025.
China and the DeepSeek R1 Challenge
The emergence of China's DeepSeek R1 model has caused significant concern among major US developers, who view it not merely as a challenge to American technological superiority but as compelling evidence that the technological gap is closing quickly.
OpenAI explicitly warns that "DeepSeek demonstrates that our lead is not wide and is narrowing," describing the model as "simultaneously state-subsidized, state-controlled, and freely available," a combination it considers particularly threatening to American interests and to global AI development.
According to OpenAI's analysis, DeepSeek poses risks comparable to those associated with the Chinese telecommunications giant Huawei. "As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm," OpenAI explained in its submission.
The company also raised concerns about data privacy and security, noting that Chinese regulations could require DeepSeek to share user data with the government. This would allow the Chinese Communist Party to develop more advanced AI systems aligned with state interests while compromising individual privacy.
Anthropic's evaluation focuses heavily on biosecurity implications. Its assessment found that DeepSeek R1 "complied with answering most biological weaponization questions, even when formulated with a clearly malicious intent." This willingness to provide potentially dangerous information stands in stark contrast to the safety measures implemented by leading US models.
"While America maintains a lead in AI today, DeepSeek demonstrates that our lead is not wide and is narrowing," Anthropic wrote in its own submission, reinforcing the urgent tone of the warnings.
Both companies frame the competition in ideological terms, with OpenAI describing a contest between American-led "democratic AI" and Chinese "authoritarian AI" built by the CCP. They suggest that DeepSeek's reported willingness to generate instructions for "illicit and harmful activities such as identity fraud and intellectual property theft" reflects fundamentally different ethical approaches to AI development in the two countries.
The emergence of DeepSeek R1 is unquestionably a significant milestone in the global AI race: it demonstrates China's growing capabilities despite US export controls on advanced semiconductors and underscores the urgency of coordinated government action to maintain American leadership in the field.
National Security Implications
The submissions from all three companies emphasize serious national security concerns arising from advanced AI models, though each approaches the risks from a different angle.
OpenAI's warnings focus heavily on the potential for CCP influence over Chinese AI models such as DeepSeek. The company stresses that Chinese regulations could compel DeepSeek to compromise critical infrastructure and sensitive applications and to share user data with the government. Such data sharing could fuel the development of more advanced AI systems aligned with Chinese state interests, creating both immediate privacy concerns and long-term security threats.
Anthropic's concerns center on the biosecurity risks posed by advanced AI capabilities, regardless of their country of origin. In a particularly striking disclosure, Anthropic revealed that "our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development." This candid admission underscores the dual-use nature of advanced AI systems and the need for robust safeguards.
Anthropic also identified what it describes as a "regulatory gap" in US chip restrictions concerning NVIDIA's H20 chips. While these chips meet the reduced performance thresholds for export to China, they "excel at text generation ('sampling'), a basic component of advanced reinforcement learning methodologies critical to current frontier model advancements." Anthropic urged "immediate regulatory action" to close this potential weakness in the existing export control regime.
Google, while acknowledging the risks of AI, argues for a more balanced approach to export controls. The company warns that current AI export rules "may undermine economic competitiveness goals… by imposing disproportionate burdens on US cloud service providers." Instead, Google recommends "balanced export controls that protect national security while enabling US exports and global business operations."
All three companies stress the need to strengthen the government's evaluation capabilities. Anthropic specifically calls for "building the federal government's capacity to test and evaluate powerful AI models for national security capabilities" in order to better understand potential misuse by adversaries. This would include preserving the AI Safety Institute, directing NIST to develop security evaluations, and assembling teams of interdisciplinary experts.
Comparison Table: OpenAI, Anthropic, and Google
| Focus Area | OpenAI | Anthropic | Google |
| --- | --- | --- | --- |
| Primary concern | Political and economic threats from state-backed AI | Biosecurity risks from advanced models | Maintaining innovation while balancing security |
| View of DeepSeek R1 | "State-subsidized, state-controlled, and freely available," with Huawei-like risks | Willing to answer "biological weaponization questions" even with malicious intent | Less focus on DeepSeek specifically; more on the broader competition |
| National security priority | CCP influence and data security risks | Biosecurity threats and chip export loopholes | Balanced export controls that do not burden US cloud providers |
| Regulatory approach | Voluntary partnership with the federal government; a single point of contact | Strengthened government testing capacity; tightened export controls | Pro-innovation federal framework; sector-specific governance |
| Infrastructure focus | Government adoption of AI tools | Energy expansion (50 GW by 2027) for AI development | Coordinated energy policy and permitting reform |
| Signature recommendation | Tiered export control framework promoting "democratic AI" | Immediate regulatory action on NVIDIA H20 chips exported to China | Industry access to openly available data for fair-use learning |
Economic Competitiveness Strategies
Infrastructure requirements, particularly energy needs, emerged as a decisive factor in maintaining US AI leadership. Anthropic warned that "by 2027, training a single frontier AI model will require networked computing clusters drawing approximately five gigawatts of power." It proposed an ambitious national target: building an additional 50 gigawatts of power dedicated specifically to the AI industry by 2027, alongside measures to streamline permitting and expedite transmission line approvals.
OpenAI again framed the race as an ideological contest between "democratic AI" and CCP-built "authoritarian AI." Its vision of democratic AI emphasizes "a free market promoting free and fair competition" and "freedom for developers and users to work with and direct our tools as they see fit," within appropriate safety guardrails.
All three companies offered detailed recommendations for preserving American leadership. Anthropic stressed the importance of "strengthening American economic competitiveness" and ensuring that "AI-driven economic benefits are widely shared across society." It called for "securing and expanding US energy supplies" as a prerequisite for keeping AI development within American borders, warning that energy constraints could push developers abroad.
Google called for decisive action to "supercharge" US AI development, focusing on three key areas: investment in AI, accelerated government adoption of AI, and the promotion of pro-innovation approaches internationally. The company stressed the need for "coordinated federal, state, local, and industry action on policies like transmission and permitting reform to address surging energy needs," along with "balanced export controls" and "continued funding for foundational AI research and development."
Google's submission particularly highlighted the need for a "pro-innovation federal framework for AI" that would preempt a patchwork of state regulations while ensuring access to openly available data for training models. It advocated focused, sector-specific, and risk-based AI governance and standards rather than broad regulation.
Regulatory Recommendations
A unified federal approach to AI regulation emerged as a consistent theme across all the submissions. OpenAI warned against "regulatory fragmentation created by individual US states" and proposed "a holistic approach that enables voluntary partnership between the federal government and the private sector." Its framework envisions oversight by the Department of Commerce, potentially through a reimagined US AI Safety Institute, providing a single point of contact through which AI companies can engage with the government on security risks.
On export controls, OpenAI called for a tiered framework designed to promote the adoption of American AI in countries aligned with democratic values while restricting access by China and its allies. Anthropic similarly called for hardening export controls to widen the United States' AI advantage and for dramatically improving the security of US frontier labs through closer cooperation with the intelligence community.
Copyright and intellectual property considerations featured prominently in the recommendations from OpenAI and Google. OpenAI emphasized the importance of preserving fair use principles so that AI models can learn from copyrighted material without undermining the commercial value of existing works, warning that overly restrictive copyright rules could disadvantage US firms relative to their Chinese competitors. Google echoed this view, calling for "balanced copyright rules, such as fair use and text-and-data mining exceptions," which it described as "critical to enabling AI systems to learn from prior knowledge and publicly available data."
All three companies emphasized the need for accelerated government adoption of AI technologies. OpenAI called for an "ambitious government adoption strategy" to modernize federal processes and deploy AI tools safely, specifically recommending the removal of barriers to adoption such as outdated accreditation processes like FedRAMP, restrictive testing authorities, and inflexible procurement pathways. Anthropic similarly called for streamlined, rapid AI procurement across the federal government to modernize operations and strengthen national security.
Google suggested streamlining outdated accreditation, authorization, and procurement practices within government to accelerate AI adoption. It emphasized the importance of effective public procurement rules and improved interoperability among government cloud solutions to facilitate innovation.
Taken together, the submissions from these leading AI companies deliver a clear message: preserving American leadership in artificial intelligence will require coordinated federal action on multiple fronts, from infrastructure development and regulatory frameworks to national security protections and government modernization, especially as competition from China intensifies.