Alibaba’s AI coding tool raises security concerns in the West

Alibaba has released a new AI coding model called Qwen3-Coder, a large open-source model designed to handle complex software engineering tasks. The tool is part of Alibaba's Qwen3 family and is promoted as the company's most advanced coding agent to date.
The model uses a mixture-of-experts (MoE) architecture, activating 35 billion parameters out of a total of 480 billion, and supports a context window of up to 256,000 tokens, which the company says can be extended to one million using extrapolation techniques. Alibaba claims Qwen3-Coder outperforms other open models on agentic coding benchmarks, including versions from Moonshot AI and DeepSeek.
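For developers weighing up the model, access is typically through the open weights published on Hugging Face. The snippet below is a minimal sketch using the transformers library; the checkpoint name Qwen/Qwen3-Coder-480B-A35B-Instruct is an assumption and should be checked against the official listing, and a model this size needs multi-GPU hardware to run.

```python
# Minimal sketch: loading an open-weights Qwen3-Coder checkpoint with the
# Hugging Face transformers library. The model identifier below is an
# assumption; check the official Qwen listing for the exact name. A
# 480B-parameter MoE model needs multi-GPU hardware, so this is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that parses an ISO 8601 date string."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```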
But not everyone sees this as good news. Jurgita Lapienytė, editor-in-chief at Cybernews, warns that Qwen3-Coder may be more than just a useful coding assistant: it could pose a real danger to global technology systems if Western developers adopt it widely.
A Trojan horse in open-source clothing?
Alibaba's messaging around Qwen3-Coder has focused on its technical strength, comparing it with top-tier tools from OpenAI and Anthropic. But while the benchmark scores and features have caught attention, Lapienytė argues they may distract from the real issue: security.
The concern is not simply that China is investing in AI; that is already well known. The deeper worry is the hidden risk of relying on software produced by AI systems that are difficult to examine or fully understand.
In Lapienytė's words, developers risk "sleepwalking" into a future where core systems are unknowingly built on vulnerable code. Tools such as Qwen3-Coder may make life easier, but they can also introduce hidden weaknesses that slip through unnoticed.
The danger is not hypothetical. Cybernews researchers recently reviewed AI adoption across major American companies and found that 327 of the S&P 500 publicly report using AI tools. In those companies alone, the researchers identified close to 1,000 AI-related vulnerabilities.
Adding yet another AI model, particularly one developed under China's strict national security laws, could introduce a further layer of risk that is difficult to control.
When the code becomes the backdoor
Today's developers lean heavily on AI tools to write code, fix bugs, and shape how applications are built. These systems are fast, useful, and improving every day.
But what if those same systems were trained to inject flaws? Not obvious bugs, but small, hard-to-spot problems that never trigger a warning. A vulnerability that looks like an unremarkable design decision might go undiscovered for years.
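To make the concern concrete, here is a purely hypothetical illustration, not taken from Qwen3-Coder or any real model's output, of how a vulnerability can read like an ordinary design choice:

```python
# Hypothetical illustration only: flaws that read like ordinary design choices.
# This is NOT output from Qwen3-Coder or any specific model.
import random
import string


def check_api_token(supplied: str, expected: str) -> bool:
    # Looks reasonable, but '==' short-circuits on the first mismatched byte,
    # leaking timing information that can help an attacker recover the token.
    # Safer: hmac.compare_digest(supplied, expected)
    return supplied == expected


def new_session_id(length: int = 16) -> str:
    # Looks like a tidy helper, but random.choices() is not cryptographically
    # secure, so session IDs become predictable. A reviewer could easily read
    # this as a style preference rather than a vulnerability.
    # Safer: secrets.token_urlsafe()
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))
```

Neither function crashes, fails a test, or looks out of place in a code review, which is exactly why this class of weakness is so hard to catch.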
This is how supply chain attacks begin. Past examples, such as the SolarWinds incident, show how long-term infiltration can be carried out quietly and patiently. With enough access and context, an AI model could learn to seed similar problems, especially when it is generating millions of lines of code.
And this is not just theory. Under China's National Intelligence Law, companies such as Alibaba must cooperate with government requests, including those involving data and AI models. That shifts the conversation from technical performance to national security.
What happens to your code?
Another major issue is data exposure. When developers use tools such as Qwen3-Coder to write or debug code, every part of that interaction can reveal sensitive information.
That may include proprietary algorithms, security logic, or infrastructure design: exactly the kind of detail that could be useful to a foreign state.
Even though the model is open source, there is still plenty users cannot see. The backend infrastructure, telemetry, and usage-tracking methods may not be transparent, making it hard to know where data goes or what the model may retain over time.
Autonomy without oversight
Alibaba has also emphasised agentic AI: models that can act far more independently than standard assistants. These tools do not just suggest lines of code; they can take on entire tasks, work from minimal input, and make decisions on their own.
That may sound efficient, but it also raises red flags. In the wrong hands, a fully autonomous coding agent that can scan entire codebases and make sweeping changes becomes dangerous.
Imagine an agent that can map a company's defences and design attacks to exploit them. The same skill set that helps developers move faster can be repurposed to help attackers move faster too.
Regulation still isn't ready
Despite these risks, current regulation does not address tools like Qwen3-Coder in any meaningful way. The US government has spent years debating the data privacy concerns around apps such as TikTok, yet there is little comparable oversight of foreign AI coding tools.
Bodies such as the Committee on Foreign Investment in the United States (CFIUS) review corporate acquisitions, but there is no equivalent process for vetting AI models that could pose a national security risk.
President Biden's executive order on AI focuses mainly on domestic models and general safety practices, leaving a gap around imported tools that could end up in sensitive environments such as healthcare, finance, or national infrastructure.
AI tools capable of writing or modifying software should be treated with the same caution as any other software supply chain risk. That means setting clear guidelines for where and how they can be used.
What should happen next?
To reduce the risk, organisations that handle sensitive systems should pause before integrating Qwen3-Coder, or any foreign-developed agentic AI, into their workflows. If you wouldn't invite someone you don't trust to look at your source code, why let their AI rewrite it?
Security tooling also needs to catch up. Static analysis may not detect sophisticated backdoors or subtle logic flaws crafted by AI. The industry needs tools designed specifically to flag and test AI-generated code for suspicious patterns.
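As a rough illustration of where such tooling could start, the sketch below is a hypothetical pre-commit-style check that flags a handful of well-known risky constructs in changed files; real detection would need far deeper semantic and data-flow analysis than simple pattern matching.

```python
# Hypothetical sketch of a pattern check for AI-generated code. Real tooling
# would need semantic and data-flow analysis; this only flags a few well-known
# risky constructs as a starting point.
import re
import sys
from pathlib import Path

SUSPICIOUS_PATTERNS = {
    r"\beval\(|\bexec\(": "dynamic code execution",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"shell\s*=\s*True": "shell command execution",
    r"pickle\.loads?\(": "unsafe deserialisation",
    r"random\.(random|choices|randint)\(": "non-cryptographic randomness",
}

def scan_file(path: Path) -> list[str]:
    """Return findings for one file, formatted as path:line: reason: snippet."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, reason in SUSPICIOUS_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{path}:{lineno}: {reason}: {line.strip()}")
    return findings

if __name__ == "__main__":
    results = [finding for arg in sys.argv[1:] for finding in scan_file(Path(arg))]
    print("\n".join(results) or "No suspicious patterns found.")
    sys.exit(1 if results else 0)
```

In practice, a check like this would run on every AI-generated pull request alongside conventional static analysis and human review, rather than replacing either.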
Finally, developers, technology leaders, and regulators must understand that AI-generated code is not neutral. These systems carry power, both as useful tools and as potential threats, and the same capabilities that make them useful can make them dangerous.
Lapienytė has called Qwen3-Coder a "potential Trojan horse", and the metaphor fits. This is not only about productivity; it is about who you let inside the gates.
Not everyone agrees on what matters
Wang Jian, founder of Alibaba Cloud, sees things differently. In an interview with Bloomberg, he said innovation is not about hiring the most expensive talent but about choosing people who can build the unknown. He criticised Silicon Valley's approach to AI hiring, where technology giants now compete for star researchers the way sports teams bid for athletes.
"The only thing you need to do is get the right person," Wang said. "Not really the expensive person."
He also believes China's AI race is healthy rather than hostile. According to Wang, companies take turns pulling ahead, which helps the whole ecosystem grow faster.
"You can have very fast iteration of the technology because of this competition," he said. "I don't think it's brutal, but I think it's very healthy."
Still, healthy competition and open source do not guarantee trust. Western developers need to think carefully about the tools they use, and about who built them.
The bottom line
Qwen3-Coder may offer strong performance and open access, but using it comes with risks that go beyond benchmarks and coding speed. At a time when AI tools are shaping critical systems, the question worth asking is not just what these tools can do, but who ultimately benefits.
(Photo by Rahman)
See also: The new Qwen AI sets open-source records
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.