
Why the world is looking to ditch US AI models

As a result, some policymakers and business leaders, in Europe in particular, are reconsidering their dependence on US-based technology and asking whether they can quickly spin up better alternatives of their own. This is especially true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.”

Social media content moderation systems, which already rely on automation and are also experimenting with large language models to flag problematic posts, have failed to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will only get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she told me. “It’s so circular, and the errors just keep repeating and amplifying.”

Part of the problem is that these systems are trained largely on data from the English-speaking world (and American English at that), and as a result they perform poorly with local languages and context.

Even multilingual language models, which are meant to handle several languages at once, still perform badly with non-Western languages. For instance, one evaluation of ChatGPT’s responses to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.
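The study itself relied on human reviewers to grade the answers, and its exact prompts and rubric are not reproduced here. As a rough illustration only, the sketch below shows how one might collect comparable responses to the same health-care question across languages for side-by-side review; the OpenAI Python client, the model name, and the example queries are all assumptions, not details from the evaluation described above.

```python
# A minimal sketch of a cross-language comparison, not the cited study's method.
# Model name and queries are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# The same health-care question, asked in several languages.
queries = {
    "English": "What should I do if I think I have a fever?",
    "Spanish": "¿Qué debo hacer si creo que tengo fiebre?",
    "Hindi": "अगर मुझे लगता है कि मुझे बुखार है तो मुझे क्या करना चाहिए?",
    "Chinese": "如果我觉得自己发烧了，我该怎么办？",
}

responses = {}
for language, question in queries.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    responses[language] = reply.choices[0].message.content

# In the evaluation described above, answers were graded by human reviewers;
# here they are simply printed for side-by-side inspection.
for language, answer in responses.items():
    print(f"--- {language} ---\n{answer}\n")
```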

For many at RightsCon, this validates their calls for more community-driven approaches to AI, both within and beyond the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific languages and cultural contexts. Such systems could be trained to recognize slang and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (slurs that the group once targeted by them has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a RightsCon panel and talked about its new content moderation API focused on Indian vernacular languages.
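Shhor AI’s actual system is not described in detail here. Purely as a sketch of the general approach the paragraph above describes, the snippet below fine-tunes a small multilingual encoder to flag problematic code-mixed posts; the base model (xlm-roberta-base), the label scheme, and the tiny placeholder examples are all assumptions for illustration, and real training data would be community-curated and far larger.

```python
# A minimal sketch: fine-tuning a small multilingual classifier on code-mixed text.
# Not any specific company's system; model, labels, and examples are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-base"  # small multilingual encoder; a community model could be swapped in

# Tiny placeholder dataset of code-mixed (Hindi + English) posts.
examples = {
    "text": [
        "yeh post bilkul theek hai, no problem",  # benign (placeholder)
        "tum log yahan se niklo, we hate you",    # hostile (placeholder)
    ],
    "label": [0, 1],  # 0 = acceptable, 1 = flag for human review
}
dataset = Dataset.from_dict(examples)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def tokenize(batch):
    # Tokenize mixed-script, mixed-language text with the multilingual tokenizer.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="checkpoints", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=dataset,
)
trainer.train()
```

The point of a setup like this is that the labels and examples come from the affected language community rather than from English-centric moderation rules, which is what lets it catch slang, mixed scripts, and reclaimed terms that larger general-purpose systems miss.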

A number of similar solutions have been in development for years, and we have covered several of them, including a volunteer-led effort from Mozilla to collect training data in languages other than English and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we included small language models on our 2025 list of 10 Breakthrough Technologies.

Still, this moment feels a little different. The second Trump administration, and the way it is shaping the actions and policies of American tech companies, is obviously a major factor. But there are others at play.
