Meta will train AI models using EU user data

Meta has confirmed plans to use content shared by its adult users in the European Union (EU) to train its artificial intelligence models.

This announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region's diverse population.

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, Meta users in the European Union (across Facebook, Instagram, WhatsApp, and Messenger) will receive notifications explaining how their data will be used. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honour all objection forms we’ve already received, as well as newly submitted ones,” Meta explained.

Meta has explicitly clarified that certain types of data remain off-limits for AI training purposes.

The company says it will not use “people’s private messages with friends and family” to train its AI models. Furthermore, public data from accounts belonging to users under the age of 18 in the EU will not be included in training datasets.

Meta wants to create AI tools designed for European Union users

Meta positions this initiative as a necessary step toward creating AI tools designed for EU users. The company launched its AI chatbot functions across its messaging apps in Europe last month, and frames the use of this data as the next stage in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained.

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humour and sarcasm on our products.”

This is increasingly relevant as AI models evolve with multimodal capabilities spanning text, voice, video, and imagery.

Meta also situates its EU actions within the broader industry landscape, noting that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe.”

“We’re following the example set by others, including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta also claimed that its approach surpasses others in openness, stating: “We’re proud that our approach is more transparent than many of our industry counterparts.”

Regarding regulatory compliance, Meta referenced its previous engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” Meta wrote.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI raises significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users sharing personal anecdotes, opinions, or creative works with their perceived community may not have anticipated the large-scale, automated analysis and repurposing of that content by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system, as opposed to “opt-in”, remains debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than through explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect, and sometimes amplify, societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation – issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection and filtering, and their specific impact on model behaviour, often remain opaque. Truly meaningful transparency would involve deeper insight into how specific data influences AI outputs and what safeguards exist to prevent misuse or unintended consequences.

Meta’s approach in the EU underscores the tremendous value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events, including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

2025-04-15 16:32:00
