AI

Meta beefs up AI security with new Llama tools

If you are building with artificial intelligence, or trying to defend the lowest delicious side of technology, you have just dropped Meta tools new Llama safety tools.

The improved safety tools for Llama AI models reach the side of the new Meta resources designed to help cyber security teams harness the artificial intelligence of defense. All of this is part of their defense to make development and use of artificial intelligence safer for all concerned.

The developers who work with the Llama of Models are now some of the tools that have been promoted to play. You can get these latest Llama protection tools directly from the Meta Llama protection page, or find it where many developers live: Huging Face and Github.

First is Llama Guard 4. Think about it as a developed of the customized safety candidate in Meta for AI. The big news here is that it is now multimedia so that you can understand the safety rules and apply them not only to the text, but also on the images. This is very important because artificial intelligence applications become more visible. This new version is also baked in the new Meta Llama API, which is currently in a limited inspection.

Then there is llamafirewall. This is a new part of the Meta puzzle, designed to serve as a safety control center for artificial intelligence systems. It helps in managing the various safety models that work together and link other protection tools to Meta. Its function? To discover this type of risk that keeps the developers of artificial intelligence at night-things such as the intelligent “fast” injection attacks designed to deceive artificial intelligence, generate an evasive symbol, or a risk fraught behavior of additional components of artificial intelligence.

Meta also gave Llama Express. The main guard model 2 (86 meters) is now better in inhaling these annoying prison attempts and immediate injection. The most interesting thing, perhaps, is to enter the face guards 2 22m.

Presenter 2 22M is a much smaller version, not more. Meta believes it can reduce cumin and calculate costs by up to 75 % compared to the larger model, without sacrificing a lot of detection strength. For anyone who needs faster responses or works on more strict budgets, this is a welcome addition.

But dead not only focuses on artificial intelligence builders; They also look at Internet defenders on the front lines of digital security. They have heard invitations to get better tools than artificial intelligence to help fight electronic attacks, and they share some updates aimed at this.

The Cybersec Eval 4 Benchmark Suite collection has been updated. The Open Source Tools Group helps these institutions know the quality of artificial intelligence systems already in security tasks. This latest version includes a new tool:

  • EVAL CYBERSOC: This framework is designed with the help of cybersecurity experts, and this frame is specifically measures the extent of AI’s performance in a real environment for security operations (SOC). It is designed to give a clearer picture of the effectiveness of artificial intelligence in discovering and responding threats. The standard itself will come soon.
  • Autopatchbench: This standard tests the extent of the quality of Llama and AIS to find safety holes and fix them automatically in software instructions before the bad guys can exploit it.

To help get these types of tools in the hands of those who need them, Meta starts Defenders Llama. This seems to be about granting partner companies and developers to a mixture of artificial intelligence solutions-some open sources, some early access, some of which may tend towards various security challenges.

As part of this, Meta shares the artificial intelligence safety tool that they use internally: the sensitive DOC classification tool. It automatically slapped safety designations on the documents within the institution. Why? To stop sensitive information from getting out of the door, or to prevent it in a manner wrongly in the artificial intelligence system (as in rag sets) where it can be leaked.

They also address the problem of the fake sound created by artificial intelligence, which is increasingly used in fraud. The Llama -created audio detector and the Llama watermark detector are shared with partners to help them discover the sounds created by AI in potential hunting calls or fraud attempts. Companies like Zendesk, Bell Canada and AT & T are already lined up to merge them.

Finally, Meta gave a peek on something likely to be huge for user privacy: special processing. This is a new technology that it works on in WhatsApp. The idea is to allow Amnesty International to make useful things such as summarizing your unprecedented messages or helping you to formulate responses, but without Meta or WhatsApp can read the content of these messages.

Meta is completely open to the security side, even spreading their threat model and calling for security researchers to coro the holes in architecture before walking live. It is a sign that they know that they need to obtain a privacy side properly.

In general, it is a wide range of artificial intelligence security ads from Meta. They are clearly trying to put the dangerous muscles behind securing the artificial intelligence they build, while giving the broader technology community better tools to build safely and defend effectively.

See also: A height warns of the risk in the fraud operations that operate from artificial intelligence: Microsoft reveals $ 4 billion in frustrated fraud.

Do you want to learn more about artificial intelligence and large data from industry leaders? Check AI and Big Data Expo, which is held in Amsterdam, California, and London. The comprehensive event was identified with other leading events including the smart automation conference, Blockx, the digital transformation week, and the Cyber ​​Security & Cloud.

Explore the upcoming web events and seminars with which Techforge works here.

Don’t miss more hot News like this! Click here to discover the latest in AI news!

2025-04-30 13:35:00

Related Articles

Back to top button