Meta Unveils ‘Purple Llama’ Aimed at Safe GenAI Development

US tech giant Meta Platforms Inc. has introduced Purple Llama, an umbrella project for creating secure artificial intelligence (AI) systems using generative AI (genAI) tools.

The Facebook parent said the toolkit was built to address AI challenges that developers are unable to tackle on their own, offering them equal opportunity and allowing the company to build a ‘center of mass for open trust and safety.’

Meta will join forces with AI application makers, including US cloud computing service providers Amazon Web Services (AWS) Inc. and Google Cloud Platform for its new initiative.

Major chip manufacturers Intel Corp., Advanced Micro Devices (AMD) Inc., and Nvidia Corp. are also taking part in assessing the capabilities and safety of AI models.

The California-based firm started Purple Llama with the reveal of CyberSec Eval, its free and open suite of cybersecurity testing benchmarks for large language models (LLMs). The package will help developers gauge the model’s likelihood of producing insecure code or aid in conducting cyberattacks.

Meta is also announcing Llama Guard, which will serve as a text safety classifier. The LLM is designed to identify language that could be inappropriate, harmful, or indicate illegal activities.

The company plans to gradually release all the open-source tools and evaluations the AI development community needs to create genAI safely and responsibly.

Meta’s Purple Teaming Approach to GenAI Security

Purple Llama, according to Meta, was designed as a two-pronged strategy for security and safety, examining an AI’s inputs and outputs.

The company stated that to actually minimize the risks of genAI, developers would need the attack through ‘red teaming’ and defense in the form of ‘blue teaming.’

In conventional cybersecurity, red teaming involves developers or internal testers deliberately executing different attacks on an AI to check for possible errors, flaws, or unexpected outputs or interactions.

On the other hand, blue teaming is a tactic wherein experts focus on responding or averting such attacks to identify methods for fighting off the real threats in AI models.

Therefore, Meta’s purple teaming strategy comprising the red and blue teaming is aimed at creating a joint approach to assessing and curbing the potential risk of such technology.

Sending
User Review
0 (0 votes)

RELATED POSTS

Leave a Reply