Meta has announced its Llama 4 family, launching with two cutting-edge multimodal models: Llama 4 Scout and Llama 4 Maverick.
These models represent Meta’s most advanced AI to date, setting new benchmarks for performance and capabilities within their class.
Llama 4 Scout
This 17-billion-active-parameter model uses a mixture-of-experts architecture with 16 experts. Key highlights include:
Industry-Leading Context Window: A 10-million-token context window allows the model to process significantly longer and more complex inputs.
Superior Performance: Llama 4 Scout demonstrably outperforms leading models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a wide array of established industry benchmarks.
Llama 4 Maverick
Also running on 17 billion active parameters, Llama 4 Maverick uses a larger mixture-of-experts configuration with 128 experts (see the sketch after this list) and offers exceptional capabilities:
Best-in-Class Image Grounding: Achieves unparalleled accuracy in understanding and relating visual elements within context.
Benchmark-Surpassing Results: Llama 4 Maverick outperforms both GPT-4o and Gemini 2.0 Flash across a comprehensive suite of widely recognized benchmarks.
Competitive Reasoning and Coding: Delivers performance comparable to the much larger DeepSeek v3 on reasoning and coding tasks while using less than half the active parameters.
Unmatched Performance-to-Cost Ratio: Its chat version achieves an Elo score of 1417 on the LMArena leaderboard, underscoring its exceptional efficiency.
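Both models' "active parameter" figures reflect their mixture-of-experts (MoE) design: a router activates only a small subset of experts for each token, so the parameters actually used per token are a fraction of the total. The sketch below is a minimal, generic top-k MoE layer for illustration only; the dimensions, gating scheme, and expert layout are assumptions and do not reflect Llama 4's actual implementation.

```python
# Minimal sketch of a sparse mixture-of-experts (MoE) feed-forward layer.
# Illustrative only: dimensions, expert count, and top-k routing here are
# assumptions for demonstration, not Llama 4's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 1):
        super().__init__()
        self.top_k = top_k
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Route each token to its top-k experts.
        gate_logits = self.router(x)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    # Only the selected experts run, so the parameters "active"
                    # per token are a small fraction of the total.
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


# Toy usage: 16 experts with one active per token, echoing Scout's expert count.
layer = SparseMoE(d_model=64, d_ff=256, num_experts=16, top_k=1)
tokens = torch.randn(8, 64)
print(layer(tokens).shape)  # torch.Size([8, 64])
```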
Meta said the remarkable capabilities of Llama 4 Scout and Maverick are a direct result of distillation from Llama 4 Behemoth, Meta’s most powerful model currently in development.
Even in its ongoing training phase, Llama 4 Behemoth is already demonstrating performance that surpasses GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on demanding STEM-focused benchmarks. Meta is eager to share further updates on Llama 4 Behemoth as its development progresses.
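Meta has not published the details of its distillation recipe, but the general idea of knowledge distillation is to train the smaller student model to match the teacher's output distribution in addition to the usual hard-label objective. Below is a minimal, generic sketch; the temperature, mixing weight, and loss form are assumptions for illustration, not Meta's actual codistillation objective.

```python
# Generic knowledge-distillation loss: the student matches the teacher's
# temperature-softened output distribution plus the standard hard-label term.
# Temperature T and mixing weight alpha are illustrative assumptions.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: cross-entropy against the ground-truth tokens.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


# Toy usage with random logits over a small vocabulary.
student = torch.randn(4, 1000)
teacher = torch.randn(4, 1000)
labels = torch.randint(0, 1000, (4,))
print(distillation_loss(student, teacher, labels).item())
```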