- The most recent version of Meta’s Llama LLM portfolio, the Llama 3 generative AI model, has been released.
- They come in two sizes: 8 billion parameters and 70 billion parameters.
- The Llama 3 collection’s first text-based models were made available today.
The most recent version of Meta’s Llama large language model (LLM) portfolio, the Llama 3 generative AI model, has been released. Llama 3 models are said to be on par with or superior to any existing open generative AI model of similar proportions.
The models come in two sizes: 8 billion parameters and 70 billion parameters. Improved pretraining and fine-tuning techniques have reduced false-refusal rates and enhanced the diversity and alignment of model responses, yielding a notable improvement over Llama 2.
As a result, the models are better at reasoning and code generation. To measure their performance, Meta used standard benchmarks along with a new human evaluation set containing 1,800 prompts spread across 12 use cases.
As it continues to promote the responsible use and deployment of LLMs, Meta aims to build open models that rival the best proprietary models on the market today. The first text-based models in the Llama 3 collection were made available today.
Plans call for Llama 3 to become multilingual and multimodal, support longer context windows, and continue improving performance across core LLM capabilities such as reasoning and coding.
Llama 3 models offer better “steerability,” fewer refusals to answer questions, and higher accuracy on trivia, history, STEM, and coding tasks. They were trained on a dataset of more than 15 trillion tokens spanning over 30 languages, seven times larger than Llama 2’s.