
Meta's preparing a sneak peek for Llama 3

Meta's Llama family of models is up for its third reboot. The company has been cooking up ways to make Llama 3 models larger (up to 140B parameters), less restrictive, and better-performing. While the largest one will take a while, we might see the smaller ones next week.

What’s going on here?

Smaller versions of Meta’s Llama 3 could be released next week.

(edited from Llama 2 posters)

What does that mean?

Open-source models come in sizes based on their parameter counts. Meta's Llama models kicked off the push to open-source genuinely large LLMs, with billions of parameters (from 7B to 70B), last year. Now, even 7B-parameter models are considered small.

But with Mistral and other companies launching powerful models in this weight class, Llama 2 7B is no longer the frontrunner. Meta wants to change that by giving us a preview of smaller models from the Llama 3 family.

How small these models will be is a secret. Will they follow the Llama pattern of 7B and 13B models? Or will Meta enter the new category of 2B models started by Microsoft's Phi and Google's Gemma?

Why should I care?

Open-source models can be run locally on your devices without any internet. The benefits of this approach are speed, privacy and, in some cases, lower cost.

They're often not great for longer generation tasks. But don't get me wrong: these models have gotten so much better over the past year that they steal GPT-3.5's lunch money.

They're primarily used after fine-tuning for specific tasks, like making simple API calls or powering on-device assistants (like Siri or Alexa).
