Daily Digest: Llama 3 sneak peek
PLUS: Apple's AI ferrets
Daily Digest #386
Want to get in front of 100k AI enthusiasts? Work with us here
Hello folks, here’s what we have today:
PICKS
Meta is likely to launch two smaller Llama 3 models next week. Meta's Llama family of models is up for its third reboot. While the largest one (up to 140B parameters) will take a while, we might see the smaller ones next week. 🍿 Our Summary (also below)
Google adds Gemini Pro to Android Studio. You can ask questions about development or use code completions. There’s also a handy template to get you started.
Ferret UI by Apple - Making multimodal models understand user interfaces. Apple’s research team has been on a roll recently, actively releasing papers on letting LLMs see and understand what’s on your screen. Please, give us a Siri that works this WWDC.
from our sponsor
OctoAI just launched OctoStack, the GenAI serving stack you can run securely in your environment. Join their technical deep dive on April 17th and learn how to run super-efficient inference to maximize your GPUs, customize any model you choose, and achieve reliability at scale. Register for the live virtual event here.
TOP TOOLS
Morph - Open-source answer engine with generative UI.
UI Bakery - Generate web apps over your data with AI.
Ideatum - Create colour schemes and font pairings for your design ideas.
Package Design - Turn ideas into designs for your product’s packaging.
Private LLM - Local models made fast for Apple silicon.
Instashorts - TikTok stories, but with your voice.
llm.c - LLM training in simple, raw C/CUDA.
Melodisco - Discover and play AI music.
NEWS
OpenAI made Sam Altman famous; his investments made him a billionaire.
The email assistant I’ve been waiting for—with Andrew Lee of Shortwave.
12 scaling laws for LLM knowledge capacity by Meta.
Microsoft is confident Arm-based Windows devices could finally beat Apple MacBooks in AI performance.
Building an AI coach to help tame my monkey mind.
South Korea will invest $7B in AI by 2027 and an additional $1B into AI chip companies.
QUICK BITES
Meta's Llama family of models is up for its third reboot. Meta has been cooking up ways to make Llama 3 models larger (up to 140B parameters), less restrictive, and better performing. While the largest one will take a while, we might see the smaller ones next week.
What is going on here?
Smaller versions of Meta’s Llama 3 could be released next week.
[Image: edited from the Llama 2 posters]
What does this mean?
Open-source models come in sizes based on their parameter counts. Meta’s Llama models kicked off the push to open-source genuinely large LLMs, with billions of parameters (7B to 70B), last year. Now even 7B-parameter models are considered small.
But with Mistral and other companies launching powerful models in this weight class, Llama 2 7B is no longer the frontrunner. Meta wants to change that by giving us a preview of smaller models from the Llama 3 family.
How small these models will be is still a secret. Will they follow the Llama pattern of 7B and 13B models? Or will Meta enter the new category of ~2B models started by Microsoft’s Phi and Google’s Gemma?
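For a rough sense of why that size bracket matters, here’s a quick back-of-the-envelope sketch (our own illustration, not Meta’s numbers): the memory needed just to hold the weights scales roughly linearly with parameter count, which is what decides whether a model fits on a laptop or phone.

```python
# Rough illustration only: approximate memory to hold the weights,
# assuming ~2 bytes per parameter (fp16/bf16) and ~0.5 bytes (4-bit quantisation).
# Real usage is higher once you add the KV cache and runtime overhead.
for billions in (2, 7, 13, 70, 140):
    fp16_gb = billions * 2      # ~2 GB per billion params at 16-bit
    int4_gb = billions * 0.5    # ~0.5 GB per billion params at 4-bit
    print(f"{billions:>4}B params ~ {fp16_gb:>6.1f} GB (fp16) / {int4_gb:>5.1f} GB (4-bit)")
```

By this rough maths, a 2B or 7B model can squeeze onto consumer hardware (especially quantised), while a 140B model firmly stays in the data centre.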
Why should I care?
Open-source models can run locally on your devices without an internet connection. The benefits are speed, privacy, and (in some cases) lower cost.
They are often not great at longer generation tasks. Don’t get me wrong: with how much better these models have gotten over the past year, they steal GPT-3.5’s lunch money.
But they are mostly used after fine-tuning for specific tasks, like making simple API calls or powering device assistants (think Siri or Alexa).
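As a concrete (hypothetical) illustration of the “runs locally” point, here’s a minimal sketch using the Hugging Face transformers library. The model ID is a placeholder for whichever small open-weights checkpoint you actually have access to; gated models like Llama 2 require accepting Meta’s license and a Hugging Face token.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# The model ID below is a placeholder -- swap in any small open-weights
# chat model you have downloaded locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder: any small local checkpoint works
    device_map="auto",                      # uses a GPU if available, otherwise CPU
)

prompt = "In one sentence, why might someone run a small LLM on their own device?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```

Once the weights are downloaded, nothing in this loop touches the internet, which is the whole appeal for on-device assistants.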
Ben’s Bites Insights
We have two databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:
All 10k+ links we’ve covered, easily filterable (1 referral)
6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage, etc. (3 referrals)