Daily Digest: How LLMs think
PLUS: Copilot Agents and monster raises.
Daily Digest #417
Want to get in front of 100k AI enthusiasts? Work with us here
Hello folks, here’s what we have today:
PICKS
New tutorial: Generate website analysis reports from URLs using AI.
Mapping the mind of an LLM. You know how AI models are often seen as a black box? Well, Anthropic does this crazy thing of looking inside them to understand what makes them tick. They've extracted millions of features from Claude 3 Sonnet, like gender bias, bridges and code errors. 🍿Our Summary (also below)
Microsoft is expanding Copilot to teams and agents. Copilot can soon act as a team member in meetings and chats. It can also manage projects and track deadlines. Microsoft also gave a sneak peek into what agents that do tasks autonomously will look like in their ecosystem.
It was a funding frenzy yesterday with huge rounds getting announced. Some of the larger ones:
Scale AI has raised a $1B Series F at a $13.8B valuation.
Suno has raised $125M to build a future where anyone can make music.
French AI startup H has raised $220M as it comes out of stealth.
ChatGPT’s Connected Apps feature has started to roll out to more users. Same with Google’s “Gemini in Workspace” features. Check if you’ve got them.
from our sponsor
Build Your AI Customer Support Stack
AI is here to stay, so make it work for you. Help Scout’s ebook, Building your AI Support Stack, defines what AI can do for customer service, weighs the pros and cons of integrating AI, runs through the latest AI-powered support tools, and provides expert tips for a thoughtful implementation plan.
TOP TOOLS
Interactive Knowledge Cards are coming to Perplexity, thanks to its partnership with Tako.
Experts GPT - Create a group chat with expert personalities to discuss any topic.
Tone - The pendant that pays attention, so you never forget again.
GitHub Copilot Extensions - Connect GitHub with your preferred tools and services.
Neolocus - AI interior designer for your house.
Octoverse - Build accurate, fast & affordable AI agents in your app.
Timmy - Personalized spending suggestions to grow your wealth.
Welcome Compass - Digital welcome guides for short-term rentals.
LemonSpeak - Turn your podcast into helpful assets.
NEWS
Self-driving vehicles will be on roads by 2026 in the UK.
An overview of OpenAI’s safety practices.
IBM brings updates to WatsonX with more open-source models and assistants.
Microsoft and Hugging Face extend their partnership for open models and Azure compute.
Khan Academy makes Khanmigo free for US teachers and partners with Microsoft.
Cognition Labs is partnering with Microsoft to scale Devin with Azure.
Nvidia’s rivals take aim at its software dominance.
Google Search’s new AI overviews will soon have ads.
The a16z American Dynamism team is starting an Engineering Fellows program.
Wearable AI startup Humane explores potential sale.
QUICK BITES
You know how AI models are often seen as a black box? Well, Anthropic, the folks behind Claude, have made some pretty cool progress in understanding the inner workings of these models.
What's going on here?
They've basically created a conceptual map of Claude's "brain," identifying how it represents millions of different concepts, from the Golden Gate Bridge to gender bias.
What does this mean?
It's like they've peeked under the hood of a car and figured out how the engine works. This isn't just about knowing how Claude identifies "San Francisco" or "immunology," it's about understanding how it connects more abstract ideas like "bugs in code" or "keeping secrets."
It's wild. Anthropic found features for everything from "Golden Gate Bridge" to "gender bias" to "keeping secrets". They can even manipulate these features to see how they change Claude's behaviour. By amplifying the Golden Gate Bridge feature, for example, they can make the model believe its physical form is the bridge.
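To get a feel for that "amplifying a feature" trick: Anthropic's real setup learns feature directions with sparse autoencoders trained on Claude's internals, but the core move is just adding a scaled feature vector to the model's activations. Here's a minimal toy sketch in NumPy; the activation, the "Golden Gate Bridge" direction, and the `steer` helper are all made up for illustration, not Anthropic's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # toy activation size; real models use thousands of dimensions

# Pretend this unit vector is the "Golden Gate Bridge" feature.
# In practice it would be learned by a sparse autoencoder, not sampled.
feature = rng.normal(size=d_model)
feature /= np.linalg.norm(feature)

def steer(activation: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Amplify a feature by adding its scaled direction to an activation."""
    return activation + strength * direction

activation = rng.normal(size=d_model)

# The activation's projection onto the feature grows with the steering
# strength -- loosely, the model now "expresses" that concept more.
before = activation @ feature
after = steer(activation, feature, strength=5.0) @ feature
```

Because the feature is a unit vector, the projection rises by exactly the steering strength; crank it high enough in a real model and you get Claude insisting it is the Golden Gate Bridge.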
Why should you care?
For starters, this is a HUGE step in AI safety. By understanding how AI models think, we can potentially make them less biased, less likely to be tricked into harmful behaviour, and more aligned with human values.
It's not just about safety though. This discovery also sheds light on how AI models understand and use language, which could lead to even more powerful and sophisticated AI systems in the future.
Who knows what we'll be able to do once we fully understand how these models tick?
Ben’s Bites Insights
We have 2 databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:
All 10k+ links we’ve covered, easily filterable (1 referral)
6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage etc (3 referrals)