
Daily Digest: Practical learnings from AI

PLUS: Looking inside GPT-4, China AI progress

Want to get in front of 100k AI enthusiasts? Work with us here

Hello folks, here’s what we have today:

  1. Extracting Concepts from GPT-4. Remember Golden Gate Claude from Anthropic? OpenAI has published similar research that looks inside LLMs, extracting features like "price increases" or "algebra" from GPT-4 and GPT-2 small. That the method works on such different-sized models suggests it can scale.🍿Our Summary (also below)

  2. AI in software engineering at Google. Google's been quietly supercharging its internal software development tools with AI. Think code completion, automated code reviews, build failure prediction – the works.🍿Here’s what it learned (also below)

  3. A recap of what’s going on with AI and China:

    • Kling by KWAI is throwing hands with OpenAI’s Sora. It creates 2-minute-long videos with impressive consistency.

    • Qwen2 has been released: a powerful family of models ranging from 0.5B to 72B parameters.

    • China’s Nvidia loophole - The Information reports that ByteDance bypasses US sanctions by renting GPUs from Oracle.

    • Llama3V, a project from Stanford students (not Meta), copied MiniCPM (by Tsinghua University’s AI lab) and tried to pass it off as their own.

from our sponsor

Tired of explaining the same thing over and over again to your colleagues? It’s time to delegate that work to AI.

Simply click capture on our browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover and calls to action.

The best part? Our extension is 100% free. Try it here →

  • Guidde* - Magically create video documentation with AI.

  • NotebookLM by Google goes global with Slides support and better ways to fact-check.

  • Databutton - Let AI build your SaaS product.

  • Falcon - your AI scrum master.

  • Riffo - AI renaming to organize messy files.

  • Sleepy Tales - Have AI read and write personalized bedtime stories.

  • Reagent - Graph-based framework for building AI agents with UI.

View more →


OpenAI is sharing new research to help us better understand these complex models. They've found a way to break down GPT-4's inner workings into millions of interpretable patterns, like identifying individual instruments in an orchestra.

What is going on here?

OpenAI is making progress on "feature extraction," a way to break down the inner workings of large language models like GPT-4 into understandable chunks.

What does this mean?

Understanding how AI models think is a tough nut to crack. It's like trying to figure out why your cat likes that one weird toy - you know it happens, but the why is a mystery. This is especially true for complex models like GPT-4.

Imagine translating the jumbled mess of electrical signals in your brain into understandable thoughts. OpenAI's new method, using something called "sparse autoencoders," does something similar for AI models. It identifies key patterns, or "features," in the model's activity that seem to line up with human-interpretable concepts like "price increases" or "algebraic rings."
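The sparse-autoencoder idea can be sketched in a few lines of code. This is a toy illustration of the technique, not OpenAI's actual implementation — real sparse autoencoders are trained on billions of model activations, and all names here (`SparseAutoencoder`, `l1_coeff`, the dimensions) are our own assumptions:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class SparseAutoencoder:
    """Toy sparse autoencoder: maps a model's internal activations
    (d_model) into a wider, mostly-zero feature space (d_feat), then
    reconstructs the original activations from those features."""

    def __init__(self, d_model, d_feat, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0.0, 0.1, (d_feat, d_model))
        self.b_enc = np.zeros(d_feat)
        self.W_dec = rng.normal(0.0, 0.1, (d_model, d_feat))
        self.b_dec = np.zeros(d_model)

    def encode(self, x):
        # ReLU keeps features non-negative, and the L1 penalty below
        # drives most of them to exactly zero — that sparsity is what
        # makes individual features human-interpretable.
        return relu(self.W_enc @ x + self.b_enc)

    def decode(self, h):
        return self.W_dec @ h + self.b_dec

    def loss(self, x, l1_coeff=1e-3):
        h = self.encode(x)
        x_hat = self.decode(h)
        recon = np.sum((x - x_hat) ** 2)         # reconstruction error
        sparsity = l1_coeff * np.sum(np.abs(h))  # L1 pushes features toward zero
        return recon + sparsity, h
```

Training minimizes that combined loss, so the model is forced to explain each activation vector using as few active features as possible — and those few features are the "instruments in the orchestra" the researchers then inspect.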

The real kicker is scalability. OpenAI’s method has shown promise on models at very different scales, from GPT-2 small up to GPT-4, potentially paving the way for even deeper insights into the inner workings of tomorrow's AI giants.

Why should I care?

This is a big deal for a couple of reasons. First, it means we're getting closer to understanding how AI models make decisions. This is crucial if we want to trust them with important tasks. Second, it opens up the possibility of fine-tuning these models more effectively. If we know what features are responsible for certain behaviours, we can tweak them to improve performance.


Remember 2019? AI was cool, but most devs didn't see how it would help them. Fast forward to today, and we're ALL using AI-powered tools to write code faster. Google's internal tools are no exception – they're evolving rapidly, and we're seeing some serious productivity gains.

What is going on here?

Google's been quietly supercharging their internal software development tools with AI. Think code completion, automated code reviews, build failure prediction – the works.

What does this mean?

Google started with code completion, then moved on to resolving code review comments and adapting pasted code, with even more ambitious features in the pipeline.

Collectively, there’s a 37% acceptance rate on AI code suggestions, and AI-assisted code now makes up a whopping 50% of characters written! In internal use, Google found that the most impactful tools are the ones that feel natural to use. Features that require extra effort to trigger just don't get used.
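As a rough illustration of where those two headline numbers come from (this is not Google's telemetry — the log format and names here are hypothetical), acceptance rate and the AI share of characters can be computed from suggestion logs like this:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    accepted: bool
    chars: int  # characters inserted into the file if accepted

def completion_metrics(suggestions, human_chars):
    """Return (acceptance rate, fraction of code characters written by AI)."""
    accepted = [s for s in suggestions if s.accepted]
    acceptance_rate = len(accepted) / len(suggestions)
    ai_chars = sum(s.chars for s in accepted)
    ai_share = ai_chars / (ai_chars + human_chars)
    return acceptance_rate, ai_share
```

Note the two metrics measure different things: acceptance rate is per suggestion, while the character share weights each accepted suggestion by its length, so long accepted completions move the second number much more than the first.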

Google also found that high-quality data from their engineers' activities is crucial for improving their AI models. And, not surprisingly, they've learned that optimizing the entire process, from identifying opportunities for AI assistance to implementing the suggestions, is key to maximizing impact.

Why should I care?

There’s no stopping AI in software development now. Devs at Google are already using AI to help with code reviews, adapt pasted code, and predict build failures.

This is the future of coding: AI-powered tools are becoming indispensable, and the sooner you embrace them, the sooner you can reap the benefits.

Stay ahead of the curve, and let AI handle the heavy lifting so you can focus on what you do best – creating amazing software.

Ben’s Bites Insights

We have 2 databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:

  • All 10k+ links we’ve covered, easily filterable (1 referral)

  • 6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage etc (3 referrals)

Join the conversation
