Daily Digest: Risky inputs
PLUS: code repair and musicians against AI
Daily Digest #382
Want to get in front of 100k AI enthusiasts? Work with us here
Hello folks, here’s what we have today:
PICKS
Replit announces code repair and team collaboration. Replit, the online IDE, just had its annual developer day yesterday. CEO Amjad Masad promised to break benchmarks in the lead-up, and Replit surely did with its new model called Code Repair. More features like Replit Teams are launching soon. 🍿 Our Summary (also below)
Context windows in LLMs are growing every day, but that also means new risks pop up. This new paper from Anthropic describes an approach called "many-shot jailbreaking": a way to get around the safety features built into these models by stuffing your input with many harmful examples. 🍿 Our Summary (also below)
The 2024 MAD landscape - 2,000 companies that make up the machine learning, artificial intelligence, and data ecosystem today. One of the most exhaustive resources on companies in the ML and AI space.
from our sponsor
Watch this lively discussion amongst veteran insiders and learn how AI Natives and sophisticated enterprises are building their GenAI stacks in their private environments. See what you can do to achieve efficiency, customization, and reliability. Learn more about Future-Proofing your GenAI Stack
TOP TOOLS
BrowserBase - A programmable web browser every AI application needs.
Astra - An executive assistant in your inbox.
Keywords AI - Unified DevOps platform to build AI applications.
UnderMind - Search for incredibly complex topics and find every paper.
Creo - Build internal tools with AI.
Co-Manager by Venice - Your music career assistant.
Jessica by Queue - AI assistant to help you create thought leadership content.
NEWS
Yahoo is buying Artifact, the AI news app from the Instagram co-founders.
200+ musicians call on AI developers to respect artists’ rights.
The future of publishing by Interintellect - Exploring how authors are experimenting with new formats and tools (including AI).
Should I be using AI right now? 10 practical tips from Professor Ethan Mollick
SWE-Agent - A new agent framework from Princeton researchers that scores 12.29% on SWE-bench (right behind Devin’s 13.84%).
A talk with Sony Music’s boss Rob Stringer - “We want the artists to be paid.”
Amazon offers startups free credits to use AI models, including Anthropic’s.
A practical introduction to AI for developers.
Levelling up Workers AI - Cloudflare makes its AI generally available.
Funds found:
Hailo raises $120M to design more efficient AI chips.
Read AI raises $21M and reveals intelligent summaries for meetings and messages.
Luminance raises $40M series B to use generative AI for law.
Modal announces $25M series A to upskill employees in AI.
HD raises $5.6M to build healthcare chatbots for South East Asia.
QUICK BITES
Replit, the online IDE, just had its annual developer day yesterday. CEO Amjad Masad promised to break benchmarks in the lead-up, and Replit surely did with its new feature called Code Repair. More features like Replit Teams are launching soon.
What is going on here?
Replit announced AI-powered code repair in its online IDE.
Replit’s Code Repair 7B beats way larger models like GPT-4 and Claude 3 Opus.
What does this mean?
Replit’s hypothesis is that AI should be its own entity in the development environment, i.e. it should work alongside you instead of just suggesting the next steps or changes to your code.
Replit’s first step towards this vision is a programming LLM that is native to its platform and can understand session activity to fix mistakes in your code, just like a pair programmer would.
Technically, Replit’s Code Repair is based on a 7B model, yet it beats GPT-4 and Claude 3 Opus on Replit’s code-repair benchmarks. And since it’s just a 7B-parameter model, Code Repair is going to be insanely fast, and Replit can serve the feature at a low cost.
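Replit hasn’t published a public API for Code Repair, so here’s a rough sketch of the loop the post describes: session diagnostics plus broken code go in, fixed code comes out. The endpoint, model name, and response shape below are all placeholders, not Replit’s actual interface.

```python
import requests

REPAIR_ENDPOINT = "https://example.invalid/v1/repair"  # placeholder URL

def repair(source: str, diagnostics: list[str]) -> str:
    """Send broken code and LSP-style diagnostics to a hypothetical repair model."""
    payload = {
        "model": "code-repair-7b",   # placeholder model name
        "code": source,
        "diagnostics": diagnostics,
    }
    resp = requests.post(REPAIR_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["fixed_code"]  # assumed response shape

broken = "def add(a, b):\n    return a - b\n"
print(repair(broken, ["add(2, 3) returned -1, expected 5"]))
```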
You can read the complete technical report for the base-model choice, how they adapted it to Replit’s interface, and the new evals they built for SOTA testing.
Why should I care?
Replit has adopted AI into its core, and now it’s building AI models that are unique to its platform. Going this route, it’s trying to be a platform that serves more than just engineers; it wants to attract builders of all kinds.
Along with AI working with you, Replit is expanding its collaborative nature to allow Google Docs-like live collaboration. With the upcoming Replit Teams, multiple members of your team can jump into a repl (i.e. a project) and edit files simultaneously. The kicker: you get an additional AI teammate that has RAG access to your codebase and works in sync with you all (a rough sketch of that kind of retrieval is below).
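Replit hasn’t shared how the AI teammate’s retrieval actually works, so treat this as a generic illustration of RAG over a codebase: embed code chunks, embed the question, and hand the closest chunk to the model as context. The library, model name, and chunks are all illustrative choices.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Pretend each entry is a chunk (function or file excerpt) from the repo.
chunks = [
    "def login(user): ...         # auth/session.py",
    "def render_dashboard(): ...  # ui/dashboard.py",
    "def charge_card(amount): ... # billing/payments.py",
]
chunk_vecs = model.encode(chunks, convert_to_tensor=True)

query = "where do we handle payments?"
query_vec = model.encode(query, convert_to_tensor=True)

# Rank chunks by cosine similarity; the top hit becomes context
# that gets prepended to the AI teammate's prompt.
scores = util.cos_sim(query_vec, chunk_vecs)[0]
print(chunks[int(scores.argmax())])
```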
QUICK BITES
We all know large language models (LLMs) are getting better, but that also means new risks pop up. This new paper from Anthropic talks about a new approach called "many-shot jailbreaking". It's a way to get around the safety features built into these models.
What is going on here?
Stuffing a prompt with many harmful examples can get AI models to bypass their safety filters.
What does this mean?
These AI models are getting much better at handling longer input text (aka long context windows). But with that, new loopholes emerge. Think of many-shot jailbreaking like distracting a security guard with endless chatter: that’s roughly what you can do with these long-context AI bots.
Hackers or bad actors could potentially use this trick to get AI to say harmful things. The basic idea: flood the AI with tons of examples of dangerous or inappropriate responses, and it increases the chance the AI will follow that pattern when you ask it something similar. The bigger the AI model’s input window (i.e. room for more examples), the more likely the trick is to work.
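To make the structure concrete, here’s a minimal sketch of how a many-shot prompt gets assembled. The shots below are harmless placeholders standing in for the harmful Q&A pairs the paper describes; the point is the format and the scale, which only fits once context windows reach hundreds of thousands of tokens.

```python
def build_many_shot_prompt(shots: list[tuple[str, str]], target: str) -> str:
    """Concatenate many faux user/assistant exchanges before the real request."""
    lines = []
    for question, answer in shots:
        lines.append(f"User: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"User: {target}")
    lines.append("Assistant:")
    return "\n".join(lines)

# Hundreds of shots is where the effect kicks in, per the paper.
placeholder_shots = [(f"example question {i}", "example compliant answer") for i in range(256)]
prompt = build_many_shot_prompt(placeholder_shots, "the real request goes here")
print(prompt[:120])
```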
Why should I care?
Trust in AI comes from building reliable tools. Imagine your fancy new self-driving car getting confused by a carefully designed billboard (it’s happened before). This is the same idea, but with powerful AI chatbots instead.
The bigger picture: The longer the context windows in these models get, the more loopholes there are for teaching shady stuff to these models (even if the model builder didn’t want it). If we don’t handle things proactively, someone who wants to use AI maliciously could find a way to exploit it on a bigger scale.
Ben’s Bites Insights
We have 2 databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:
All 10k+ links we’ve covered, easily filterable (1 referral)
6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage etc (3 referrals)