Daily Digest: Abandoned models

PLUS: A shift in culture and public-driven AI

Daily Digest #265

Hello folks, here’s what we have today:

PICKS
  1. Arrakis is deserted. OpenAI recently scrapped a new AI model called Arrakis after it failed to meet expectations. Arrakis was meant to allow OpenAI to run its AI systems more efficiently and cheaply. 🍿Our Summary (also below)

  2. What if normal people define the rules for AI? Anthropic and the Collective Intelligence Project ran a public input process to create an AI constitution. They discovered areas of agreement and disagreement with their in-house constitution. 🍿Our Summary (also below)

  3. A Bard song for Google's slow culture - Google has adopted a "wartime" mentality to expedite Bard, its ChatGPT competitor. Teams that work on Bard are moving much faster than other Google teams. 🍿Our Summary (also below)

TOP TOOLS
  • Morph Prover v0 7B by Morph Labs - The first open-source model trained as a conversational assistant for Lean users.

  • Calendly to launch AI-powered scheduling and automation features soon.

  • Memora - A vector DB with multistage reranking.

  • Riffusion - Create music based on your images.

  • Marauder - Recall your memories by tracking movement history.

  • LastMile AI - AI developer platform for engineering teams.

WHO’S HIRING IN AI
NEWS
QUICK BITES

OpenAI recently scrapped a new AI model called Arrakis after it failed to meet expectations. Arrakis was meant to allow OpenAI to run its AI systems more efficiently and cheaply.

What is going on here?

OpenAI halted work on a model codenamed Arrakis this spring after realizing the model underperformed and did not deliver the expected cost savings.

What does this mean?

While not immediately impacting OpenAI's business, it may slow future progress as engineers shift focus. Microsoft, an OpenAI partner, was also hoping to use Arrakis to lower costs, so this delays their integration plans. OpenAI researchers have since pivoted to making GPT-4 faster, aiming for a model termed GPT-4 Turbo (the name Arrakis might have shipped under, had it continued).

Arrakis was meant to use a technique called sparsity. Google’s Jeff Dean has also referred to sparsity as an important trend looking ahead.
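The details of how Arrakis applied sparsity aren't public, but the general idea is to zero out most of a model's weights so that inference only pays for the nonzero ones. A minimal illustrative sketch of one common flavour, magnitude pruning (all names and numbers here are hypothetical, not from OpenAI):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries, keeping only the top (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude; anything below it is dropped
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))      # a stand-in weight matrix
W_sparse = prune_by_magnitude(W, sparsity=0.9)

# Roughly 90% of the entries are now exactly zero. Sparse kernels can skip
# them entirely, which is where the hoped-for inference savings come from.
density = np.count_nonzero(W_sparse) / W_sparse.size
print(f"nonzero fraction: {density:.2f}")
```

The hard part in practice, and reportedly where Arrakis fell short, is keeping model quality intact while actually realizing those savings on real hardware.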

Why should I care?

The Arrakis news highlights the complexity of pushing AI forward. There are no guarantees, even for the most capable companies. The delay caused by the Arrakis project could mean that other companies like Google and Anthropic might catch up to OpenAI soon.

QUICK BITES

Google is urgently working on its AI chatbot Bard to compete with ChatGPT. This is shaking up Google's historically slow culture.

What is going on here?

Google has adopted a "wartime" mentality to expedite Bard, its ChatGPT competitor. Teams that work on Bard are moving much faster than other Google teams.

What does this mean?

The Bard team shortcuts legal reviews and executes projects in days or weeks rather than months. This startles some employees but energizes others seeking to make their mark on cutting-edge AI. Bard's leader Sissie Hsiao conveys urgency; when asked about work-life balance, she has suggested that those concerned should transfer off her team.

Hsiao runs Bard like a lean startup, using "pods" - small groups working in parallel on different features. She and other execs closely monitor the pods in regular reviews. Hsiao is a heavyweight at Google, known for monetizing mobile apps, so she likely aims to eventually turn Bard into a moneymaker.

Why should I care?

Google is laser-focused on not falling behind in AI. This intense push suggests chatbots like Bard are viewed as existentially important to Google's future. The company is uncharacteristically willing to ruffle feathers to build cutting-edge AI quickly.

Google is marshalling immense resources behind Bard. With its best talent on the case, Bard could evolve from joke to juggernaut practically overnight. And let’s not forget it’s still free.

QUICK BITES

Anthropic and the Collective Intelligence Project ran a public input process to create an AI constitution. They discovered areas of agreement and disagreement with their in-house constitution.

What is going on here?

The public input resulted in a moderately different constitution for AI than Anthropic's internal one.

What does this mean?

There was about 50% overlap between the public and Anthropic constitutions. The public constitution focused more on promoting desired behaviour rather than avoiding undesired behaviour. Some public statements were excluded due to a lack of consensus or being problematic.

Training a model on the publicly sourced constitution, rather than Anthropic's own, reduced certain biases and produced similar political opinions, with no loss in performance.

Why should I care?

This work on sourcing the rules from the public is a great step in building trust around AI models and what they generate. Giving more people a sense of ownership and control over the models can also increase adoption. At the same time, the process still involves many judgment calls from Anthropic: participant selection, platform, seed statements, moderation, and more.

Ben’s Bites Insights

We have two databases, updated daily, which you can access by sharing Ben’s Bites using the link below:

  • All 10k+ links we’ve covered, easily filterable (1 referral)

  • 6k+ AI company funding rounds from Jan 2022, including investors, amounts, stage etc (3 referrals)
