Interviewing the CEO of Scale
We recently sat down with AI infrastructure wunderkind Alexandr Wang, CEO and founder of Scale AI, to discuss the current state of the industry and what is on the horizon.
Happy New Year! (What date do we all agree we no longer have to say that via email? I vote this week is fine; next week, drop it.)
Today's email is a bit of a test. I interviewed Alexandr Wang, founder and CEO of Scale, last year (ha). I wanted to experiment with some longer content (don't panic, the usual curated links are below). Let me know what you think and whether I should do more like this! Maybe twice a month or something.
Let's get to it.
🤌 Ben's Picks
Scaling AI with Alexandr Wang
Breakthroughs in AI this year brought us eye-popping images, advances in science, control of robots, and the firing of a Google employee who claimed the company's chatbot generator possessed a soul. We recently sat down with AI infrastructure wunderkind Alexandr Wang, CEO and founder of Scale AI, to discuss the current state of the industry and what is on the horizon. The ambitious twenty-five-year-old dropped out of MIT in 2016 to build a platform that provides full-stack AI to autonomous vehicle companies, the Fortune 500, and the U.S. government. Putting in the early grunt work of building data foundations for the AI industry helped make Alex the youngest self-made billionaire in the world, and it has given him abundant insight into AI's market structure, its use in national defense, and the democratization of content creation.
Positioning current breakthroughs on a historical trend in technology, Alex sees the advancement of the same three components that facilitated older AI breakthroughs like AlexNet and AlphaGo. Computing, data, and algorithms continue to be the main thrusters of AI innovation: Moore's law has held true, graphics processing units (GPUs) continue to improve, and the internet provides ample data to feed algorithms that undergo continual refinement. This year, that progress has led to breathtaking advances in Large Language Models (LLMs) and Large Diffusion Models (LDMs), which can now understand and generate image, text, audio, and video data. From Alex's vantage point, we have reached the stage where AI has truly become a platform technology able to support massive innovation.
The introduction of core, stable Generative AI platforms this year will affect market structure in both the short and long term. Alex sees startups currently holding the edge in innovation over large companies because they are less anxious about regulation and protecting their reputations. Big tech companies like Microsoft, Google, and Meta are not underestimating the power of AI, though, and have moved rapidly to integrate it into their products. In the long term, Alex worries that large enterprises might replicate the products successful startups introduce to the market, which might not leave the smaller companies enough time to get up to speed and really compete. The bottom line: startups in the AI space have a much more limited window to flourish than in other industries.
In the last six years, Scale has gone from a startup to over six hundred employees and a $7.3 billion valuation. The company, which was hatched at the famous Y Combinator, still strives to maintain a startup culture that executes ideas at warp speed. Scale currently offers sixteen different products that annotate, manage and evaluate, automate, and generate different types of data. Its roster of clients includes Toyota, SAP, Microsoft, Pinterest, and the U.S. Army.
The launch of two new products, Scale Spellbook and Scale Forge, which generate apps and product imagery in minutes without requiring any coding by the user, makes it easy for small businesses to harness the power of AI. The democratization of Generative AI is a huge part of Scale's mission, and user accessibility is what Alex predicts will drive AI's market value in the long term. Since AI models are being developed in open source, front-end user value and strong customer relationships are the market trend Alex sees developing over the next decade. “One area that we’re really excited as a company to work with people on, is how do we help you figure out how to best use your data that is specific to your problem set and fine tune and customize these models to help serve your use cases?” Use cases that Scale is gearing up to tackle include easing the burden of rote tasks for workers, as well as applying AI to pressing global issues like climate change, energy efficiency, pharmaceuticals, and healthcare.
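For a concrete sense of what "fine-tuning and customizing a model on your own data" looks like in practice, here is a minimal sketch using the OpenAI Python SDK as one illustrative workflow, not Scale's product API; the file name, model name, and data format are assumptions for the example.

```python
# Minimal sketch of fine-tuning a hosted model on domain-specific data.
# Illustrative only (OpenAI SDK, not Scale's API); names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of your own examples, one record per line,
#    e.g. {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}
training_file = client.files.create(
    file=open("my_domain_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Check the job; once finished, the returned fine_tuned_model name
#    can be used in ordinary chat-completion calls.
status = client.fine_tuning.jobs.retrieve(job.id)
print(status.status, status.fine_tuned_model)
```

The general point stands regardless of vendor: the value comes from curating data specific to your problem set, then layering a customized model on top of an open or hosted base model.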
Removing barriers to Generative AI could also radically democratize the creative landscape. Alex foresees major disruption of film and television gatekeepers when all that will be required is a computer, the internet, and an afternoon to take an idea and create an animated storyboard. Instead of succumbing to the anxiety of AI sidelining creators, he suggests the technology may actually usher in a golden age of diversity in entertainment. Innovative content won’t carry the same financial risk when stories are already so fully realized before going into production.
Gatekeeping around national security in the tech industry is also something Alex has pushed back against at Scale. He sees naivete among peers who think that barring their inventions from national security applications will stop the country's adversaries from replicating the technology and using it against the U.S. and its allies. Russia, for instance, is currently using LLMs to wage a campaign of disinformation in Ukraine.
Witnessing Ukraine's destruction through Scale's deployment of algorithms that detect damage to civilian buildings has made it clear to Alex that there needs to be a paradigm shift in how the U.S. prioritizes its national defenses. “The United States needs to have the best AI technology to ensure peace.” Drones, autonomous systems, intelligence gathering, and cybersecurity have been at the forefront of the conflict in Ukraine, and all of them would benefit massively from higher-quality AI. Alex sees the U.S. as currently too focused on investing in large hardware systems like fighter jets and aircraft carriers. Getting the tech industry and the government on the same page has been difficult, though. The government has not invested heavily in meshing its hardware with tools from the tech industry, and employees at large tech companies have threatened to quit in response to proposed government partnerships. More geopolitical education in the industry, Alex thinks, could help inventors understand the danger of falling behind other countries in a cyber arms race.
Alex predicts that conflicts in the next fifty years will be fought with technology the United States has not yet built. Compounding the problem, China has begun to rapidly advance in applying AI and machine learning to its defenses. Unimpeded by democratic values, the country's authoritarian regime has accelerated the pace of AI development; the improvement of facial recognition software at the expense of China's Uyghur Muslim minority, which has faced extreme prejudice and repression, is a telling example. In both the short and long term, though, Alex doesn't believe innovation will thrive under an iron fist, which could limit China's prowess.
Within the United States, AI ethics have largely been determined by the creators of the technology. Companies like OpenAI do not publish their work and have used their terms of service to decide who may use their technology and how. National security projects, for example, are currently banned from OpenAI's platform. Other platforms have taken a laissez-faire approach, which has sometimes resulted in tainted data polluting algorithms with bias. Using the institutions of our democracy to reach a middle ground, through policymaking and regulation, is Alex's answer to codifying the ethics of AI.
One of the keys to understanding the frontier of AI can be found in Alex's Substack. In his post “Betting on Unknown Unknowns,” which reflects on the nature of predicting the future, he writes, “There’s a peculiar thing where oftentimes it’s the wildly optimistic predictions that end up being right. Such predictions seem entirely crazy at the time—it feels like betting on everything going absolutely perfectly. On the contrary, they are betting on “unknown unknowns” which will meaningfully change the game.” The push towards Artificial General Intelligence (AGI), the ability of a machine to accomplish any intellectual task that a human can, is a case in point, Alex argues. Many were skeptical when OpenAI proclaimed back in 2015 that AGI would be achievable. The “unknown unknown” breakthrough of the Transformer architecture at Google in 2017, though, brought OpenAI's powerful GPT-3 to life in 2020, which the industry has recognized as an evolutionary step towards AGI. It's these kinds of breakthroughs that may bring the next leap in the industry, even if we can't immediately spot the landing.
This wild optimism did not stop Alex from telling us bluntly that Generative AI is currently overhyped. He thinks demoware and Twitter shares are skewing people's perception of where the technology actually is: “It is very exciting technology, and it’s going to make its way through, but…it is going to take some time from here to actually result in real use cases that get a lot of adoption.”
What does the frontier look like at Scale? The data-driven company is betting on itself as an “unknown unknown”: the AWS of AI.
We are predicting the unpredictable.
🛠️ Cool Tools
Almostmagic - A package that allows you to add AI to your app with one line of code. (link)
Latent Web Browser - A browser for web content generated by GPT-3. (link)
Clerkie - An AI in your terminal that helps you understand, brainstorm, and debug long stack traces/tracebacks/error messages. (link)
Ghostwryter - AI writer, content ideas generator and writing assistant made for Google Docs. (link)
GTM Strategies - Describe your startup and get AI generated go-to-market strategy ideas. (link)
Leonardo - Create stunning game assets with AI. (link)
CalorieCoach - A texting app to help you get in better shape by tracking what you eat throughout the day. (link)
Trudo - Intuitive UI to fine-tune, test, and deploy OpenAI models. (link)
Petals - Run 100B+ language models at home, BitTorrent-style. (link)
🎓 Learn
🔬 Research
Large-scale vehicle classification. (link)
Learning one abstract bit at a time through self-invented experiments encoded as neural networks. (link)
POMRL: No-regret learning-to-plan with increasing horizons. (link)
NIRVANA: Neural implicit representations of videos with adaptive networks and autoregressive patch-wise modeling. (link)
MAUVE scores for generative models: theory and practice. (link)
GPT takes the Bar exam. (link)
ResGrad: Residual denoising diffusion probabilistic models for text to speech. (link)
An E-book with AI research experiences from Harvard course, CS197. (link)
Dream3D: Zero-shot text-to-3D synthesis using 3D shape prior and text-to-image diffusion models. (link)
New modes of learning enabled by AI chatbots: Three methods and assignments. (link)
👋 Too many links?! I created a database for all links mentioned in these emails. Refer 1 friend using this link and I'll send over the link database.
🤓 Everything else
Keep up to speed in AI with all relevant papers, podcasts, blogs, etc. with the AI reading list by Jack Soslow. (link)
Is ChatGPT really a “Code Red” for Google search? (link)
A thread of highlights of RewindAI. (link)
The WSJ columnists look ahead to an Apple headset, Netflix password crackdowns, the next DALL-E and more. (link)
A thread on interesting features of ChatGPT guessed based on minified JS code. (link)
AI Rewind 2022 - What can AI really do? A year in review. (link)
A thread highlighting community contributions and projects with Stable Diffusion in 2022. (link)
Four big ideas for enterprise tech to tackle in 2023 by a16z. (link)
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023. (link)
AI Artists are CTOs (timing your AI art projects). (link)
Graph ML in 2023: The state of affairs. (link)
🖼 AI images of the day
🤗 Share Ben's Bites
Send this to 1 AI-curious friend and receive my AI project tracker database!
or copy/paste this link: https://bensbites.beehiiv.com/subscribe?ref=PLACEHOLDER
👋 See ya
⭐️ How did we do?
How was today's email?