- Ben's Bites
The near future of AI is becoming clearer
Ethan Mollick discusses how Large Language Models (LLMs) like ChatGPT and the much-anticipated Google Gemini are evolving beyond text-based interaction. These models are gaining vision and voice capabilities, not just better "brains," making them more versatile and impactful.
What's Going on Here?
We're at a pivotal point in AI's evolution where these models are not only becoming increasingly intelligent but also increasingly capable in tasks involving vision and voice.
What Does This Mean?
We’re going beyond chatbots. These multi-modal AIs have the potential to not just assist but also collaborate in a way that amplifies the human working with them. For example, they can improve remote work by better interpreting emotional cues during video calls or fine-tune online educational platforms according to individual learning paces.
The changes these AIs could bring aren't restricted to one area. They have applications across sectors like healthcare, education, and transportation. The phrase "jack of all trades, master of none" might soon be obsolete. The future LLMs could be both—jacks of all trades and masters at them.
Why Should I Care?
AI's new multi-modal capabilities mean these models can fit into niches we haven't even considered yet. They can now do many tasks that require data analysis, creativity, or problem-solving, and do them well. If your job involves this kind of work, you'll need to evolve alongside AI. Jobs aside, the ethical implications are also huge: while Google and OpenAI have placed some ethical guardrails, loopholes exist.