OwlBrief

Stay informed, stay wise!

OwlBrief distills the world’s top news into fast, AI-crafted briefs. Stay informed, save time, and get smarter — before your coffee gets cold.

#AI & ML
TechCrunch
2w ago

Why Cohere's ex-AI research lead is betting against the scaling race

Sara Hooker, former VP of AI Research at Cohere, has launched Adaption Labs to explore alternatives to scaling large language models, which may be reaching their limits. This shift is significant as it could redefine AI development and efficiency.
What happened
Sara Hooker, previously the VP of AI Research at Cohere, has launched a new startup called Adaption Labs, which aims to develop AI systems capable of continuous adaptation and learning from real-world experience. The move comes as many in the AI community voice concerns that the current trend of scaling large language models (LLMs) may be reaching its limits, with diminishing returns on performance. Hooker argues that the focus on scaling has produced inefficient methods that fail to yield truly intelligent systems, and Adaption Labs intends to challenge that notion by showing that AI can learn from its environment more efficiently.

The startup is currently raising funds and aims to broaden access to AI research globally, continuing Hooker's commitment to diversity in the field. If successful, this approach could significantly alter the landscape of AI development, shifting it away from costly model scaling toward more adaptive and efficient learning methods.

Key insights

  1. Scaling Limitations: Current AI scaling methods may be reaching performance limits.

  2. New Learning Approaches: Adaption Labs aims to develop AI that learns from real-world experiences.

  3. Cost Efficiency: Hooker believes adaptive learning could be more cost-effective than scaling.

Takeaways

Sara Hooker's founding of Adaption Labs signals a potential shift in AI research, emphasizing adaptive learning over traditional scaling methods. If the bet pays off, it could lead to more efficient and powerful AI systems and reshape the industry's approach to AI development.