What’s the problem with Big AI, and why are Small Models the answer?
Right now, most of us are using the chat interfaces of big tech companies. Worse, many of us are building applications around these large models!
🔴 This is a problem because Large Language Models (LLMs) are slow, expensive and unsustainable. The more we rely on them, the more we fuel systems that:
⚠ Worsen climate change
⚠ Centralise power in the hands of a few (unethical) corporations
There’s more, but I recommend reading “Empire of AI” by Karen Hao, and following her and Chiara Gallese, Ph.D., if you want to learn more.
There is an alternative to large models: “small language models” (SLMs). Small models are faster, cheaper, and far more sustainable. They can also be tweaked and run on local hardware.
You don’t sacrifice anything with the move. The only thing you need to do is work a little differently. Once you do that, the benefits are more than just ethical. You’ll get:
✅ Higher accuracy
✅ Greater transparency of the output
✅ Greater control over the output
✅ Cheaper and faster to run
👉 So, what is this alternative way of working? It’s simple:
1⃣ Think like an “architect”
2⃣ Break your interactions into smaller, easier pieces
3⃣ Build workflows with these pieces
4️⃣ Make transparency and “tools” first-class citizens
This involves picking up a few tricks for understanding how AI “thinks”. It may also involve learning a tool like n8n to automate certain steps.
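To make the “architect” idea concrete, here is a minimal sketch in Python. The step functions are hypothetical stand-ins for calls to a local small model (not a real SLM API); the point is the shape of the workflow: small, single-purpose steps, chained together, with every intermediate result visible.

```python
# The "architect" approach: instead of one big prompt, chain small,
# single-purpose steps into a workflow. Each step below is a hypothetical
# stand-in for a call to a local small language model.

def extract_keywords(text: str) -> list[str]:
    """Step 1: one small, focused task (stand-in for an SLM call)."""
    return [w.strip(".,!") for w in text.split() if len(w) > 6]

def summarise(keywords: list[str]) -> str:
    """Step 2: another small task, consuming the previous step's output."""
    return "Topics: " + ", ".join(keywords)

def run_workflow(text: str, steps) -> str:
    """Run each step in order, printing intermediate results for transparency."""
    result = text
    for step in steps:
        result = step(result)
        print(f"{step.__name__} -> {result!r}")  # inspect every step
    return result

output = run_workflow(
    "Small language models make sustainable automation practical.",
    [extract_keywords, summarise],
)
```

Because each step is inspectable and swappable, you get the transparency and control described above for free; a tool like n8n applies the same pattern visually, with each node as one small step.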
For those who want some “in-person” guidance, I’ll be doing a workshop with Slobodan Manić for Experimentation Elite’s “Space Academy” event on 6th May in London.
Seats for this workshop are limited; this helps us better facilitate learning.
I also have a tutorial series with Convert.com that covers much of this.