Out of the playpen: Why it’s time for AI to leave the cloud

Cloud. It’s such a comfy, cozy, familiar place. It’s where we store our pictures and music, back up our data, and watch cat videos on YouTube. It’s only natural that we’d want to use it for other things, like artificial intelligence.

And why not? Data is the lifeblood of AI, and the cloud has a lot of data. It also has a lot of compute power. And it’s easy to use: just point, click and deploy. We’re so used to the convenience of the cloud that the mere thought of not using it feels like a step backward.

I mean, what are the alternatives? AI on the edge? Like, physically printing out a neural network and installing it in a toaster? What next, growing our own silicon chips in the backyard? That’s just dumb, right?

Except, well, it’s not.

Many of you already know the usual arguments against cloud AI: latency, connectivity, privacy. But we want to talk about something more foundational. Existential, if you will. And it boils down to this:

AI is out of the nursery.

See, when we were just getting started with AI, every new step, each new milestone felt like a gigantic achievement. Neural machine translation? Awesome. A simulated self-driving car? Nothing less than amazing. An AI that generates text indistinguishable from the best human writers? Holy cow.

When you are at the “wow” stage of any technology, you marvel at the result, not the process. Who cares if it takes 350 server-years and $10 million to train a neural network to do something that impressive? We just want to see what it can do!

The thing is, we’re not at the wow stage anymore. AI is no longer a toddler, and we should not be just paying for the diapers while clapping at its first attempts at walking. We’re at the point where we should be asking why it still struggles to walk.

“What does any of this have to do with the cloud?” you might ask.

And the answer is, the cloud is the playpen that keeps AI stuck in the diaper phase.

Is our AI too heavy? Let’s just throw more servers at it! Is it too slow? Let’s give it more bandwidth! Does it fail to work with real-world data? Let’s pretend the real world doesn’t exist!

But, as any parent will tell you, the more you give a toddler, the more that toddler wants. And technology is no different. I mean, look at Chrome.

Worst of all, throwing more resources at the problem can only get you so far. No amount of servers can make a self-driving car respond to an unexpected pothole in real time, or let a demining robot remove a landmine somewhere with no connectivity. And these are exactly the tasks we need AI to be able to perform.

Now, I know. “Edging” a neural network is hard. You can’t just dump some Python code onto an ASIC; the range of neural networks an edge chip can support is nothing like what we’ve grown so used to in the cloud; and, most importantly, you need not just machine learning expertise but engineering expertise. And that’s hard to come by.
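To make that concrete, here’s a minimal sketch of the easiest version of the process: quantizing a Keras model down to int8 for an edge runtime, using TensorFlow Lite as one common toolchain. The model file, input shape, and calibration loop are placeholders, not a real pipeline; the point is the constraint flagged in the comments.

```python
# A hedged sketch, not a real pipeline: the model path, input shape, and
# calibration data below are stand-ins.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # hypothetical model file

def representative_dataset():
    # Calibration samples the converter uses to pick int8 ranges.
    # In practice these come from real input data, not random noise.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# This is the edge constraint in one line: restrict the model to the op set
# the target actually implements. Use an unsupported op anywhere in the
# network and the conversion fails outright.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("my_model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

And TensorFlow Lite is the friendly path. A custom ASIC usually means a vendor compiler, a narrower op set still, and sometimes redesigning the network itself to fit, which is exactly where that engineering expertise comes in.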

But, once edged, AI will become truly disruptive. The first companies to fully leverage AI on the edge will be the front-runners of the next phase of technology: robotics, autonomous vehicles, IoT, medical devices. Not to mention use cases in the half of the world that simply has no Internet, the half many of us prefer to turn a blind eye to.

Now, I’m not breaking any new ground here. Companies like Baidu, Google, and Microsoft are already running AI workloads on-device, and startups like Graphcore, SambaNova and us here at Edged.ai are building either specialized hardware or intellectual property (IP) to make AI on the edge a reality.

But, as with so many things, the real disruption does not happen when a few companies do it. It happens when there is a shift in the industry’s mindset.

And, right now, that mindset seems to be stuck in the cloud playpen.

Isn’t it about time we let it out into the real world?
