Analog compute for Edge AI: the Holy Grail or a road to nowhere?

Using analog approaches for AI applications has long tickled the minds of computer scientists and engineers. After all, our brain is an analog computer that is orders of magnitude more efficient than any digital computer ever built, so why not try to replicate its workings in silicon?

Recently, startups such as Mythic have emerged, promising to bring analog AI compute to the edge, but the road to mass-market success is fraught with challenges which may prove insurmountable, at least in the near future.

Quick and dirty

The first thing to understand about analog AI is that it’s very different from digital AI. In digital AI, everything is represented by a series of 0s and 1s (bits), which are manipulated by logic gates according to mathematical rules. In analog AI, on the other hand, information is represented by voltages or currents, which are manipulated by transistors according to physical laws. This may not seem like a big difference, but it is.

For one thing, it means that analog AI is much less precise than digital AI. While digital computers can represent numbers to any desired degree of accuracy, analog computers are like your grandpa’s stereo amplifier: You can make it “louder” or “softer,” but you can’t make it “1 notch louder.”

For an amplifier this can even be a good thing, since it allows for smoother volume control, but for a computer that's meant to, say, decide whether your self-driving car should turn left or right at an intersection, it's not so great. My scientific wild-ass guess is that analog AI will never be more than 3–4 bits precise.

(You could theoretically make an analog chip precise, but that would require components far larger than the transistors we're used to in digital chips, and the equipment needed to house them would be a fridge-sized contraption that drains electricity like a vampire. So much for power efficiency!)
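To get a feel for what a 3–4 bit budget buys you, here's a minimal NumPy sketch of my own, with made-up layer sizes, that uniformly quantizes a random weight matrix to a few bits and measures how far the layer's output drifts from the full-precision result:

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x to the given number of bits over its own range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return np.round((x - lo) / step) * step + lo

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))   # stand-in for one layer's weights
activations = rng.normal(size=256)

exact = weights @ activations
for bits in (8, 4, 3):
    approx = quantize(weights, bits) @ activations
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"{bits}-bit weights: relative output error ~ {rel_err:.1%}")
```

The exact numbers depend on how the weights are distributed, but the trend is the point: every bit you give up costs you output accuracy.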

Noise is your enemy

A direct consequence of analog AI's low precision is that noise, which is present in every electronic device, accumulates as the signal propagates through the system. The more layers of processing you have, the more noise piles up.

Since modern machine learning algorithms rely on neural networks that are often hundreds of layers deep, analog AI is not a realistic option for them. In practice it limits you to shallow networks, which aren't useful for much beyond the most basic applications.
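Here's a toy simulation of that effect, again in NumPy and with an arbitrary, made-up noise level standing in for real circuit noise. It pushes a signal through a stack of random norm-preserving layers, injects a little Gaussian noise after each one, and reports how far the output drifts from the noise-free result:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_drift(depth, dim=64, noise_std=0.02):
    """Push a signal through `depth` random orthogonal (norm-preserving) layers,
    adding Gaussian noise after each one as a crude stand-in for analog circuit
    noise, and return how far the output drifts from the noise-free result."""
    x = rng.normal(size=dim)
    clean, noisy = x.copy(), x.copy()
    for _ in range(depth):
        w, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # random orthogonal layer
        clean = w @ clean
        noisy = w @ noisy + rng.normal(scale=noise_std, size=dim)
    return np.linalg.norm(noisy - clean) / np.linalg.norm(clean)

for depth in (2, 10, 50, 200):
    print(f"{depth:4d} layers: relative error ~ {relative_drift(depth):.1%}")
```

Because each layer's noise is independent, the error in this linear toy model grows roughly with the square root of the depth, which is exactly why hundreds of layers are a problem.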

(The issue can be mitigated through regularization — that is, by training the network to be robust against noise — but that comes at a cost: You need more data, you need longer training times, and even then you won’t be able to regularize every network architecture out there.)
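A common way to build in that robustness is to inject noise into the activations while training, so the optimizer settles on weights that tolerate it. Here's a hypothetical PyTorch-style sketch; the layer sizes and noise level are placeholders, and this is only one of several possible noise-hardening strategies:

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """A linear layer that adds Gaussian noise to its output during training,
    mimicking an imperfect analog multiply-accumulate stage."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        y = self.linear(x)
        if self.training:  # inject noise only while training
            y = y + torch.randn_like(y) * self.noise_std
        return y

# A small noise-hardened classifier; sizes and noise level are arbitrary.
model = nn.Sequential(
    NoisyLinear(32, 64), nn.ReLU(),
    NoisyLinear(64, 10),
)
```

The extra stochasticity is also why this costs you longer training runs and more data, as noted above.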

Yield is for wimps

Another challenge with analog AI is that it’s very difficult to manufacture chips that work as intended. Imagine trying to build a computer chip that can emulate the workings of your brain. Now imagine trying to do that using standard silicon fabrication techniques, which are designed for digital computers.

The way artificial neural networks work, for example, is by "connecting" a bunch of neurons together in a pattern that loosely resembles the way your brain works. To map that pattern onto analog hardware, you need very precise transistors and other components. If those components aren't manufactured accurately, the network will "diverge," i.e. it will start doing things that don't resemble the trained pattern at all.
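To put a rough number on that divergence, you can model fabrication mismatch as a random multiplicative error on every weight and watch a layer's output drift. The mismatch percentages below are illustrative assumptions, not foundry data:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(128, 128))   # stand-in for a trained layer
x = rng.normal(size=128)
reference = weights @ x

# Model fabrication mismatch as a per-device multiplicative error on each weight.
for mismatch in (0.01, 0.05, 0.20):     # 1%, 5%, 20% component variation
    perturbed = weights * (1 + rng.normal(scale=mismatch, size=weights.shape))
    err = np.linalg.norm(perturbed @ x - reference) / np.linalg.norm(reference)
    print(f"{mismatch:.0%} mismatch: output error ~ {err:.1%}")
```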

Of course, one can argue that yield is just a numbers game, and that as long as the end product is usable, who really cares how it was made? That might be true for prototype systems, but for mass-market products, every chip you have to throw away adds directly to the cost of the ones you ship.

So what’s in store?

The short answer is that we don’t know. As it stands, analog AI is in a similar position to quantum computing or optical computing: Lots of potential, but many challenges to overcome before it can become mainstream.

Right now, one could probably use analog AI for niche solutions, such as battery-powered fitness trackers and motion/occupancy sensors. When you bury a sensor in the asphalt to detect whether a car is parked in a spot, for example, you don't need a hell of a lot of precision.

But for more demanding applications, such as self-driving cars or consumer robots, analog AI is not yet up to the task. When will it be? Again, my scientific wild-ass guess is 10–15 years — on the same order of magnitude as quantum computing or optical computing.

But I would love to be wrong. Maybe one day we'll see a literal brain in a box, running on a few watts of power and beating digital computers at their own game. Now that would be something.
