Fog computing vs edge computing: what’s the difference, and which one to choose?

Fog computing and edge computing are two emerging computing models designed to overcome the bandwidth challenge associated with IoT and cloud architectures. In this article, we'll compare the two to help you decide which one fits your use case.

The Internet of Things (IoT) is disrupting the way we live and work. In 2020, the number of connected devices worldwide was estimated at 20 billion, and that number is expected to exceed 30 billion by 2025. As the number of connected devices grows, so does the demand for bandwidth. The IoT's reliance on cloud data centers for compute and storage is creating a bottleneck, forcing IoT developers to look for alternatives to cloud-centric architectures.

Edge computing and fog computing are two models designed to relieve this bottleneck. In a nutshell, fog computing moves processing from the cloud down to intermediate network nodes, such as routers and gateways, that sit between the devices and the data center. Edge computing goes a step further and stores and processes data locally, on the device itself. Let's consider both in more detail.

Fog computing

The term "fog computing," coined by Cisco in the early 2010s, refers to a paradigm in which some resources and transactions are moved from the cloud to localized network access points, such as routers, thus reducing the bandwidth required to send data to the cloud.
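
To make the idea concrete, here's a minimal Python sketch (our own illustration; the FogNode class and send_to_cloud placeholder are hypothetical, not a real API) of a fog node that aggregates raw sensor readings locally and forwards only a compact summary upstream:

    # Hypothetical fog-node sketch: aggregate raw readings locally,
    # forward only a compact summary to the cloud.
    from statistics import mean

    def send_to_cloud(payload):
        print("uplink:", payload)  # stand-in for the real HTTPS/MQTT uplink

    class FogNode:
        def __init__(self, batch_size=100):
            self.batch_size = batch_size
            self.buffer = []

        def ingest(self, reading):
            # Called for every raw sensor reading (high volume, local link).
            self.buffer.append(reading)
            if len(self.buffer) >= self.batch_size:
                self.flush()

        def flush(self):
            # One small summary replaces batch_size raw readings on the WAN.
            summary = {
                "count": len(self.buffer),
                "mean": mean(self.buffer),
                "max": max(self.buffer),
            }
            send_to_cloud(summary)
            self.buffer.clear()

    node = FogNode(batch_size=3)
    for reading in [21.0, 22.5, 23.1]:
        node.ingest(reading)  # triggers a single uplink instead of three

Here, three raw readings become one small summary object; scaled up to thousands of sensors, this is where the bandwidth saving comes from.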

In addition to saving bandwidth and lowering latency, such decentralization reinforces security, as it reduces the amount of sensitive data transmitted across wide-area networks. It also makes systems less vulnerable to cascade failures, blackouts, and other systemic disruptions. This is especially relevant in industrial IoT applications, where downtime can lead to millions of dollars in losses.

Finally, it frees up bandwidth on the cloud connection for other tasks, enhancing the overall experience for the end user.

Fog computing faces its fair share of challenges as well. For one, it requires a great deal of coordination between network operators and IoT developers, which adds complexity to the development process and lengthens time to market.

Furthermore, fog computing requires IoT developers to redesign their devices and applications. This, in turn, increases the cost of development and maintenance.

Finally, fog computing still demands considerable bandwidth on the links between devices and fog nodes. The rollout of 5G networks, which increases available bandwidth considerably, will partly mitigate this, but widespread 5G deployment is still some way off.

Edge computing

Edge computing refers to a paradigm in which data processing and storage occur locally, at the data source (the "edge" of the network). Unlike fog or cloud computing, edge computing does not rely on remote servers or network nodes, so it can continue to function even when connectivity is interrupted.
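
As a rough sketch (again our own illustration, with hypothetical names), an edge device makes its decisions entirely on-device and treats the network as optional, buffering telemetry until a connection happens to be available:

    # Hypothetical edge-device sketch: decide locally, report opportunistically.
    from collections import deque

    def upload(value):
        print("telemetry:", value)  # stand-in for the real uplink

    class EdgeDevice:
        def __init__(self):
            self.pending = deque(maxlen=1000)  # bounded buffer for offline periods

        def on_reading(self, temperature):
            # The decision needs no round trip to any server.
            if temperature > 90.0:  # illustrative threshold
                self.trigger_cooling()
            self.pending.append(temperature)

        def trigger_cooling(self):
            print("cooling on")  # local actuation, works with zero connectivity

        def sync(self, connected):
            # Telemetry is best-effort: flushed only when a link exists.
            if connected:
                while self.pending:
                    upload(self.pending.popleft())

The control loop (on_reading to trigger_cooling) never leaves the device, so a dropped uplink delays telemetry but not the time-critical action.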

Edge computing is especially relevant for latency-critical applications, such as autonomous vehicles. In such applications, even a few milliseconds of latency can have catastrophic consequences, which is why edge computing is gaining traction in the automotive industry. It is also the only viable computing model in remote and rural areas, or during natural disasters, when connectivity cannot be guaranteed.

The biggest challenge with edge computing is its limited computing capability. Naturally, the processing power of an edge device is considerably lower than that of a centralized server, so IoT developers must design their edge devices and applications with a heavy emphasis on efficiency and optimization.
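
One common way to meet that constraint, sketched below in Python with illustrative names, is to favor constant-memory streaming algorithms over buffering raw data; this example uses Welford's online algorithm (a standard technique, not something specific to any one platform) to flag anomalous readings with O(1) memory per sensor stream:

    # Constant-memory anomaly check, suitable for a constrained edge device.
    class StreamingStats:
        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # running sum of squared deviations (Welford)

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        def is_anomaly(self, x, k=3.0):
            # Flag values more than k standard deviations from the running mean.
            if self.n < 2:
                return False
            std = (self.m2 / (self.n - 1)) ** 0.5
            return abs(x - self.mean) > k * std

    stats = StreamingStats()
    for x in [20.1, 20.3, 19.9, 20.2, 35.0]:
        if stats.is_anomaly(x):
            print("anomaly:", x)
        stats.update(x)

Memory use stays constant no matter how long the stream runs, which is exactly the kind of trade-off edge firmware has to make.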

A natural consequence is that edge devices need to focus on a single, well-defined task. For example, it might be hard for a single "smart chip" in an autonomous agricultural vehicle to both classify crops and detect water; that would call for separate edge computing devices, which affects power consumption and size requirements.

Edge might also not be the perfect choice where inter-device coordination is a major factor for the task at hand. For example, in smart city applications, edge computing might make sense for applications such as smart parking, but less so for applications such as traffic management.

Which is better, fog or edge computing?

Obviously, there is no one-size-fits-all solution for IoT applications. The optimal architecture for a given IoT application depends on a number of factors, including latency requirements, bandwidth constraints, and the application’s functional requirements.

Edge computing is best suited for applications that require real-time data processing and low latency, such as autonomous vehicles. It is also the right choice where connectivity cannot be guaranteed, such as deployments in remote and rural areas.

Fog computing, in turn, is best suited for applications that can tolerate some latency in exchange for greater computing power, such as industrial IoT. It is also ideal where central data storage is a natural approach, as in smart homes and smart cities.

Fog vs edge computing checklist

To make it easier to decide which computing model is right for your IoT application, we've put together this checklist. A "yes" to any of the first five questions points toward fog computing; a "yes" to any of the last five points toward edge computing.

QUESTIONS IN FAVOR OF FOG COMPUTING

  1. Does your application require high computing capabilities?
  2. Is central data storage a natural approach for your IoT application?
  3. Does your application involve a lot of coordination between devices?
  4. Do you need the device to perform different tasks at once?
  5. Are you willing to invest in infrastructure development?

QUESTIONS IN FAVOR OF EDGE COMPUTING

  1. Is latency critical for your application?
  2. Should your application be resistant to connectivity disruptions?
  3. Can your application be reduced to a single, well-defined task?
  4. Does your application need to be extremely power efficient?
  5. Does your application require low implementation and maintenance costs?

Now count the yeses for fog and for edge computing, respectively. If the difference between the counts is 3 or more (e.g. 5 for fog, 2 for edge), you have a clear winner. If not, you'll have to dig deeper (we'll be happy to assist if needed).
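
If you'd rather automate the tally, here's a throwaway Python snippet implementing the scoring rule above (the answer values are just an example):

    # Score the checklist: True = "yes". Order follows the questions above.
    fog_answers = [True, True, False, True, True]
    edge_answers = [False, True, False, False, True]

    fog_score, edge_score = sum(fog_answers), sum(edge_answers)
    if abs(fog_score - edge_score) >= 3:
        winner = "fog" if fog_score > edge_score else "edge"
        print(f"Clear winner: {winner} computing ({fog_score} vs {edge_score})")
    else:
        print(f"Too close to call ({fog_score} vs {edge_score}): dig deeper")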

The future of fog and edge computing

As IoT proliferates, fog and edge computing will continue to gain traction. Whether edge or fog computing will end up being the dominant computing paradigm remains to be seen. One thing is for sure, though: IoT developers will need to master both to remain competitive.

If you found this article useful, make sure to check out our other articles as well.
