Edge supercomputing to transform the great data deluge

By Veerbhan Kheterpal

With the explosion of internet of things (IoT) technology and sensors over the past decade, one thing has become clear: there is currently no easy way to manage, much less leverage, all the data continuously generated by connected devices. Yet realizing the promise of artificial intelligence (AI) hinges on using the right data for real-time decision making. As things now stand, we have the data, but we can’t really learn from it yet.

Part of the challenge is the sheer number of IoT devices churning out a deluge of data. Verizon estimates that there are more than one million connected devices per square kilometer, which seems implausible until you start counting. From smartphones to security cameras, medical devices to agricultural sensors, IoT devices are everywhere. To get a sense of how much data these devices produce, consider this: according to Verizon, a single connected car generates more data than all of Facebook on any given day. Multiply that output by all of the connected devices, wireless sensors and industrial robots deployed worldwide, and it’s easy to envision a tsunami of data swamping our ability to make timely decisions.
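
To make the scale concrete, here is a rough back-of-envelope estimate. Every per-device rate and the device mix below are illustrative assumptions, not figures from Verizon or any other source:

    # Rough, illustrative estimate of daily data volume in one square kilometer.
    # All per-device rates and the device mix are assumptions, not measurements.

    DEVICES_PER_KM2 = 1_000_000  # density estimate cited above

    assumed_rates_bytes_per_day = {
        "security_camera": 50e9,  # ~50 GB/day for a continuous HD video stream
        "connected_car": 4e12,    # ~4 TB/day, a commonly quoted ballpark
        "simple_sensor": 1e6,     # ~1 MB/day of telemetry
    }

    assumed_mix = {               # assumed share of each device class
        "security_camera": 0.01,
        "connected_car": 0.001,
        "simple_sensor": 0.989,
    }

    total_bytes = sum(
        DEVICES_PER_KM2 * assumed_mix[kind] * rate
        for kind, rate in assumed_rates_bytes_per_day.items()
    )
    print(f"Estimated data generated per km^2 per day: {total_bytes / 1e12:,.0f} TB")

Under these assumed figures, a single square kilometer generates several petabytes per day, and the total is dominated by a relative handful of data-heavy devices such as cameras and cars.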

Where does all this data go? An estimated 80 percent of edge data is lost because bandwidth, privacy, latency, or cost constraints prevent it from ever reaching the cloud for processing. To deliver on the promise of AI, we must radically improve networking and computational efficiencies. Real-time decision making with IoT data requires ultra-fast connectivity and computing – like neural pathways in our brains, but faster, smarter and more reliable.
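
A simple comparison of generated volume against available uplink shows why so much data never leaves the edge. Both figures below are assumptions for illustration: the uplink value is a hypothetical aggregate for the area, and the generated volume carries over from the sketch above:

    # Why edge data gets dropped: generated volume vs. available uplink.
    # Both figures are illustrative assumptions.

    SECONDS_PER_DAY = 86_400

    generated_tb_per_day = 4_500   # rough estimate from the calculation above
    assumed_uplink_gbps = 10       # assumed aggregate uplink for the same area

    # Convert gigabits/second to terabytes/day: Gb/s -> GB/s (/8) -> TB/day.
    uplink_tb_per_day = assumed_uplink_gbps / 8 * SECONDS_PER_DAY / 1_000
    share_transmittable = min(1.0, uplink_tb_per_day / generated_tb_per_day)

    print(f"Uplink capacity: {uplink_tb_per_day:.0f} TB/day")
    print(f"Share of edge data that can reach the cloud: {share_transmittable:.0%}")

Under these assumptions, only a few percent of the data could ever be backhauled, the same order as the 80-percent-lost estimate above.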

Existing cloud computing and networking technologies are not optimized to handle the massive amount of edge data generated by IoT devices. High-performance servers deployed in large-scale data centers consume too much power and are unwieldy to deploy close to the edge. Fortunately, there is a solution to this data problem: bring computational intelligence to the data at the edge, rather than hauling the data to centralized compute.

Next-generation growth in computing infrastructure will not occur in the data center. It will arise at the edge – on-premises and on-device. While there are several terms for intelligence at the edge (fog computing, low cloud, high cloud, etc.), we mean everything outside the data center. According to Forrester Research, growth in edge computing is underway and driven by the following factors:

  • Rapid expansion of the IoT and machine-to-machine connectivity
  • Sophisticated algorithms and new applications, such as AI, machine learning, neural networks, autonomous vehicles and virtual/augmented reality, that require low latency and high reliability
  • An increasingly mobile and distributed workforce
  • Bandwidth and connectivity limitations impacting cloud computing
  • The high cost of data transit and storage
  • Evolving data privacy requirements

For investors paying close attention to industry trends and market dynamics, there’s no doubt unicorns will be built on the back of edge computing and edge server technologies. The next decade will see innovations in computing “outside” the data center. We will see the rapid rise of a new paradigm: edge supercomputing. The following figure illustrates how computing infrastructure characteristics trade off as we move away from the data center model and closer to intelligent, computationally powerful field devices.

Figure: The computing infrastructure tradeoffs to consider as we move away from the data center model and closer to intelligent, computationally powerful field devices (Image: Quadric)

As we move closer to edge devices in the field (think automotive, industrial IoT equipment and medical devices), the time to market and investment needed to embed high-performance compute capabilities into those devices will rise significantly. Real-time applications such as autonomous driving will require onboard computing resources.
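
To see why onboard compute matters, consider a rough latency budget: how far a vehicle travels while a decision is in flight. The round-trip times below are assumed ballpark figures, not measurements:

    # How far a vehicle travels while waiting on a decision, at highway speed.
    # Round-trip latencies are assumed ballpark figures for illustration.

    speed_kmh = 100
    meters_per_ms = speed_kmh * 1_000 / 3_600 / 1_000  # ~0.028 m per millisecond

    assumed_round_trip_ms = {      # sense -> decide -> actuate, ballpark figures
        "onboard compute": 10,
        "edge data center": 30,
        "regional cloud": 120,
    }

    for path, ms in assumed_round_trip_ms.items():
        print(f"{path:>16}: {ms:3d} ms -> vehicle travels {ms * meters_per_ms:4.1f} m")

At highway speed, a cloud round trip costs the vehicle meters of travel, while onboard compute keeps the decision loop within a fraction of a meter.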

On the other hand, bandwidth-constrained applications can be handled efficiently by adding on-premises servers or edge data centers; a minimal placement sketch follows the list below. This shift toward edge computing will require a reimagined IT strategy that considers:

  • Extended DevOps. Deliver DevOps practices that are not confined to the cloud but extend to edge devices and everywhere in between.
  • Support and operations realigned for the edge. Provide software support that spans well beyond x86 CPUs and CUDA GPUs to architectures that are better suited for edge or embedded servers. Since algorithm workloads are ever-evolving, it’s critical to deploy flexible hardware architectures to run different types of workloads in multi-tenant environments.
  • Reprioritizing capital allocation. Consider investments in deploying on-premises edge servers or increasing edge data center capacity.
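
As a minimal sketch of the placement tradeoff described above, the rule below routes workloads by latency deadline and data rate. The thresholds, workload names, and attributes are all invented for illustration:

    # Minimal sketch of an edge-vs-cloud placement rule.
    # Thresholds and workload attributes are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        max_latency_ms: float   # tightest response deadline
        data_rate_mbps: float   # raw data produced per second

    def place(w: Workload, uplink_mbps: float = 100.0) -> str:
        """Decide where a workload runs, based on latency and bandwidth needs."""
        if w.max_latency_ms < 20:
            return "on-device"          # hard real-time: compute must be onboard
        if w.data_rate_mbps > uplink_mbps:
            return "on-premises edge"   # too heavy to backhaul: filter locally
        return "cloud"                  # latency-tolerant and light enough to ship

    for w in [
        Workload("collision avoidance", max_latency_ms=10, data_rate_mbps=800),
        Workload("video analytics", max_latency_ms=200, data_rate_mbps=400),
        Workload("fleet reporting", max_latency_ms=5000, data_rate_mbps=1),
    ]:
        print(f"{w.name:>19} -> {place(w)}")

The point is not the specific thresholds but the decision order: hard real-time work stays on-device, bandwidth-heavy work stays on-premises, and everything else can still go to the cloud.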

Adding compute-near-data capabilities to today’s operational architectures is as critical as adding cloud compute capabilities was in recent years. The benefits of deploying edge supercomputing across multiple industries will be astounding as real-time decision making for machines becomes a reality, ushering in a world of possibilities we’ve yet to conceive of and enabling the many innovations we’ve been patiently waiting for.

— Veerbhan Kheterpal is CEO of Quadric.
