Last updated on November 26th, 2022 at 03:24 pm
Edge computing is changing the way we store, process, analyze and transmit data generated by billions of IoT and other devices.
The initial goal of edge computing was to reduce the bandwidth cost of transporting raw data from where it was created to the corporate data center or cloud. Recently, however, the concept is evolving into a technology that supports real-time applications that require minimal latency, such as autonomous vehicles and multi-camera video analysis. The worldwide proliferation of 5G wireless standards is closely related to edge computing as 5G will enable faster processing for these advanced low-latency applications.
Edge Computing Definition
Gartner defines edge computing as ‘the part of a distributed computing topology where information processing is located close to the edge where information is generated or consumed’.
At its most basic level, edge computing places compute and data storage close to the devices that generate and collect data, rather than relying on central data centers thousands of kilometers away. This spares the data, especially real-time data, the latency issues that can degrade application performance. In addition, enterprises can cut costs by reducing the amount of data that must be transferred to a central data center or to the cloud.
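A rough back-of-envelope calculation shows why distance matters. The sketch below assumes signals travel through fiber at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum); real-world latency is higher still because of routing, queuing, and processing, so treat these as lower bounds.

```python
# Back-of-envelope: round-trip propagation delay over fiber.
# Assumes ~200,000 km/s signal speed in fiber; real latency is higher.
FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

for label, km in [("edge node (10 km)", 10),
                  ("regional cloud (1,000 km)", 1_000),
                  ("distant cloud (10,000 km)", 10_000)]:
    print(f"{label}: {round_trip_ms(km):.2f} ms")
# edge node: 0.10 ms, regional cloud: 10.00 ms, distant cloud: 100.00 ms
```

Even before any processing happens, a data center on another continent costs two orders of magnitude more round-trip time than a nearby edge node.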
Think of a device that monitors manufacturing equipment on a factory floor, or an internet-connected video camera that streams live footage from a remote office. A single device can easily transmit its data over a network, but problems arise as the number of devices transmitting simultaneously grows. Instead of one camera streaming live video, picture hundreds or thousands. Not only does latency degrade quality, but bandwidth costs can become astronomical.
Edge computing hardware and services help solve this problem by placing compute and storage resources in the field, close to these systems. An edge gateway, for example, can process the data received from edge devices and transmit only the meaningful portion to the cloud. When a real-time response is required, the processed result can be sent back to the edge device.
5G and Edge Computing
Edge computing can be deployed on networks other than 5G, such as 4G LTE. Conversely, adopting 5G alongside edge computing is not always the right answer for every enterprise; in fact, it is difficult for enterprises to reap real benefits from 5G unless they have an edge computing infrastructure in place.
Dave McCarthy, research director for edge strategy at IDC, said: “5G by itself reduces network latency between endpoints and base stations, but does not address the distance to the data center. This can be a problem for latency-sensitive applications.”
Mahadev Satyanarayanan, professor of computer science at Carnegie Mellon University (CMU) and lead author of a 2009 paper that laid the foundations for edge computing, agrees with McCarthy: “If you need to communicate with a data center in your country or on the other side of the globe, would even zero milliseconds in the last mile make a difference?”
The relationship between edge computing and 5G will only deepen as more 5G networks are deployed, but enterprises can still build edge computing infrastructure where needed over a variety of network types, such as wired networks or Wi-Fi. That said, edge infrastructure is more likely to use 5G because of the speeds it provides, especially in rural areas where wired networks are unavailable.
How edge computing works
The physical architecture of the edge can be complex, but the basic idea is to connect client devices to nearby edge modules for more responsive processing and smoother operation. Edge devices can range from IoT sensors to employees’ laptops and smartphones, security cameras, or even internet-connected microwaves in office break rooms.
In an industrial environment, an edge device could be an autonomous mobile robot or a robotic arm in an automobile factory. In the medical sector, it could be a state-of-the-art surgical system that allows doctors to operate from remote locations. The gateway that aggregates these devices is itself considered an edge device within the edge computing infrastructure; because terminology varies, this module may also be called an edge server or edge gateway.
While many service providers looking to support edge networks will deploy fleets of edge gateways or servers, companies planning private edge networks should consider this hardware as well.
How to Deploy Your Own Edge Computing System
There are many ways to purchase and deploy edge systems. At one end of the spectrum, a company may want to handle most of the process itself. This includes selecting edge equipment from hardware vendors such as Dell, HPE, and IBM, configuring the network for the purpose, and purchasing management and analytics software. Although this involves considerable work and demands significant expertise from the IT department, it remains an attractive option for large enterprises that want a fully custom edge deployment.
At the other end, solution providers specializing in particular industries increasingly promote edge services delivered as managed services. The provider installs and manages the infrastructure (hardware, software, and networking), and the company pays for usage and maintenance. IIoT offerings from vendors such as GE and Siemens fall into this category. This approach is easy and relatively trouble-free to deploy, but such managed services may not fit every use case.
Featured Edge Computing Use Cases
As the number of internet-connected devices continues to grow, so do the use cases for cutting costs or exploiting low latency with edge computing.
Verizon Business, for example, showcases a range of edge computing scenarios: lifetime quality-control processes for manufacturing equipment; pop-up network ecosystems that stream real-time content with sub-second latency over 5G; edge-enabled sensors that provide detailed images of crowds in public places to improve health and safety; and logistics and digital-twin technology that builds sophisticated product-quality models to extend insight into manufacturing processes.
Solutions for edge computing depend on the type of deployment. Industrial sites, for example, will prioritize reliability and low latency, requiring rugged edge nodes and dedicated communication links (private 5G, dedicated Wi-Fi networks, wired connections, and so on) that can operate in harsh factory environments. Smart farms also require rugged edge equipment for outdoor deployments, but their networks may vary: low latency is needed to coordinate the movement of heavy equipment, while environmental sensors typically need long range and low data throughput, for which LPWAN technologies such as Sigfox are often the best choice.
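Low throughput is a hard constraint on such networks: a Sigfox uplink frame carries at most 12 bytes of payload, so readings are packed as fixed-point integers rather than, say, JSON. The field layout below (sensor id, temperature, humidity, battery voltage, timestamp) is an illustrative choice, not a standard format.

```python
# Packing an environmental reading into a 12-byte LPWAN payload.
# Sigfox uplink frames carry at most 12 bytes, so values are encoded as
# fixed-point integers. The field layout here is an illustrative choice.
import struct

def encode_reading(sensor_id: int, temp_c: float, humidity_pct: float,
                   battery_mv: int, ts_minutes: int) -> bytes:
    # >HhHHI = big-endian: uint16 id, int16 temp, uint16 humidity,
    # uint16 battery (mV), uint32 timestamp; temp/humidity in hundredths.
    payload = struct.pack(">HhHHI", sensor_id,
                          int(temp_c * 100), int(humidity_pct * 100),
                          battery_mv, ts_minutes)
    assert len(payload) <= 12
    return payload

frame = encode_reading(7, 21.37, 48.5, 3600, 12_345_678)
print(len(frame), frame.hex())  # exactly 12 bytes
```

A gateway or cloud endpoint can recover the values with the matching `struct.unpack(">HhHHI", frame)` call.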
As the use case changes, so do the challenges. Retailers can use edge nodes as an in-store clearinghouse for a variety of functions: linking point-of-sale (POS) data with targeted promotions, tracking foot traffic, and more in an integrated store-management application. Here, the network can be as simple as in-store Wi-Fi connecting every device, or more complex, using Bluetooth or other low-power connections to track traffic and deliver promotions while reserving Wi-Fi for POS and self-checkout.
Edge Computing Benefits
Many businesses can benefit from deploying edge computing with cost savings alone. Enterprises that initially adopted the cloud for many applications are looking for cheaper alternatives due to the higher-than-expected cost of bandwidth. Edge computing may be a good fit for these businesses.
However, the biggest benefit of edge computing is its ability to process and store data faster, supporting real-time, business-critical applications more efficiently. Before edge computing, a smartphone scanning a person's face for facial recognition had to run the recognition algorithms through a cloud-based service, which took considerable time. With an edge computing model, the algorithms can run locally on an edge server or gateway, or even on the smartphone itself.
Applications such as AR/VR, autonomous vehicles, smart cities, and building automation systems require this level of rapid processing and response.
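The "run it locally if you can, fall back to the cloud if you must" pattern behind the facial-recognition example can be sketched as a small dispatcher. Everything here is a stand-in: `local_model` and `cloud_infer` are hypothetical callables, not a real inference API.

```python
# Sketch of an edge-first inference dispatcher: run the model locally when
# it is available on the device or gateway, otherwise fall back to a cloud
# service. `local_model` and `cloud_infer` are stand-ins, not a real API.
from typing import Callable, Optional

def make_dispatcher(local_model: Optional[Callable[[bytes], str]],
                    cloud_infer: Callable[[bytes], str]) -> Callable[[bytes], str]:
    def infer(frame: bytes) -> str:
        if local_model is not None:
            return local_model(frame)   # stays on the edge: milliseconds
        return cloud_infer(frame)       # adds a full network round trip
    return infer

# Toy stand-ins for demonstration:
edge = make_dispatcher(lambda f: "face:edge", lambda f: "face:cloud")
no_edge = make_dispatcher(None, lambda f: "face:cloud")
print(edge(b"frame"), no_edge(b"frame"))  # face:edge face:cloud
```

The latency-sensitive applications listed above differ mainly in how often the local branch must win for the experience to feel real-time.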
Edge Computing and AI
Many solution vendors, including Nvidia, recognize the need for more processing at the edge and continue to develop hardware accordingly, including modules with built-in AI capabilities. Nvidia’s latest offering includes the Jetson AGX Orin Developer Kit, a compact, energy-efficient AI supercomputer for developers of robotics, autonomous machines, and next-generation embedded and edge computing systems. Compared with its predecessor, Xavier, Orin delivers significantly improved performance and AI acceleration.
AI algorithms require a large amount of processing power, which is why most of them run through cloud-based services. But AI chipsets that can perform these tasks at the edge are developing rapidly, so the number of edge computing systems running AI workloads will continue to grow.
Privacy and security concerns
From a security standpoint, data at the edge can be a problem, and the risk grows when that data is handled by a variety of devices that may not be as secure as a central data center or cloud-based system. As IoT devices proliferate, IT teams must understand the potential security issues and be able to secure these systems. This includes encrypting data, adopting access-control methods, and using VPN tunneling.
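One small piece of this, authenticating device payloads so a gateway can reject tampered or spoofed messages, can be sketched with Python's standard library. This covers integrity and authenticity only; confidentiality would additionally require encryption in transit (for example, TLS between device and gateway), and the shared-key provisioning step is assumed to happen out of band.

```python
# Authenticating edge-device payloads with HMAC-SHA256 (Python stdlib).
# Integrity/authenticity only; real deployments would add encryption (TLS)
# and a proper key-provisioning scheme. The key below is illustrative.
import hmac
import hashlib

DEVICE_KEY = b"per-device-secret-provisioned-at-enrollment"  # illustrative

def sign(payload: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"sensor": 7, "temp_c": 21.4}'
tag = sign(msg)
print(verify(msg, tag))                              # True
print(verify(b'{"sensor": 7, "temp_c": 99.9}', tag)) # False: tampered
```

Each device signs its payloads with its own key, so a compromised or spoofed sensor cannot silently inject false readings into the pipeline.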
Additionally, differing device requirements for processing power, electricity, and network connectivity can affect the reliability of edge devices. Availability and failover management are therefore essential for devices that process data at the edge, so that data is still delivered and processed correctly when a single node goes down.
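A minimal version of that failover logic might look like the following. The node names and the `send` callable are illustrative; a production deployment would add health checks, retries, and backoff rather than a simple first-success scan.

```python
# Minimal failover sketch: try nearby edge nodes in order of preference and
# fall back to a cloud endpoint when none respond. Node names and the `send`
# transport callable are illustrative assumptions.
from typing import Callable, Sequence

def send_with_failover(payload: bytes,
                       nodes: Sequence[str],
                       send: Callable[[str, bytes], bool]) -> str:
    """Return the name of the first node that accepts the payload."""
    for node in nodes:                 # preferred, nearby edge nodes first
        if send(node, payload):
            return node
    raise RuntimeError("all nodes unavailable")

# Toy transport in which the nearest edge node is down:
up = {"edge-a": False, "edge-b": True, "cloud": True}
chosen = send_with_failover(b"data", ["edge-a", "edge-b", "cloud"],
                            lambda node, payload: up[node])
print(chosen)  # edge-b
```

Ordering the candidate list from nearest edge node to cloud keeps the latency benefit whenever possible while still guaranteeing delivery when a node fails.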