Aussie AI

Edge Computing

  • Last Updated 8 December, 2024
  • by David Spuler, Ph.D.

Edge Computing is the term researchers use for running computations on low-resource devices. Devices on the "edge" are "close" to the user but "far away" from the larger servers in the cloud. The goal is therefore to run machine learning code on these smaller devices. Examples of edge devices include:

  • Smartphones (see AI Smartphones)
  • Desktops and laptops
  • Cars (e.g. autonomous self-driving cars)
  • Video cameras (e.g. security cameras)
  • Internet of Things (IoT) devices (e.g. industrial devices, refrigerators, network stations, etc.)

Running AI models on edge devices usually means inference only, because small devices typically cannot support the processing power and/or storage costs of training. However, there is some research into "on-device training."
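As a minimal sketch of this inference-only pattern, the example below assumes a model whose weights were trained elsewhere and then quantized to 8-bit integers to fit a small device's memory budget; the weights here are random placeholders, and the whole "model" is a single layer, purely for illustration.

```python
import random

random.seed(0)

# "Pre-trained" weights: training happened off-device; these are placeholders.
W = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]

# Quantize to 8-bit integers to shrink the on-device memory footprint.
max_abs = max(abs(w) for row in W for w in row)
scale = max_abs / 127.0
W_q = [[round(w / scale) for w in row] for row in W]

def edge_infer(x):
    """Inference only: one dequantized matrix-vector product plus ReLU.

    No gradients, no weight updates -- the device never trains.
    """
    return [max(0.0, sum(x[i] * W_q[i][j] * scale for i in range(4)))
            for j in range(8)]

y = edge_infer([1.0, -0.5, 0.25, 2.0])
```

The quantized weights cost one byte each instead of four, at the price of a small rounding error (at most half the quantization scale per weight).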

Many architectures that use edge computing involve multiple machines, at minimum an edge device and a main server. Hence, much of the research into multi-model techniques such as distributed inference is also relevant.
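One common form of distributed inference across two machines is "split" inference: the edge device computes the early layers and sends only the intermediate activations to the server, which computes the rest. The sketch below simulates the network hop with a plain function call, and the tiny weight matrices are illustrative placeholders.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

W_device = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]   # early layers (on-device)
W_server = [[0.9, -0.1, 0.2], [0.3, 0.7, -0.5]]     # later layers (in cloud)

def device_forward(x):
    # Runs locally; only the small activation vector leaves the device.
    return relu(matvec(W_device, x))

def server_forward(activations):
    # Runs remotely, on the intermediate activations rather than raw input.
    return matvec(W_server, activations)

result = server_forward(device_forward([1.0, 2.0]))
```

A side benefit of this split is that the raw input (e.g., a camera frame) never leaves the device, only a compressed intermediate representation does.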

Survey Papers on Edge Computing

Research on Edge Computing

There are plenty of papers on edge computing to choose from:

Hybrid Edge-Cloud Architectures

A hybrid architecture is one where some processing is done on edge devices (e.g., PCs or security cameras), and some is passed up to the cloud for more powerful processing. The "Apple Intelligence" architecture is a prominent recent example, with some processing done "on-device" for iPhones and Macs, and the rest passed up to the cloud.
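A minimal sketch of one hybrid routing policy, assuming a cascade: a small on-device model answers when it is confident, and otherwise the query is escalated to a larger cloud model. Both "models" and the length-based confidence heuristic are stand-ins for illustration only.

```python
CONFIDENCE_THRESHOLD = 0.8  # tunable; below this, escalate to the cloud

def small_on_device_model(query):
    # Placeholder: returns (answer, confidence in [0, 1]).
    # Here, short queries are treated as "easy" purely for demonstration.
    return ("local answer", 0.9 if len(query) < 20 else 0.4)

def large_cloud_model(query):
    # Placeholder for an expensive remote call to a bigger model.
    return "cloud answer"

def hybrid_infer(query):
    """Answer locally when confident; otherwise offload to the cloud."""
    answer, confidence = small_on_device_model(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer, "device"
    return large_cloud_model(query), "cloud"
```

The threshold trades off latency and cost (more local answers) against quality (more cloud answers), and is typically tuned per application.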

More AI Research

Read more about: