Machine Learning at the Edge

Spectacular advances in machine learning have enabled innovation in a wide variety of application domains (e.g. image recognition, pattern recognition, sensor fusion, …). A clear trend, however, is the growing complexity of these models, which makes them less suitable for the massive distributed deployment targeted in many IoT scenarios.

Solutions where the cloud plays an important role are intensively studied worldwide. However, purely centralized cloud-based setups incur latency and bandwidth dependencies that are too high for edge devices relying on real-time processing of sensor data. Instead, our aim is to realize intelligent services on distributed execution platforms as close as possible to the data sources and users at the network edge. Our overarching research direction is to realize such intelligence under uncertainty and dynamism in resource availability, e.g. a varying number of devices in the vicinity, fluctuations in wireless throughput, or failing remote sensors. Targeted platforms include wearables, head-up displays, mobile devices, gateways and robots. Edge cloud nodes can deliver additional resources with low latency and high throughput.

Our research team is currently focusing on:

  • intelligence with spatially distributed input/output: The Internet of Things consists of many distributed sensors and actuators, yet each typically operates on a sense-analyze-react loop that runs purely on-device. Realizing smart environments requires the cooperation and integration of all these devices into a unifying intelligence.
  • deep learning on resource-constrained devices: Generalization is one of the most attractive aspects of deep learning. We research how to implement these computationally demanding techniques on the device itself, for example on an embedded GPU or a neuromorphic chip. This involves a trade-off between computing power and fidelity, a trade-off that need not be fixed at design time; a sketch of such a runtime decision is given after this list.
  • edge-cloud assisted services: Edge clouds, sometimes referred to as cloudlets or fog nodes, are a vital cornerstone for latency-sensitive or data-intensive services. Resource and application management is complicated by the specific and fast-changing performance requirements of typical edge-cloud services.
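
To make the runtime compute/fidelity trade-off concrete, a minimal sketch in PyTorch is given below. It cascades a compact on-device model with a larger, more accurate one, escalating only when the compact model is not confident enough. The network definitions, threshold and names are illustrative placeholders rather than our actual implementation; the larger model could equally well run on an embedded GPU or be offloaded to a nearby edge-cloud node.

```python
import torch
import torch.nn as nn

# Two hypothetical model variants: a compact "edge" model and a larger,
# more accurate "full" model. Real deployments would load trained networks,
# e.g. a pruned or quantized variant for the constrained device.
class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.net(x)

class LargeNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

small, large = SmallNet().eval(), LargeNet().eval()

def classify(x, confidence_threshold=0.8):
    """Run the compact model first; escalate to the larger model
    (possibly on an edge-cloud node) only when the compact model
    is not confident enough."""
    with torch.no_grad():
        probs = torch.softmax(small(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= confidence_threshold:
            return pred.item(), conf.item(), "small"
        probs = torch.softmax(large(x), dim=1)
        conf, pred = probs.max(dim=1)
        return pred.item(), conf.item(), "large"

# Example: classify a single 32x32 RGB frame
frame = torch.randn(1, 3, 32, 32)
print(classify(frame))
```

In this sketch, lowering confidence_threshold shifts work toward the compact model (less compute, lower fidelity), while raising it favors the large model; the same decision can be re-evaluated at runtime as resource availability changes, rather than being fixed at design time.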


Staff

Bart Dhoedt, Pieter Simoens

Researchers

Steven Bohez, Elias De Coninck, Sam Leroux, Piet Smet, Pieter Van Molle, dr. Bert Vankeirsbilck, dr. Tim Verbelen

Projects

Key publications