About LeadingEdge
Vision
The LeadingEdge project will deliver a novel, holistic framework for efficiently coping with unresolved challenges in edge-computing ecosystems: dynamic resource provisioning to multiple coexisting services amidst unknown service- and system-level dynamics. The project approach is three-faceted: it will optimize intra-service resource provisioning, inter-service resource coordination, and user-perceived quality of experience (QoE).
Aim of the project
- First, at service level, we will develop a framework, grounded in first principles, for opportunistic use of edge and cloud computation, bandwidth, and cache resources according to instantaneous resource availability, mobility, connectivity, service resource requirements, and service demand. Our approach will rely on solid online-learning theories, such as online convex optimization (OCO) and transfer learning, to cope with our inherent inability to predict demand, mobility, and other dynamic processes that affect resource allocation. It will also use extreme-value theory and stochastic optimization for a full-fledged study of the latency-reliability trade-off that is fundamental for mission-critical services.
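To make the OCO idea above concrete, the following is a minimal sketch (not project code) of how online gradient descent, a standard OCO algorithm, could allocate a fixed resource budget across edge nodes when the per-round demand is revealed only after each allocation. All names, the cost model, and the unit budget are illustrative assumptions.

```python
import numpy as np

def project_simplex(v, budget=1.0):
    # Euclidean projection onto {x >= 0, sum(x) = budget},
    # i.e. keep the allocation feasible after each gradient step.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - budget
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def ogd(grad_fn_seq, dim, budget=1.0, eta=0.1):
    # Online gradient descent: commit to an allocation x_t, then observe
    # a (sub)gradient of the round-t cost revealed by the environment.
    # Classical OCO theory bounds the regret of this scheme by O(sqrt(T))
    # without any prediction of future demand.
    x = np.full(dim, budget / dim)   # start from a uniform allocation
    history = []
    for grad_fn in grad_fn_seq:
        history.append(x.copy())
        g = grad_fn(x)               # feedback for the committed x_t
        x = project_simplex(x - eta * g, budget)
    return history

# Hypothetical usage: 3 edge nodes, cost = unmet demand, with demand
# d_t observed only after allocating; the subgradient is -1 wherever
# demand exceeds the current allocation.
demand = np.array([0.5, 0.2, 0.1])
rounds = [lambda x: -(demand > x).astype(float)] * 20
allocations = ogd(rounds, dim=3)
```

The projection step is what keeps the learned allocation within the resource budget; replacing the toy unmet-demand cost with measured latency or backlog would be the natural next step in such a design.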
- Second, at system level, we will develop an AI-empowered service orchestrator, based on reinforcement learning and context awareness, that performs network slicing and service-chain placement so that instantaneous service-level requirements are fulfilled. The OpenAirInterface.org (OAI) and Mosaic-5g.io software platforms will be used as real-time experimentation environments with full 4G/5G functionality to place services, direct traffic from users to servers, and measure latency and other QoE metrics.
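As a toy illustration of the learning-based orchestration described above, the sketch below uses an epsilon-greedy bandit (a simple special case of reinforcement learning) to choose a placement for a service from observed rewards such as negative latency. The action names, reward signal, and class interface are hypothetical, not the project's orchestrator.

```python
import random

class PlacementBandit:
    """Epsilon-greedy placement chooser (illustrative only).

    Actions are hypothetical placement targets; rewards would come
    from measured QoE metrics, e.g. reward = -latency_ms.
    """

    def __init__(self, actions=('edge', 'cloud'), epsilon=0.1):
        self.epsilon = epsilon
        self.q = {a: 0.0 for a in actions}   # running mean reward per action
        self.n = {a: 0 for a in actions}     # times each action was tried

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the
        # placement with the best average reward seen so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Incremental mean update of the action-value estimate.
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]
```

A full orchestrator would condition these choices on context (load, slice requirements) and act over service chains rather than single placements, but the explore/measure/update loop is the same shape.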
- Finally, at user level, we will leverage the community-network infrastructure of guifi.net as an edge network to deploy services at scale in a controlled manner and to directly measure their impact on user QoE. The outcome of these user-level studies will be continually fed back to guide the service- and system-level optimization.