Position description

Institution: Université Paris-Saclay, GS Informatique et sciences du numérique
Doctoral school: Sciences et Technologies de l'Information et de la Communication
Research laboratory: Laboratoire Interdisciplinaire des Sciences du Numérique
Thesis supervisor: Lila BOUKHATEM
Thesis start date: 2026-10-01
Application deadline: 2026-05-12T23:59:59

The emergence of sixth-generation (6G) networks opens unprecedented opportunities for delivering intelligent, energy-efficient, and sustainable connectivity across diverse and heterogeneous infrastructures. This will be enabled by the integration of ultra-dense radio access, native AI-based control, and computing distributed across devices, edge nodes, and cloud data centers.

Although 5G systems brought notable energy-efficiency improvements over 4G, these gains have in practice been largely offset by the rapid growth of data traffic, the number of connected devices, and the always-on availability of services. As a result, overall energy consumption has continued to rise. This highlights the need for further advances in 6G, which should place sustainability at the core of its design and operation, going beyond efficiency optimization to actively reduce the environmental impact of the entire digital ecosystem.

In this context, the concept of Green Edge-Cloud networks is emerging as a key approach for reconciling digital performance and sustainability. It aims to optimize resource usage, improve energy efficiency, and reduce carbon emissions by embedding environmental considerations at the heart of infrastructure design and operation.

This thesis project is structured around the following question: how can 6G edge-cloud infrastructures mitigate their environmental impact while maintaining, or even improving, system performance? More specifically, it studies how carbon emissions, network performance (such as latency), and operational costs can be jointly modeled in a unified framework for distributed systems, and how the resulting multi-objective problem defines the trade-off to target. It also explores whether AI-based methods, such as reinforcement learning, can outperform classical optimization techniques in identifying efficient real-time scheduling policies in environments with dynamic workloads. Finally, the study examines how different traffic-load characteristics influence optimal resource-allocation decisions and seeks to quantify the carbon cost of communications in future 6G network environments.
1) Defining a holistic architecture and framework
We will propose an energy-centric architecture that tightly integrates the radio access network (RAN), edge computing nodes, and cloud data centers into a unified system. Unlike traditional designs where communication and computation are optimized independently, this framework treats energy as a core system variable across all layers and domains.
The architecture will extend concepts from Mobile Edge Computing and Cloud Computing by incorporating the RAN as an active participant in computation and energy management. In this continuum, RAN nodes will provide both connectivity and lightweight computation, edge nodes will host latency-critical services and AI inference, and cloud data centers will perform large-scale processing and global optimization. These components will be coordinated through a common orchestration framework rather than operating as isolated silos.
2) Carbon- and Energy-Aware Data Center Integration
New metrics have been defined to better assess data center sustainability. Power Usage Effectiveness (PUE), introduced by The Green Grid in 2007, became the standard benchmark. It was later complemented by Carbon Usage Effectiveness (CUE), which measures operational carbon emissions, and Water Usage Effectiveness (WUE), which evaluates the water impact of cooling systems.
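The three metrics above are simple ratios over the facility's IT energy consumption. The following sketch shows how they might be computed; all numeric values are illustrative placeholders, not measurements from any real facility.

```python
# Illustrative computation of the sustainability metrics discussed above.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy_kwh / it_energy_kwh

def cue(total_co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon Usage Effectiveness: operational CO2 emissions per unit of IT energy."""
    return total_co2_kg / it_energy_kwh

def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water usage per unit of IT energy."""
    return water_liters / it_energy_kwh

# Hypothetical facility: 1.5 MWh total, 1.0 MWh IT load, 300 kg CO2, 1800 L water.
print(pue(1500, 1000))  # 1.5 (ideal is 1.0: all energy reaches IT equipment)
print(cue(300, 1000))   # 0.3 kgCO2/kWh
print(wue(1800, 1000))  # 1.8 L/kWh
```

A PUE of 1.5 means that for every kWh consumed by servers, an extra 0.5 kWh goes to cooling, power distribution, and other overhead.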
Since the formalization of the foundations of Green Cloud computing, several research works have highlighted the need for co-optimization and a balanced approach across both the application and infrastructure layers [4]. They further showed that eco-efficient resource allocation heuristics can reduce energy consumption by up to 30% without significant service degradation [4]. Since 2025, many organizations in the EU have been subject to detailed sustainability reporting requirements, making accurate and automated carbon footprint measurement increasingly essential.
Some research works have focused on reducing this footprint through the optimized use of renewable energy [5]. One of the key challenges is dealing with the intermittency of such energy production. The authors in [5] analyzed the economic and environmental costs of different data center configurations, with or without renewable energy and energy storage systems. Moreover, in the context of a distributed edge-type infrastructure, they studied and evaluated several techniques (collaboration between data centers, adaptation of application quality of service, consolidation onto a reduced number of servers) to reduce energy consumption.
The main challenge in this work will be to develop new models that represent end-to-end service execution by linking transmission energy (dependent on network conditions and topology), computation energy (driven by workload characteristics and hardware efficiency), and infrastructure overhead such as cooling and renewable energy (varying with load and environmental conditions). To be effective, such models must be cross-layer, dynamic, and context-aware, incorporating temporal workload variations, heterogeneous resources, and the impact of task placement decisions. A key research question will also be to investigate the trade-offs between fine-grained models (e.g., per-task or per-function energy estimation) and global service-level models, in terms of accuracy, scalability, and usability for real-time optimization. The objective is to establish modeling approaches that are both sufficiently precise and practically tractable to support energy- and carbon-aware orchestration in sustainable 6G systems.
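The decomposition described above (transmission energy plus computation energy, scaled by infrastructure overhead) can be sketched as a minimal cross-layer model. The functional forms and all parameter values below are assumptions for illustration; the thesis would replace them with calibrated, context-aware models.

```python
# Minimal end-to-end service energy model, assuming linear energy-per-bit and
# energy-per-cycle abstractions and a PUE-style overhead factor. All constants
# are hypothetical.

def transmission_energy_j(bits: float, energy_per_bit_j: float) -> float:
    # energy_per_bit abstracts network conditions and topology.
    return bits * energy_per_bit_j

def computation_energy_j(cpu_cycles: float, joules_per_cycle: float) -> float:
    # joules_per_cycle abstracts workload characteristics and hardware efficiency.
    return cpu_cycles * joules_per_cycle

def service_energy_j(bits: float, energy_per_bit_j: float,
                     cpu_cycles: float, joules_per_cycle: float,
                     overhead_factor: float) -> float:
    """Total service energy: IT energy scaled by cooling/power overhead (PUE-like)."""
    it_energy = (transmission_energy_j(bits, energy_per_bit_j)
                 + computation_energy_j(cpu_cycles, joules_per_cycle))
    return it_energy * overhead_factor

# Hypothetical task: 1 Mb transferred, 1e9 CPU cycles, overhead factor 1.5.
print(service_energy_j(1e6, 1e-7, 1e9, 1e-9, 1.5))  # ~1.65 J
```

A per-task model like this is fine-grained; the trade-off question raised above is whether aggregated service-level models would scale better for real-time orchestration.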
3) Access Networks and Carbon-Aware Networking
6G access networks will push performance into extreme regimes (sub-THz spectrum, ultra-dense deployments, AI-native control), where energy becomes a first-order constraint rather than an afterthought. The emergence of Software-Defined Networking (SDN), propelled by the introduction of OpenFlow in 2008 [6], opened the door to fine-grained and programmable control of network flows. Building on this paradigm, M. Al-Fares in [7] proposed Hedera in 2010, an adaptive flow scheduling system designed to mitigate congestion in fat-tree topologies by dynamically routing large flows. In the same year, [8] introduced ElasticTree, a pioneering approach to network energy management that leverages dynamic port standby, albeit with a focus on single-criterion optimization.
More recently, advances in carbon-aware networking have extended these ideas by integrating real-time carbon intensity signals into routing decisions, marking a significant step toward environmentally adaptive networks. In parallel, recent research on green orchestration has incorporated energy consumption models and carbon awareness into resource placement and scheduling strategies [9]. Approaches such as renewable-aware virtual network function placement and energy-efficient service chaining further highlight the growing potential of sustainability-driven orchestration [10,11], paving the way for more holistic and environmentally conscious network management.
Our objective in this thesis is to integrate access networks with energy-aware data center scheduling by designing allocation and orchestration mechanisms that jointly consider computation placement and network dynamics. In particular, we will develop solutions that shift tasks and workloads toward regions with abundant renewable energy, align traffic routing with data center load conditions, and exploit time-shifting for delay-tolerant services. At the same time, the framework will incorporate the joint optimization of fronthaul/backhaul usage and computation placement, ensuring that offloading decisions explicitly account for the energy cost of data transport across the access network. This includes leveraging traffic steering and load balancing at the RAN level to direct flows toward the most energy-efficient edge or cloud resources, enabling a coordinated, end-to-end energy-aware operation.
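The placement logic sketched above, weighing a remote site's grid carbon intensity against the energy cost of transporting data to it across the access network, can be illustrated with a toy decision rule. Site names and all figures are hypothetical.

```python
# Toy carbon-aware placement: pick the site minimizing total CO2 for executing
# a task, counting both execution energy (at the remote grid's carbon intensity)
# and the transport energy spent moving data there (at the local intensity).

def placement_carbon_g(task_energy_kwh: float, site_intensity_g_per_kwh: float,
                       transport_kwh: float, local_intensity_g_per_kwh: float) -> float:
    """Grams of CO2 for executing remotely plus moving the data there."""
    return (task_energy_kwh * site_intensity_g_per_kwh
            + transport_kwh * local_intensity_g_per_kwh)

def best_site(sites: dict, task_energy_kwh: float, local_intensity: float) -> str:
    # sites: {name: (carbon_intensity_g_per_kwh, transport_kwh)}
    return min(sites, key=lambda s: placement_carbon_g(
        task_energy_kwh, sites[s][0], sites[s][1], local_intensity))

sites = {
    "edge-local":  (350.0, 0.00),  # no transport cost, carbon-heavy local grid
    "cloud-hydro": (30.0, 0.02),   # renewable-rich region, more data transport
}
print(best_site(sites, task_energy_kwh=0.5, local_intensity=350.0))  # cloud-hydro
```

Here shipping the task to the renewable-rich region wins (22 g vs. 175 g CO2) despite the transport overhead; with a larger data volume or a cleaner local grid, the decision flips, which is exactly the coupling between computation placement and network dynamics the thesis targets.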
4) Energy Optimization through AI
A landmark study by [12] showed that deep reinforcement learning can reduce data center cooling energy consumption by up to 40%, demonstrating the potential of AI-driven control for improving infrastructure efficiency. Similarly, [13] estimates that AI could enable overall energy savings in data centers of 15% to 40%, depending on deployment conditions. However, AI itself incurs non-negligible energy costs for training and inference. A key research direction is therefore to quantify and optimize the trade-off between the energy savings achieved through AI-based control and the additional energy overhead required to operate such AI systems in future 6G edge-cloud environments. In addition, we aim to compare AI-based approaches with alternative optimization paradigms, including multi-objective optimization methods and game-theoretic frameworks such as coalition games, to assess their effectiveness in energy-efficient orchestration.
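The trade-off raised above (savings achieved by AI-based control versus the energy the AI system itself consumes) reduces to a simple net-balance calculation. The figures below are placeholders, not results from [12] or [13].

```python
# Toy net-savings accounting for AI-driven energy control, assuming the AI's
# overhead is amortized over the same period as the baseline consumption.

def net_savings_kwh(baseline_kwh: float, savings_fraction: float,
                    ai_training_kwh: float, ai_inference_kwh: float) -> float:
    """Energy saved by AI control minus the energy the AI system consumes."""
    return baseline_kwh * savings_fraction - (ai_training_kwh + ai_inference_kwh)

# Hypothetical: 10 MWh baseline, 30% savings, 500 kWh training, 200 kWh inference.
print(net_savings_kwh(10_000, 0.30, ai_training_kwh=500, ai_inference_kwh=200))  # 2300.0
```

AI-based control is only worthwhile when this balance is positive; quantifying both sides of it in realistic 6G edge-cloud conditions is one of the research directions stated above.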
Research objectives:
Current architectures treat communication and computation separately: communication is optimized in the Radio Access Network, and computation is optimized in cloud/edge systems. This separation leads to suboptimal global energy efficiency, especially in energy-consuming AI-driven services (XR, autonomous systems) and data-intensive applications requiring real-time processing.
Therefore, we propose in this thesis to adopt a holistic approach that considers the full device-edge-cloud continuum and develop a global architecture and optimization framework that minimizes total energy consumption across device-edge-cloud systems.
Methodology
a) Architecture and system modeling
We will develop the energy-centric architecture integrating edge, RAN, and cloud systems. The data center will be modeled as a dynamic, multi-layered system: an application layer (VMs, containers, microservices), an infrastructure layer (servers, racks, cooling), and a network layer (topologies, optical links). Decision variables will include workload placement, routing path selection, selective/dynamic standby of equipment, and time scheduling based on the type of renewable energy available.
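The decision variables listed above could be encoded for an optimizer along the following lines. The structure and all identifiers are illustrative assumptions, not a committed system design.

```python
# Hypothetical encoding of the orchestration decision variables named above:
# workload placement, routing, selective standby, and renewable-aware scheduling.

from dataclasses import dataclass

@dataclass
class OrchestrationDecision:
    placement: dict[str, str]        # workload id -> server id
    routes: dict[str, list[str]]     # flow id -> ordered list of link ids
    standby: set[str]                # equipment ids put into selective standby
    schedule: dict[str, int]         # workload id -> start slot (renewable-aligned)

decision = OrchestrationDecision(
    placement={"vm-1": "rack2-srv5"},
    routes={"flow-1": ["opt-link-3", "opt-link-7"]},
    standby={"rack4-srv1"},
    schedule={"vm-1": 14},  # e.g., a time slot with high solar availability
)
print(decision.placement["vm-1"])  # rack2-srv5
```

An optimizer would search over such decision objects, scoring each against the multi-layer energy model.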
b) Algorithm design and multi-objective optimization
We aim in this thesis to design advanced optimization algorithms for edge-cloud systems that jointly address multiple conflicting objectives such as energy consumption, latency, and operational cost. In particular, we will explore multi-objective optimization techniques based on Pareto optimality to characterize trade-offs and identify efficient operating points across heterogeneous system configurations. In parallel, we will investigate AI-driven approaches, especially reinforcement learning, to enable adaptive and real-time decision-making under dynamic workloads and uncertain network conditions.
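The Pareto-optimality idea above can be made concrete with a small sketch: among candidate operating points scored on (energy, latency, cost), keep only the non-dominated ones. The candidate values are illustrative.

```python
# Minimal Pareto-front extraction for minimization objectives, as used to
# characterize energy/latency/cost trade-offs. All candidate points are toy data.

def dominates(a: tuple, b: tuple) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: list[tuple]) -> list[tuple]:
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (energy_kwh, latency_ms, cost_eur) for four hypothetical configurations
candidates = [(10, 5, 3), (8, 7, 3), (12, 4, 2), (11, 6, 4)]
print(pareto_front(candidates))  # (11, 6, 4) is dominated by (10, 5, 3)
```

The front exposes the efficient operating points; a separate policy (or an RL agent, as proposed above) then selects among them according to current workload and grid conditions.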
c) Simulation
We will evaluate the proposed architecture and algorithms using simulation tools such as CloudSim Plus and SimGrid, considering several categories of metrics: application performance, network, and environmental.

Candidate profile

Desired profile: a student with a master's-level background in computer networks. Practical skills related to cloud environments and architectures will be appreciated.
