CRG1: Machine Learning Approach to Slice Admission Control in Fifth Generation (5G) Wireless Network (CRG-WP2)
Fifth generation (5G) wireless communication has brought a paradigm shift in how cellular networks operate and how network resources are allocated. Industry players predict that networks will become increasingly dynamic and complex, and that the only way to overcome these challenges is to construct user-centric, automatic, and intelligent network systems. Automated and intelligent networking improves reliability and may also be used to optimize key requirements such as revenue. Network slicing, which involves virtualizing network resources and bundling them as functions, allows service providers to orchestrate these functions, set user-tailored price limits per bundle, and use this setup to perform slice access control, also known as admission control. However, the complexity of interrogating diverse slice requests, evaluating the network status, and optimizing the admission process so as to provide user-centric quality of service while improving revenue remains a central challenge. Further, the requirements to perform efficient slice scheduling, ensure fair admission, maintain resilience, and reduce anomalies exacerbate the slice admission control problem. The result is a mixture of non-linear, non-integer problems that have no common solution. In this thesis we strive to find a set of solutions that employ reinforcement learning to tackle these problems.
Aims and Objectives
This study aims to tackle the problem of slice admission control with a focus on revenue optimization, taking into account both user and network objectives. Specifically, we address the challenges of maintaining slice admission fairness, efficient slice scheduling, network resilience, and profitability within an end-to-end 5G ecosystem. In contrast to existing literature, which often restricts the investigation to slice admission using single-dimensional data, solely for QoS maintenance or inter-slice and intra-slice congestion control, this thesis takes a broader perspective and considers multiple objectives to optimize revenue generation in a comprehensive manner. By exploring these additional dimensions, we aim to provide a more holistic and effective approach to slice admission control in 5G networks. The approach chosen in this investigation relies on formulating non-standard network slices, deviating from the known eMBB, uRLLC and mMTC slices, while considering spatiotemporal user and network parameters.
Anticipated Outcomes:
- To perform a comprehensive complexity analysis of slice admission control optimization algorithms as a motivation for a machine learning approach to slice admission control.
- To propose and formulate an analytical expression for slice scheduling and analyze its performance, considering resource and cost prediction over a selected period, with a Deep Q-Learning scheduler for a multi-queue, multi-server system.
- To formulate an expression for slice admission control that implements fairness through slice auctioning, and to evaluate its performance using Q-learning.
- To develop and analyze slice admission control for a resilient 5G network that improves network utility using a novel approach, known as the sequential twin-actor critic (STAC), in a multidimensional state space while efficiently adjusting throughput, computation and memory resources.
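As a minimal illustration of the reinforcement learning direction outlined above, the sketch below trains a tabular Q-learning agent to admit or reject slice requests against a small capacity pool. All dynamics, capacities, and revenue values are toy assumptions for exposition, not parameters from the thesis:

```python
import random

# Toy slice admission: state = free capacity units (0..CAP), action = 0 reject / 1 admit.
# Rewards (revenue) and slice sizes are illustrative assumptions, not thesis values.
CAP = 5
SLICE_SIZE = 1
REWARD_ADMIT = 1.0      # revenue earned per admitted slice
PENALTY_FULL = -2.0     # admitting with no free capacity violates QoS

def step(state, action):
    """Return (next_state, reward) under toy episode dynamics."""
    if action == 1:                      # admit
        if state >= SLICE_SIZE:
            return state - SLICE_SIZE, REWARD_ADMIT
        return state, PENALTY_FULL
    # reject: capacity may free up as an existing slice departs
    return min(CAP, state + 1), 0.0

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(CAP + 1)]
    for _ in range(episodes):
        s = CAP
        for _ in range(20):              # bounded episode length
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # Standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# Learned policy: admit while capacity remains, reject when exhausted.
assert Q[CAP][1] > Q[CAP][0]
assert Q[0][0] > Q[0][1]
```

The same state-action-reward structure extends to the multi-queue scheduling, auction-based fairness, and resilience objectives above, with correspondingly richer state and action spaces.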
CRG2: Machine Learning-empowered Resource Management in 5G Network Slicing (CRG-WP1)
Research proposal and description:
5G networks are poised to revolutionize connectivity by offering capabilities such as ultra-reliable low latencies, massive machine-type communication, and enhanced mobile broadband connectivity. These capabilities cannot be achieved by a single homogeneous network; instead, they require multiple dedicated networks. However, creating a separate network infrastructure for each capability is economically unfeasible.
To address this challenge, 5G network slicing has been proposed as a solution. Network slicing allows a common physical network infrastructure to be divided into multiple, logical, and isolated networks, known as network slices. The creation of these network slices relies on the availability of networking resources (e.g., link bandwidth, optical wavelengths) and computing resources (e.g., memory, CPU, and storage). To accommodate demands for new network slices, infrastructure providers must ensure sufficient resources are available. Effective resource management is crucial in network slicing. This entails developing an automated operations and management platform to handle dynamic resource allocation, network monitoring, anomaly detection, resource adjustment, and threat detection.
Automating these functions is a complex task due to the increasingly intricate configuration of network slices, exacerbated by the dynamic nature and high volume of new connected devices and services. The performance measurement data generated by a sliced network is vast and complex, making manual processing impractical. Machine learning (ML) is essential for automating network operations, control, and management.
Aims and Objectives
The research aims to demonstrate the use of machine learning techniques for autonomous allocation and adjustment of computing and networking resources for network slices based on quality of service (QoS) requirements and time-varying network workloads. The objectives include:
- Dynamic Virtual Resource Allocation: Allocating virtual resources for network slices based on active users or service demand.
- Resource Adjustment: Adjusting resources based on current utilization and its impact on network performance.
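The two objectives above can be sketched in a few lines: an initial sizing rule driven by the active user count, plus a utilization-triggered adjustment loop. The provisioning ratio and thresholds below are illustrative assumptions, not values from the study:

```python
# Toy auto-scaler for a slice's virtual CPUs: allocate by demand, then adjust by
# measured utilization. Thresholds and ratios are illustrative assumptions.

MIN_VCPUS, MAX_VCPUS = 1, 16
USERS_PER_VCPU = 50          # assumed provisioning ratio

def initial_allocation(active_users):
    """Dynamic allocation: size the slice from the current user count."""
    need = -(-active_users // USERS_PER_VCPU)   # ceiling division
    return max(MIN_VCPUS, min(MAX_VCPUS, need))

def adjust(vcpus, utilization, high=0.8, low=0.3):
    """Scale out when utilization is high, scale in when it is low."""
    if utilization > high and vcpus < MAX_VCPUS:
        return vcpus + 1
    if utilization < low and vcpus > MIN_VCPUS:
        return vcpus - 1
    return vcpus

assert initial_allocation(120) == 3   # 120 users / 50 per vCPU, rounded up
assert adjust(3, 0.9) == 4            # overloaded: scale out
assert adjust(3, 0.2) == 2            # underused: scale in
```

An ML-driven version would replace the fixed thresholds with a policy learned from QoS measurements and time-varying workloads.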
Anticipated Outcomes:
We intend to build our demonstration on top of a fully-fledged 3GPP-compliant 5G testbed, equipped with network function sharing capabilities. This study intends to integrate and extend one of the prominent orchestrators with overbooking intelligence, by implementing an optimal ML-based overbooking algorithm on top of the admission control engine.
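One possible shape for the overbooking logic is sketched below: the admission engine admits against predicted aggregate usage rather than nominal bookings, with a simple running-mean usage-ratio predictor standing in for the ML model the study would develop. All figures are illustrative:

```python
# Sketch of overbooking-aware admission: slices typically use less than they
# book, so requests are admitted against *predicted* usage. The running-mean
# ratio predictor is a placeholder for the envisioned ML model.

class OverbookingAdmission:
    def __init__(self, capacity):
        self.capacity = capacity
        self.booked = 0.0
        self.ratio_history = []      # observed used/booked ratios

    def predicted_ratio(self):
        if not self.ratio_history:
            return 1.0               # no data yet: assume full usage (no overbooking)
        return sum(self.ratio_history) / len(self.ratio_history)

    def observe(self, used, booked):
        self.ratio_history.append(used / booked)

    def admit(self, request):
        # Admit if predicted aggregate usage still fits physical capacity.
        if (self.booked + request) * self.predicted_ratio() <= self.capacity:
            self.booked += request
            return True
        return False

engine = OverbookingAdmission(capacity=100)
assert engine.admit(60) and engine.admit(40)   # fills nominal capacity
assert not engine.admit(10)                    # no history, so no overbooking
engine.observe(used=30, booked=60)             # slices use ~50% of booking
engine.observe(used=20, booked=40)
assert engine.admit(80)                        # predicted usage 0.5 * 180 fits
```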
CRG3: Design and Implementation of a dynamic and autonomous 5G Gi-LAN (CRG-WP1)
Research proposal and description:
The Gi-LAN interface in the 5G core network, as defined by the Third Generation Partnership Project (3GPP) standards, is crucial for managing network traffic to ensure the desired policy and service-level agreements. This interface steers classified subsets of network traffic to specific chained network functions for processing. To efficiently meet the increasing demands for diverse services, Gi-LAN deployments require optimization of infrastructure costs and enhanced network agility.
Despite ongoing efforts by 3GPP, the International Telecommunication Union (ITU), and other bodies to develop technical specifications and recommendations for the 5G network to address rising traffic demands and diversity, limited work has focused on the Gi-LAN (referred to as N6-LAN in 5G). This research aims to address this gap by focusing on the Gi-LAN in the envisioned 5G network.
The research begins by assessing the current state of Gi-LAN deployments with respect to infrastructure cost optimization and network agility, identifying the factors influencing these aspects. A comprehensive literature review examines related work on infrastructure cost optimization and network agility that could be integrated into Gi-LAN deployments.
The central hypothesis of this thesis posits that network agility for the Gi-LAN can be achieved through autonomous and dynamic service function chaining, while infrastructure cost optimization can be realized by employing network function virtualization (NFV). The focus is particularly on minimizing the performance costs associated with virtualization, including compute resource usage, latency, jitter, and packet loss.
Aims and Objectives
The primary aim of this project is to design and implement a Gi-LAN that enables the 5G core network to deliver various network services at appropriate policy and service-level requirements, suitable for current and emerging use cases, while maintaining acceptable performance levels. The objectives include:
- Infrastructure Cost Optimization: Utilize network function virtualization to reduce the costs associated with physical infrastructure. This involves focusing on the minimization of compute resource usage and addressing performance issues such as latency, jitter, and packet drops.
- Network Agility: Enhance the agility of the network by implementing autonomous and dynamic service function chaining. This allows the network to adapt to varying service demands and efficiently manage network traffic.
- Performance Evaluation: Develop methods to evaluate the performance of the virtualized Gi-LAN, ensuring it meets the desired service-level agreements and policies. This includes assessing the impact of virtualization on network performance and finding ways to mitigate any negative effects.
- Integration of Emerging Technologies: Explore the incorporation of emerging technologies and methodologies from related fields to improve the efficiency and effectiveness of Gi-LAN deployments. This could include advancements in machine learning, automation, and optimization techniques.
By achieving these objectives, the research aims to create a Gi-LAN that not only supports the 5G core network in delivering diverse services efficiently but also ensures that these services meet the stringent performance requirements of modern and future network applications. This will involve a detailed analysis of current technologies, the development of new methods for virtualization and function chaining, and thorough testing to validate the proposed solutions. The ultimate goal is to enhance the overall performance and cost-effectiveness of Gi-LAN deployments in 5G networks, paving the way for more robust and scalable network infrastructure.
Anticipated Outcomes:
- An enhanced Application Function (AF) that accepts external requests carrying information for the construction of influence data, which is used for traffic steering policy.
- An enhanced Session Management Function (SMF) with an added feature to install FAR(s) that perform NSH encapsulation, adding traffic steering policy identifier as the Service Path Identifier (SPI) in the NSH.
- A developed SFC MANO which receives requests to dynamically create service chains. The MANO has corresponding interfaces to the NWDAF and the DPI functions and makes autonomous decisions on SFC. The MANO has another interface towards the AF for sending information for the construction of influence data.
- A developed SFC data plane with a data path that supports NSH routing. The data path implements the routing on a virtualized platform with minimal virtualization cost. The data path performs routing in a way that minimises signalling between the data path and the target VNFs.
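For context, the NSH encapsulation above places the traffic steering policy identifier in the 24-bit SPI field of the RFC 8300 service path header. A minimal sketch of packing the 8-byte base and service path headers follows; the field values are illustrative:

```python
import struct

# Build the NSH (RFC 8300) base + service path headers, carrying the
# traffic-steering policy identifier as the 24-bit SPI, as the enhanced
# SMF's FARs would. Values here are illustrative.

def nsh_header(spi, si=255, length_words=2, md_type=2, next_proto=1):
    """Pack the 8-byte NSH base + service path headers.

    spi: 24-bit Service Path Identifier (here, the steering policy id)
    si:  8-bit Service Index, decremented at each service function
    """
    assert 0 <= spi < (1 << 24)
    # Base header: Ver=0 | O=0 | TTL=63 | Length | reserved | MD Type | Next Protocol
    base = (0 << 30) | (63 << 22) | (length_words << 16) | (md_type << 8) | next_proto
    service_path = (spi << 8) | si
    return struct.pack("!II", base, service_path)

hdr = nsh_header(spi=0x000042)
assert len(hdr) == 8
# SPI and SI can be recovered from the service path header
_, sp = struct.unpack("!II", hdr)
assert sp >> 8 == 0x42 and sp & 0xFF == 255
```

A real data path would prepend this header to the original packet and route on the (SPI, SI) pair at each service function forwarder.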
CRG4: Reinforcement Learning-Based Adaptive Haptic Feedback for 5G-Powered Telesurgery (CRG-WP1)
Research proposal and description:
Modern 5G networks offer enhanced capabilities such as ultra-low latency, ultra-high bandwidth, and improved reliability, making them a game-changer in diverse use cases. One such use case is telesurgery, which will allow skilled surgeons to perform specialized, robot-assisted surgeries remotely in areas with limited resources and medical expertise, such as rural and underdeveloped regions. However, the realization of these systems is limited by a lack of supporting technologies and unreliable communication network infrastructure. For instance, most existing telesurgery systems are limited to operation within one health facility and rely only on audio and visual feedback during an operation. Achieving high-fidelity, collaborative remote operation requires a reliable communication network and the involvement of human senses beyond audio and visual data. This can be done by adopting 5G networks and by integrating haptic feedback, i.e., the sense of touch conveyed through force, vibration, or motion, which provides users with a realistic and immersive experience in telepresence scenarios. Additionally, since haptic feedback is delay-stringent, adaptive algorithms must be incorporated to dynamically adjust haptic feedback perception based on fluctuating factors such as network conditions and the responsiveness of the end devices. Therefore, novel machine learning-based algorithms that automatically adapt haptic feedback in 5G-enabled telesurgery and respond to real-time network conditions without human judgement will enhance timely and accurate responses by surgeons.
Aims and objectives:
The aim of this project is to develop and assess RL-based adaptive haptic feedback algorithms for the 5G-powered telesurgery use case, optimizing user experience and adaptability under varying network conditions.
Anticipated Outcomes:
The outcome of this research lies in the application of reinforcement learning (RL) to develop adaptive haptic feedback algorithms that can optimize feedback in collaborative telesurgery systems based on real-time network conditions. By leveraging the capabilities of RL, the proposed algorithms can adapt more effectively to the uncertain and dynamic nature of network conditions, providing a consistent perception of force against the patient’s skin and improving the surgeon’s experience. This approach differs from traditional rule-based or model-based adaptation methods that are inconsistent and may lead to errors. RL-based methods allow the algorithm to learn and improve its performance through interaction with the environment, making it more robust and adaptable to different scenarios and user preferences.
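As a toy illustration of the RL-based adaptation envisioned here, the sketch below runs an epsilon-greedy bandit per coarse network state to pick a haptic update rate. The states, candidate rates, and reward model are invented for exposition:

```python
import random

# Epsilon-greedy bandit per network-latency state: pick a haptic update rate
# (Hz) that keeps feedback both rich and timely. All values are illustrative.

RATES = [100, 500, 1000]             # candidate haptic update rates (Hz)
STATES = ["low_lat", "high_lat"]     # coarse network condition

def reward(state, rate):
    """Higher rate -> richer feedback, but under high latency it causes jitter."""
    richness = {100: 0.3, 500: 0.7, 1000: 1.0}[rate]
    penalty = 0.8 if (state == "high_lat" and rate == 1000) else 0.0
    return richness - penalty

def train(steps=3000, eps=0.1, seed=1):
    rng = random.Random(seed)
    Q = {s: {r: 0.0 for r in RATES} for s in STATES}
    N = {s: {r: 0 for r in RATES} for s in STATES}
    for _ in range(steps):
        s = rng.choice(STATES)
        a = rng.choice(RATES) if rng.random() < eps else max(RATES, key=lambda r: Q[s][r])
        N[s][a] += 1
        Q[s][a] += (reward(s, a) - Q[s][a]) / N[s][a]   # incremental mean
    return Q

Q = train()
best = {s: max(RATES, key=lambda r: Q[s][r]) for s in STATES}
assert best["low_lat"] == 1000      # no latency penalty: highest fidelity wins
assert best["high_lat"] == 500      # back off when the network is congested
```

A full treatment would replace the two-state bandit with a sequential RL formulation over measured latency, jitter, and device responsiveness.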
CRG5: Intelligent Traffic Classification of Network Slices for Next Generation Mobile Networks (CRG-WP1)
Research proposal and description:
The concept of network slicing offers significant opportunities, but its successful implementation faces challenges, particularly in effectively classifying network traffic within each slice. Traffic classification is crucial for ensuring that allocated resources match the specific needs of the applications or services using a particular slice. This proposal aims to explore innovative approaches to enhance the accuracy and efficiency of traffic classification for network slicing. It will focus on dynamic resource allocation strategies that achieve cost-effective connectivity, moving away from expensive fixed Service Level Agreement (SLA) contracts. Additionally, since device terminals often experience battery outages, the proposal will also integrate traffic classification with energy harvesting techniques to improve selection in network slicing. Furthermore, it will analyze which optimization strategy combining traffic classification and energy harvesting can improve network performance, energy efficiency, and quality of service.
Aims and Objectives
- Investigate the existing methods of traffic classification within network slicing that deal with encrypted data, such as SPI algorithms, and compare their accuracy with tools like OpenDPI, nDPI, NFStream and L7-filter.
- Propose and evaluate novel techniques to enhance the accuracy and efficiency of traffic classification of encrypted data.
- Analyze the impact of improved traffic classification on the overall performance of network slicing and compare it with slicing solutions such as 5G OpenAirInterface (OAI) [41], free5GC [42] and Amarisoft.
- Provide insights and recommendations for the practical implementation of advanced traffic classification algorithms in real-world network slicing scenarios.
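As a point of reference for the classification task, the sketch below classifies encrypted flows from coarse flow statistics (mean packet size, mean inter-arrival time) with a nearest-centroid model; the sample flows and classes are toy assumptions standing in for the classifiers and datasets the study would evaluate:

```python
import math

# Illustrative encrypted-traffic classifier: payloads are opaque, so classify
# from flow statistics. Sample flows and classes are invented for exposition.

# (mean_pkt_bytes, mean_iat_ms) -> slice-relevant class
TRAINING = {
    "video_streaming": [(1200, 8), (1350, 10), (1280, 9)],
    "voip":            [(160, 20), (200, 22), (180, 19)],
    "iot_telemetry":   [(80, 900), (60, 1100), (100, 1000)],
}

def centroids(samples):
    """Average the feature vectors of each class."""
    out = {}
    for label, flows in samples.items():
        n = len(flows)
        out[label] = tuple(sum(f[i] for f in flows) / n for i in range(2))
    return out

def classify(flow, cents):
    """Assign the flow to the class with the nearest centroid."""
    return min(cents, key=lambda lb: math.dist(flow, cents[lb]))

CENTS = centroids(TRAINING)
assert classify((1300, 9), CENTS) == "video_streaming"
assert classify((170, 21), CENTS) == "voip"
assert classify((90, 950), CENTS) == "iot_telemetry"
```

Real evaluations would use richer feature sets (packet-size sequences, TLS handshake metadata) and learned models, compared against the DPI tools listed in the objectives.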