Alberto
del Río Ponce

Originally a Telecommunications Engineer, now Reinforcing (my) Learning through research in the field of Artificial Intelligence. P.S. I'm not a robot. Really? PhD in progress.

Alberto del Río Ponce

I am a Technical Researcher based in Madrid. I graduated in Mobile and Space Communications Engineering from the Universidad Carlos III de Madrid (UC3M), and I obtained a Master's Degree in Signal Processing and Machine Learning for Big Data from the Universidad Politécnica de Madrid (UPM).

My Bachelor's Thesis analyzed IPv6 multicast traffic in mobility, while my Master's Thesis was based on Deep Reinforcement Learning optimization algorithms to guarantee QoE for multimedia playback in a 5G environment.

Professionally, during 2018 I worked at Deutsche Telekom (Berlin) on the specification of the 5G telecommunications standard, Release 16, specifically on the development of a system framework concept focused on cloud services. Currently, I am working within the Grupo de Aplicación de Telecomunicaciones Visuales (GATV) of the Universidad Politécnica de Madrid (UPM) on projects dedicated to 5G communications networks, health services, and energy environments, with use cases focused on Artificial Intelligence.

This latest professional experience drives my PhD Thesis, which researches the intelligent optimization of virtualized services in next-generation networks.

Education
  • PhD in Communications Technologies and Systems

    PhD thesis: Pending

  • M.S. in Signal Processing and Machine Learning for Big Data

    Master's thesis: 5G Media QoE Optimization based on Reinforcement Learning algorithms (A2C)

  • B.S. in Mobile and Space Communications Engineering, Telecommunication Engineer

    Bachelor's thesis: Analysis of IPv6 Multicast Traffic in a Network Emulator

Portfolio

Featured Projects

  • 5G Multimedia QoE Optimization

    5G Telecommunications networks have transformed the current industry landscape in terms of service and application possibilities. Their improvements over the previous generation enable use cases that were previously inefficient at production level, such as the audiovisual broadcasting of live content.

    Two of the most powerful and most studied fields of recent years are Deep Learning and Reinforcement Learning. In general terms, the first simulates neural networks to achieve greater efficiency in training Machine Learning models, while the second seeks to predict which actions an agent should take to maximize the reward received. The combination of both areas yields an algorithm called Advantage Actor-Critic (A2C).

    The A2C algorithm is developed during this project and trained on a live television signal. The system offers the current bitrate parameters to the model, and through iterative training the model learns to configure the optimal settings for the state of both the transmission and the network, based on the rewards received. This method combines the fields of Deep Learning and Reinforcement Learning: first, because it uses neural networks for the creation of both the Actor and the Critic; and second, because the algorithm itself learns by maximizing the reward received.
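
    As a rough illustration of the actor-critic idea described above (not the project's actual code, which uses neural networks on a live signal), the following sketch trains a tabular advantage actor-critic on a hypothetical two-state bitrate environment; the states, actions, and reward values are invented for the example:

```python
import math
import random

random.seed(0)

# Hypothetical environment: states are network conditions, actions are
# bitrate profiles. Reward favors high bitrate on a good network and
# low bitrate on a bad one.
STATES = ["good", "bad"]
ACTIONS = ["low_bitrate", "high_bitrate"]

def step(state, action):
    if state == "good":
        reward = 1.0 if action == "high_bitrate" else 0.2
    else:
        reward = 1.0 if action == "low_bitrate" else -0.5
    next_state = random.choice(STATES)  # network conditions drift randomly
    return reward, next_state

# Tabular actor (policy logits) and critic (state values).
logits = {s: [0.0] * len(ACTIONS) for s in STATES}
values = {s: 0.0 for s in STATES}
gamma, lr_actor, lr_critic = 0.9, 0.1, 0.1

def policy(s):
    exps = [math.exp(l) for l in logits[s]]
    total = sum(exps)
    return [e / total for e in exps]

state = "good"
for _ in range(5000):
    probs = policy(state)
    a = random.choices(range(len(ACTIONS)), weights=probs)[0]
    reward, nxt = step(state, ACTIONS[a])
    # Advantage as the TD error, using the critic's value estimates.
    advantage = reward + gamma * values[nxt] - values[state]
    values[state] += lr_critic * advantage               # critic update
    for i in range(len(ACTIONS)):                        # actor update
        grad = (1.0 if i == a else 0.0) - probs[i]       # d log pi / d logit
        logits[state][i] += lr_actor * advantage * grad
    state = nxt

print(policy("good"))  # should favor "high_bitrate"
print(policy("bad"))   # should favor "low_bitrate"
```

    The same update rule carries over to the neural-network case: the Actor's logits and the Critic's value table are simply replaced by two networks trained on the same advantage signal.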

  • Song’s Genre prediction based on Lyrics

    This is a personal project to predict the genre of specific songs using a model previously trained on their lyrics.

    The code uses two datasets: the larger one to train the model ('lyrics.csv'), and a second, smaller one ('songdata.csv') to predict genres outside the training set. To facilitate execution, much of the code has been commented out, as in the case of the data-cleaning part, with the data available in the respective CSV files that are loaded throughout the code. You can download the datasets at the following URL:

    https://drive.google.com/file/d/1E4NSo087MvP6DCzVu22BR7Bposc0inLy/view?usp=sharing (Download and extract it in a folder called datasets)

    The project consists of two files, differing in the number of genres included in training. The first of them ('genre_all.py') uses the complete set of nine genres, while the other ('genre.py') uses five, because some genres are very similar to each other, which decreased prediction accuracy.

    The model outputs the probability of a song belonging to each genre, and in order to visualize the behaviour of the data, we developed a correlation matrix and a t-SNE visualization to test the results. Finally, a playlist of the same predicted genre is offered.
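
    As a toy illustration of the idea (not the project's actual pipeline, which trains on the full 'lyrics.csv' dataset), the sketch below classifies lyrics by cosine similarity against a per-genre word-count profile and returns per-genre probabilities; the mini training set is invented:

```python
import math
from collections import Counter

# Hypothetical mini training set standing in for 'lyrics.csv'.
TRAIN = [
    ("rock",    "guitar riff loud drums rebel night highway"),
    ("rock",    "amplifier stage crowd guitar solo thunder"),
    ("hip-hop", "beat flow rhyme mic street hustle"),
    ("hip-hop", "rap verse beat chain city flow"),
    ("country", "truck road whiskey hometown boots river"),
    ("country", "farm sunset guitar porch whiskey dirt"),
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One aggregate word-count profile (centroid) per genre.
profiles = {}
for genre, lyrics in TRAIN:
    profiles.setdefault(genre, Counter()).update(vectorize(lyrics))

def predict(lyrics):
    """Return per-genre probabilities via softmax over cosine similarities."""
    vec = vectorize(lyrics)
    sims = {g: cosine(vec, p) for g, p in profiles.items()}
    exps = {g: math.exp(5 * s) for g, s in sims.items()}  # 5 = sharpening factor
    total = sum(exps.values())
    return {g: e / total for g, e in exps.items()}

probs = predict("loud guitar solo on the highway at night")
print(max(probs, key=probs.get))  # rock
```

    The real project replaces the word counts with a proper feature extraction over the full lyrics corpus, but the probability-per-genre output has the same shape.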

  • Recommender System for Books

    This project creates a recommender system for books. It is based on Collaborative Filtering.

    The dataset provided is from Goodreads, a "social cataloging" website that lets users search its database of books, annotations, and reviews. It can be downloaded from here: https://github.com/zygmuntz/goodbooks-10k
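
    A minimal sketch of the collaborative-filtering idea, assuming an item-based approach with cosine similarity; the ratings and book titles are invented stand-ins for the Goodreads data:

```python
import math
from collections import defaultdict

# Hypothetical mini ratings matrix: user -> {book: rating}.
RATINGS = {
    "alice": {"dune": 5, "hyperion": 4, "emma": 1},
    "bob":   {"dune": 4, "hyperion": 5, "foundation": 4},
    "carol": {"emma": 5, "persuasion": 4, "dune": 2},
    "dave":  {"hyperion": 4, "foundation": 5},
}

# Build per-book rating vectors (book -> {user: rating}).
by_book = defaultdict(dict)
for user, ratings in RATINGS.items():
    for book, r in ratings.items():
        by_book[book][user] = r

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, k=2):
    """Score unseen books by similarity-weighted ratings of seen books."""
    seen = RATINGS[user]
    scores = {}
    for book in by_book:
        if book in seen:
            continue
        num = den = 0.0
        for other, rating in seen.items():
            sim = cosine(by_book[book], by_book[other])
            num += sim * rating
            den += sim
        if den:
            scores[book] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # science-fiction titles should rank first
```

    On the real 10k-book dataset the same logic applies, only with sparse-matrix operations instead of nested dictionaries.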

Experience

Professional Background

  • January 2021 – Ongoing


    Artificial Intelligence Researcher

    Technical University of Madrid

    - Research and implementation of automation services using Machine Learning, from basic models to complex techniques such as Deep Learning and Reinforcement Learning.

    - Collaboration in various national and European projects in the fields of telecommunications, media, energy, and health, using Artificial Intelligence.

  • July 2018 – December 2018


    Functional Researcher

    Telekom Innovation Laboratories

    - Project on the specification, standardization, and implementation of a 5G Service-based Architecture (SBA).

    - Development of new ideas and specifications for the network services and the overall cloud-native framework/system concept.

    - Participation in the discussion with the standardization delegates and the preparation of contributions for standardization bodies, mainly 3GPP SA2, Release 16.

  • - End-to-End Test Management.

    - Design of test plans following Agile methodologies (Scrum), using JIRA as a tool for the management and monitoring of errors.

    - Quality assurance in IoT systems (backend testing, test automation, and device testing).

  • May 2015 – April 2016


    Deployment Manager

    Nokia

    - Deployment and development of implementation plans for specific sub-projects.

    - Management and control of the implementation of networking activities.

    - Establishment and daily reporting of project KPIs.

Research

Scientific publications

  • December 2022

    A Multi-Port Hardware Energy Meter System for Data Centers and Server Farms Monitoring

    Nowadays, the rationalization of electrical energy consumption is a serious concern worldwide. Energy consumption reduction and energy efficiency appear to be the two paths toward this target. To achieve this goal, many different techniques are promoted; among them, the integration of (artificial) intelligence in the energy workflow is gaining importance. All these approaches have a common need: data. Data that should be collected and provided in a reliable, accurate, secure, and efficient way. For this purpose, sensing technologies that enable ubiquitous data acquisition and new communication infrastructures that ensure low latency and high density are key. This article presents a sensing solution devoted to the precise gathering of energy parameters such as voltage, current, active power, and power factor for server farms and data centers, computing infrastructures that are growing significantly to meet the demand for network applications.

    The designed system enables disaggregated acquisition of energy data from a large number of devices and characterization of their consumption behavior, both in real time. In this work, the creation of a complete multiport power meter system is detailed. The study reports all the steps needed to create the prototype, from the analysis of electronic components, the selection of sensors, and the design of the Printed Circuit Board (PCB), to the configuration and calibration of the hardware and embedded system, and the implementation of the software layer. The power meter application is geared toward data centers and server farms and has been tested by connecting it to a laboratory server rack, although its design can be easily adapted to other scenarios where gathering energy consumption information is needed.

    The novelty of the system lies in its high scalability, built upon two factors. Firstly, the one-on-one approach followed to acquire the data from each power source, even if sources belong to the same physical equipment, so the system can correlate the execution of processes with the energy data extremely well. Thus, the potential of the data to develop tailored solutions rises. Secondly, the use of temporal multiplexing to keep real-time data delivery even for a very high number of sources. All this ensures compatibility with standard IoT networks and applications, as a data markup language is used (enabling database storage and computing-system processing) and the interconnection is done via well-known protocols.
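
    The temporal-multiplexing idea can be sketched as follows; this is a hypothetical stand-in for the real firmware (the function and field names are invented), showing a single acquisition path visiting each port in turn and emitting one markup (JSON) message per reading:

```python
import itertools
import json

# Hypothetical stand-in for reading one hardware channel of the meter.
def read_channel(port):
    # Real code would talk to the metering IC; here we return fixed values.
    return {"voltage_v": 230.1, "current_a": 0.42 + port / 100,
            "active_power_w": 96.6, "power_factor": 0.99}

PORTS = range(8)  # e.g. eight monitored power feeds in a rack

def multiplexed_readings(cycles=1):
    """Time-multiplex the ports: visit each one in turn per cycle and
    emit one JSON message per port, keeping a single acquisition path."""
    messages = []
    for _, port in zip(range(cycles * len(PORTS)), itertools.cycle(PORTS)):
        sample = read_channel(port)
        sample["port"] = port
        messages.append(json.dumps(sample))  # markup for IoT transport
    return messages

msgs = multiplexed_readings()
print(len(msgs))  # one message per port per cycle
print(msgs[0])
```

    Because each reading is tagged with its port, downstream consumers can still attribute consumption to an individual source even though everything flows through one channel.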

  • October 2022

    A Deep Reinforcement Learning Quality Optimization Framework for Multimedia Streaming over 5G Networks

    Media applications are amongst the most demanding services. They require high amounts of network capacity as well as computational resources for synchronous high-quality audio-visual streaming. Recent technological advances in the domain of new-generation networks, specifically network virtualization and Multi-access Edge Computing (MEC), have unlocked the potential of the media industry. They enable high-quality media services through dynamic and efficient resource allocation, taking advantage of the flexibility of the layered architecture offered by 5G.

    The presented work demonstrates the potential application of Artificial Intelligence (AI) capabilities for multimedia services deployment. The goal was to optimize the Quality of Experience (QoE) of real-time video using dynamic predictions by means of Deep Reinforcement Learning (DRL) algorithms. Specifically, it contains the initial design and test of a self-optimized cloud streaming proof-of-concept. The environment is implemented through a virtualized end-to-end architecture for multimedia transmission, capable of adapting the streaming bitrate based on a set of actions. A prediction algorithm is trained through different state conditions (QoE, bitrate, encoding quality, and RAM usage) that serve the optimizer as the encoded values of the environment for action prediction. Optimization is applied by selecting the most suitable option from a set of actions. These consist of a collection of predefined network profiles with associated bitrates, which are validated by a list of reward functions. The optimizer is built employing one of the most prominent algorithms in the DRL family, which uses two Neural Networks (NN) and is named Advantage Actor–Critic (A2C).

    As a result of its application, the ratio of good-quality video segments increased from 65% to 90%. Furthermore, the number of image artifacts is reduced compared to standard sessions without intelligent optimization. From these achievements, the global QoE obtained is clearly better. These results, based on a simulated scenario, increase the interest in further research on the potential of applying intelligence to enhance the provisioning of media services under real conditions.
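
    A minimal sketch of the action space and reward shaping described above; the profiles, the crude MOS model, and the reward weights are invented for illustration and do not reproduce the paper's actual functions:

```python
# Hypothetical network profiles (the action set) and a simple reward
# that balances perceived quality (MOS-like, 1..5) against exceeding
# the available network capacity.
PROFILES = [
    {"name": "profile_low",  "bitrate_mbps": 2},
    {"name": "profile_mid",  "bitrate_mbps": 6},
    {"name": "profile_high", "bitrate_mbps": 12},
]

def reward(mos, bitrate_mbps, capacity_mbps):
    """Reward good quality, penalize overshooting the capacity."""
    overshoot = max(0.0, bitrate_mbps - capacity_mbps)
    return (mos - 3.0) - 0.5 * overshoot  # mos >= 3 counts as acceptable

def best_profile(capacity_mbps):
    """Greedy stand-in for the trained agent: assume MOS grows with the
    usable bitrate and pick the profile maximizing the reward."""
    def expected_mos(p):
        usable = min(p["bitrate_mbps"], capacity_mbps)
        return 1.0 + 4.0 * usable / 12.0  # crude MOS model, 1..5
    return max(PROFILES, key=lambda p: reward(expected_mos(p),
                                              p["bitrate_mbps"],
                                              capacity_mbps))

print(best_profile(5)["name"])   # congested link -> "profile_mid"
print(best_profile(20)["name"])  # ample capacity -> "profile_high"
```

    The trained A2C agent replaces the greedy `best_profile` stand-in: it learns the mapping from observed state to profile directly from the reward signal rather than from an assumed MOS model.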

  • August 2022

    A Topological and performance metrics approach to the decision making of Content Delivery Networks

    Current networks are designed as multi-service: multiple applications with different requirements and constraints are supported on top of a common infrastructure. In this situation, networks and applications mostly work in a decoupled manner, in the sense that networks are not aware of the applications' properties, nor do the applications have sufficient information about the actual network circumstances. This implies suboptimal solutions at both the application and the network level in order to guarantee some service levels (e.g., by over-dimensioning network and computing resources) or to exercise some adaptation (e.g., by inferring network status [1]).

    Such an approach is certainly insufficient and inefficient, especially when facing future applications with more stringent requirements (high bandwidth, low latency) for services such as gaming, augmented reality, metaverse, etc. It is then necessary to define new mechanisms that could provide an efficient channel of information between applications and networks to expose capabilities that could permit optimizing service delivery.

    One of the foreseen mechanisms to assist in that direction is the Application Layer Traffic Optimization (ALTO) [2]. ALTO provides topological information of the network with associated metrics which can allow the application to have an up-to-date view of network status, facilitating informed decisions from the application logic at the time of delivering traffic.

    This paper exemplifies how topological information complemented with performance metrics can be leveraged as a highly valuable source of information for optimization purposes when integrated with the logic of the application. Section 2 explains the initial concepts relative to network information. Section 3 presents the test case under which this article is developed. Section 4 shows the setup where we install all the required hardware. Section 5 offers initial insights into the tools used to generate the different tests. Section 6 comprises the different graphs used to validate the data. Finally, Section 7 presents the test case, including an optimizer to demonstrate the initial assumptions.
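
    The ALTO-assisted decision can be sketched as follows; the response shape loosely follows the RFC 7285 cost-map format, while the PID names and cost values are invented for the example:

```python
import json

# Hypothetical ALTO cost-map response: routing costs from the client's
# PID to the PID of each candidate content source.
ALTO_RESPONSE = json.loads("""
{
  "meta": {"cost-type": {"cost-mode": "numerical",
                         "cost-metric": "routingcost"}},
  "cost-map": {
    "pid-client": {"pid-cdn-madrid": 1, "pid-cdn-paris": 5,
                   "pid-cdn-athens": 10}
  }
}
""")

def pick_source(alto_response, client_pid, candidates):
    """Let the application choose the candidate with the lowest cost,
    instead of guessing the network status on its own."""
    costs = alto_response["cost-map"][client_pid]
    return min(candidates, key=lambda pid: costs[pid])

best = pick_source(ALTO_RESPONSE, "pid-client",
                   ["pid-cdn-madrid", "pid-cdn-paris", "pid-cdn-athens"])
print(best)  # pid-cdn-madrid
```

    This is exactly the exposure channel the paper argues for: the network publishes its view (topology plus metrics), and the application logic consumes it when deciding where to deliver traffic from.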

  • January 2022

    Design, Implementation, and Validation of a Multi-Site Gaming Streaming Service Over a 5G-Enabled Platform

    Multimedia applications are amongst the most demanding services, requiring high amounts of network capacity and computational resources for synchronous high-quality audio-visual streaming. Gaming streaming is one of the most demanding of these media applications. Recent technological advances in the 5G domain, precisely Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC), unlock the media industry's potential by offering high-quality media services through dynamic and efficient resource allocation with low latency.

    This work presents a multi-site gaming streaming use case implemented over an end-to-end 5G-enabled platform provided by the EU H2020 5G-PPP 5G EVE project. Furthermore, we present the executed use case experiment scenarios to validate the use case performance following a set of defined Quality of Experience (QoE)-related Key Performance Indicators (KPIs). In particular, this paper discusses the design workflow and orchestration of the multi-site gaming streaming use case across two 5G EVE sites (i.e., Spain and Greece), providing a detailed description of the network function applications and resources utilized for the use case.

    Subsequently, leveraging the 5G EVE monitoring platform, this paper elaborates on the executed experiment scenarios that provide the defined KPI metrics data. Finally, this paper presents, discusses, and analyzes the obtained KPI metrics data results and provides recommendations on future works for these kinds of use cases.

  • January 2020

    5G Multimedia QoE Optimization based on Deep Reinforcement Learning algorithms (A2C)

    Many real-world systems require a complex abstraction of the long-term consequences of a specific configuration, as well as of the actions taken on it. Changing the configuration of these systems implies a corresponding response which, properly encoded, can be treated as a Reinforcement Learning problem.

    Two of the most powerful fields in recent years are Deep Learning and Reinforcement Learning. In general, the first is responsible for simulating Neural Networks to achieve greater efficiency in training models, while the second seeks to predict what actions an agent should take, maximizing the reward received. One of the algorithms that combines both disciplines is Advantage Actor-Critic (A2C).

    Due to the innovations produced in the telecommunications sector with the standardization of the new 5G networks, new business opportunities are appearing, one of the most outstanding being the control of multimedia content over a 5G network. Thanks to the innovations of these networks, it is possible to send content over long distances with low latency, with one of the challenges being to guarantee Quality of Experience (QoE) through automated control of intermediate processes.

    This research project focuses on the development of the A2C algorithm using Deep Reinforcement Learning techniques for the automation of the bitrate control of a multimedia transmission, focusing efforts on guaranteeing QoE. A set of components for streaming handling is recreated, obtaining various metrics to feed the encoded states of the model and using the actions that the model predicts in real time to configure the maximum bitrate to transmit. To assess the performance of the training, several reward functions are developed, the most important focusing on the Mean Opinion Score (MOS) evaluated by a quality probe acting at the viewing position of an end user.

    The evaluation of the developed models included various Test Cases in which we modified the reward functions to emphasize the importance of the available metrics, including inquiries into the challenge of real-time training. The thesis concluded with a discussion of potential directions for future research, as well as possible extensions in system optimizations.

  • February 2018

    Mobility study for IP Multicast traffic in a network emulator

    The proliferation of scenarios with massive numbers of mobile nodes in the network creates a research opportunity, given the need for solutions in the field of mobility. The aim of this project is thus to tackle this issue by carrying out a study of the multicast traffic exchanged by mobile nodes. Without using a specific solution, we extract information from a mobility solution (PMIP) to achieve this objective.

    Throughout the report, the various tools with which the project was developed are detailed, based on a network emulator called CORE and the protocol implementations that allow traffic to be transmitted. For the data transmission, it is necessary to study the unicast routing protocols. In this case, both Routing Information Protocol (RIP) and Open Shortest Path First (OSPF), and their versions adapted to IPv6 addresses, have been thoroughly studied. In order to obtain multicast traffic, a multicast routing protocol, Protocol Independent Multicast (PIM), is studied, which allows nodes to subscribe to multicast groups and receive traffic of this type. The routing protocols are provided by tools that implement them internally, such as Pimb, XORP, and MRD6.

    Finally, we check the functionality of multicast traffic reception in mobility, and the results obtained are poor. According to the tests, transmission delays in mobility reach up to one minute due to the sending frequency of MLD messages. We outline future studies and research, among them a possible improvement in MLD message transmission. Finally, we study possible paths to market expansion. However, we emphasize that this thesis is merely a tool for mobility and multicast traffic, with no short-term profit motivation.