Reproducing Evaluation Results

Abstract

Performance Evaluation of Containers for Low-Latency Packet Processing in Virtualized Network Environments

Packet processing in current network scenarios faces complex challenges due to the increasing prevalence of requirements such as low latency, high reliability, and resource sharing. Virtualization is a potential solution to mitigate these challenges by enabling resource sharing and on-demand provisioning; however, ensuring high reliability and ultra-low latency remains a key challenge. Since bare-metal systems are often impractical because of high cost and space usage, and the overhead of virtual machines (VMs) is substantial, we evaluate containers as a potential lightweight solution for low-latency packet processing. Herein, we discuss the benefits and drawbacks and encourage the use of container environments for low-latency packet processing when the achievable degree of isolation of customer data is adequate and bare-metal systems are unaffordable. Our results demonstrate that containers achieve latency performance similar to bare-metal packet processing, with more predictable tail-latency behavior. Moreover, the choice of mainboard architecture, especially the cache layout, is equally vital, as containers are prone to higher latencies when caches are shared among more cores. We show that this has a higher impact on latencies within containers than on bare metal or in VMs, making the selection and subsequent optimization of hardware architectures a critical challenge. Furthermore, the results reveal that the virtualization overhead does not impact tail latencies.

This page contains all scripts, resources, and information needed to reproduce or further evaluate the data from the paper by Florian Wiedner, Max Helm, Alexander Daichendt, Jonas Andre, and Georg Carle, "Performance Evaluation of Containers for Low-Latency Packet Processing in Virtualized Network Environments", published in Performance Evaluation (PEVA).

All scripts and precompiled data are available in the following repository: https://github.com/wiednerf/containerized-low-latency/tree/main . The raw data from scenario 1 is available at https://doi.org/10.14459/2023mp1718824, and the raw data from scenario 2 at https://doi.org/10.14459/2024mp1736868.