The Labeled Network Stack promises to improve the user experience of network-interactive services while maintaining high resource utilization

Intelligent Computing

Network interaction has become ubiquitous in the information age, penetrating fields as diverse as cloud gaming, web search, and autonomous driving, advancing human progress and bringing convenience to society. However, the growing number of clients has also created problems that degrade the user experience. Online services may fail to respond to some users within the expected timeframe, a problem known as high tail latency, and bursty traffic at the server exacerbates the issue.

To solve this problem and improve performance, researchers must continually optimize network stacks. At the same time, the low-entropy cloud (i.e., low interference among workloads and low system jitter) is becoming a new trend, and servers based on the Labeled Network Stack (LNS) exemplify it, achieving orders-of-magnitude performance improvements over servers built on traditional network stacks. It is therefore essential to conduct a quantitative analysis of LNS to reveal both its benefits and its potential for further improvement.

Wenli Zhang, a researcher at the State Key Laboratory of Processors, Institute of Computing Technology, and the study's co-authors said: "Although prior experiments have demonstrated that LNS can support millions of clients with low tail latency, compared with mTCP, a typical user-space network stack in academia, and the Linux network stack, the mainstream network stack in industry, a thorough quantitative study answering the following two questions is lacking:

(i) Where do the low tail latency and the low entropy of LNS mainly come from, compared with mTCP and Linux network stack?

(ii) To what extent can LNS be further optimized?"

To answer these questions, the authors propose an analytical method based on queueing theory that simplifies the quantitative study of cloud-server tail latency. For the massive-client scenario, Zhang and co-authors establish models characterizing how processing speed changes across stages for an LNS-based server, an mTCP-based server, and a Linux-based server, using bursty traffic as an example. They then derive formulas for the tail latency of the three servers.
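The paper's models cover multi-stage (tandem) queues with stage-dependent processing speeds; as a minimal illustration of how queueing theory yields closed-form tail-latency expressions at all, the classic single-stage M/M/1 result can be sketched in Python. This is a textbook formula, not the study's model, and the arrival rate, service rate, and percentile values below are illustrative only:

```python
import math

def mm1_tail_latency(lam, mu, t):
    """Tail probability of the sojourn time T in a stable M/M/1 FCFS queue.

    Classic result: T is exponential with rate (mu - lam), so
    P(T > t) = exp(-(mu - lam) * t), valid for lam < mu.
    lam: arrival rate (requests/s), mu: service rate (requests/s).
    """
    assert lam < mu, "queue must be stable (lam < mu)"
    return math.exp(-(mu - lam) * t)

def mm1_latency_percentile(lam, mu, p):
    """Latency t such that a fraction p of requests finish within t,
    i.e. P(T > t) = 1 - p (p = 0.99 gives the p99 latency)."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return -math.log(1.0 - p) / (mu - lam)

# Illustrative numbers: 80 req/s arriving at a server handling 100 req/s.
p99 = mm1_latency_percentile(80.0, 100.0, 0.99)
print(f"p99 latency: {p99:.3f} s")  # exactly 1% of requests exceed this
```

The same tail probability drops exponentially as the service margin (mu - lam) grows, which is why relieving a bottleneck stage can improve tail latency by orders of magnitude while barely moving the mean.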

"Our models 1) reveal that two technologies in LNS, including the fulldatapath prioritized processing and the full-path zero-copy, are primary factors for high performance, with orders of magnitude improvement of tail latency as the latency entropy reduces maximally 5.5 × over the mTCP-based server, and 2) suggest the optimal number of worker threads querying a database, improving the concurrency of the LNSbased server 2.1 × –3.5 × ." Zhang said, "The analytical method can also apply to the modeling of other servers characterized as tandem stage queueing networks."

This work is supported in part by the National Key Research and Development Program of China (2016YFB1000200), and the Key Program of the National Natural Science Foundation of China (61532016).

Article Reference: Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen, "Queueing-Theoretic Performance Analysis of a Low-Entropy Labeled Network Stack", Intelligent Computing, vol. 2022, Article ID 9863054, 16 pages, 2022. https://doi.org/10.34133/2022/9863054
