In communication networks, accurately forecasting the performance of TCP flows is essential for providing high-quality services. Recent advances in machine learning have produced strategies for predicting TCP throughput with centralized machine learning. However, these approaches do not protect the privacy of Internet users and struggle to cope with large amounts of training data. Federated Learning (FL) is a decentralized machine learning paradigm, introduced in 2017, in which multiple learning clients collaboratively train the parameters of a global model. In previous work, we proposed the Federated Learning-based PERFormance predictor (FL-PERF) of TCP flows, which builds a global TCP throughput prediction model using FL with multiple learning clients in a privacy-preserving manner. Under various network conditions, it is important to clarify the relationship between the number of clients participating in federated learning, the number of learning times (the number of rounds in which model parameters are exchanged between the server and the clients), and the accuracy of TCP throughput prediction. In this paper, we experimentally investigate how network topology, the number of clients, and the number of learning times affect the prediction accuracy of the TCP throughput prediction model built by FL-PERF.
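The training loop described above (clients train locally, the server aggregates, and the cycle repeats for a chosen number of learning times) can be sketched as a minimal federated averaging loop. This is an illustrative toy, not FL-PERF itself: the 1-D least-squares model, learning rate, and data are stand-ins for the actual TCP throughput predictor, which is not specified here.

```python
# Minimal federated averaging sketch (illustrative; FL-PERF's actual model
# and hyperparameters are assumptions). One "learning time" corresponds to
# one round of parameter exchange between the server and the clients.
import random

def local_update(global_param, client_data, lr=0.1):
    """One client's local training: gradient descent on a toy
    1-D least-squares model y = w * x, using only that client's
    private data (only the parameter w is ever shared)."""
    w = global_param
    for x, y in client_data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(client_params):
    """Server-side aggregation: simple average of client parameters."""
    return sum(client_params) / len(client_params)

def run_fl(num_clients=5, num_rounds=20, seed=0):
    rng = random.Random(seed)
    true_w = 3.0
    # Each client holds its own local dataset; raw data never leaves it.
    datasets = [
        [(x, true_w * x) for x in (rng.uniform(0, 1) for _ in range(10))]
        for _ in range(num_clients)
    ]
    w = 0.0  # initial global parameter broadcast by the server
    for _ in range(num_rounds):  # number of learning times (rounds)
        client_ws = [local_update(w, data) for data in datasets]
        w = federated_average(client_ws)
    return w

print(run_fl())  # approaches the true weight 3.0 as rounds increase
```

Varying `num_clients` and `num_rounds` in this sketch mirrors the two factors the paper studies: how many clients participate and how many times parameters are exchanged before the global model is evaluated.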