Measuring broadband performance using M-Lab: Why averages tell a poor tale

Paper 2015

Authors: Xiaohong Deng, Jordan G Hamilton, Jason Thorne, Vijay Sivaraman

Broadband network performance is multi-faceted: it varies by ISP, by content source, by household connection, and by time of day. Daily or monthly averages, as published by content providers such as Netflix and Google, do not convey the full picture. In this paper we leverage M-Lab, the world's largest open measurement platform, to characterize broadband performance across Australian households. Our study delves into millions of data samples collected from 96,882 households over four months, and looks beyond averages to make several interesting observations: 1) there is considerable variation amongst households, in terms of both their broadband speeds and the variability of network performance within a day and across days, and this information is lost when data is averaged across houses; 2) the fluctuations (even for a specific house) are significant, and can exhibit unexpected patterns, such as wide variations from one day to the next and clusters of outliers at certain times of the day; 3) based on our experimental results, we conclude that neither aggregating by household nor aggregating by day or by hour is a sound measurement strategy. Moreover, our study offers new perspectives on broadband evaluation using M-Lab data, and can inspire future study into the underlying reasons for performance variation.
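
To make the aggregation pitfall concrete, the sketch below (illustrative, not from the paper) uses pandas on synthetic speed-test samples to contrast a single fleet-wide average with per-household medians, interquartile ranges, and hour-of-day medians. The column names (household_id, timestamp, download_mbps) and the data are assumptions for the example.

```python
# Illustrative sketch only: shows how a single average can hide per-household
# variation. Column names (household_id, timestamp, download_mbps) and the
# synthetic data are assumptions, not the paper's dataset or method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic speed-test samples: 3 households, hourly samples over a week.
samples = pd.DataFrame({
    "household_id": np.repeat(["A", "B", "C"], 7 * 24),
    "timestamp": np.tile(pd.date_range("2015-03-01", periods=7 * 24, freq="h"), 3),
    "download_mbps": np.concatenate([
        rng.normal(3, 0.3, 7 * 24),    # slow but stable household
        rng.normal(12, 4.0, 7 * 24),   # fast but highly variable household
        rng.normal(8, 1.0, 7 * 24),    # middle-of-the-road household
    ]).clip(min=0.1),
})

# A single average across all households and hours hides everything below.
print("overall mean (Mbps):", round(samples["download_mbps"].mean(), 2))

# Per-household distributions: medians and interquartile ranges reveal both
# the spread across households and each household's own variability.
per_house = samples.groupby("household_id")["download_mbps"].agg(
    median="median",
    p25=lambda s: s.quantile(0.25),
    p75=lambda s: s.quantile(0.75),
)
per_house["iqr"] = per_house["p75"] - per_house["p25"]
print(per_house.round(2))

# Time-of-day view for one household: hourly medians expose diurnal patterns
# that a daily average would smooth away.
house_b = samples[samples["household_id"] == "B"]
hourly = house_b.groupby(house_b["timestamp"].dt.hour)["download_mbps"].median()
print(hourly.round(2))
```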

Related Publications

M-Lab: User Initiated Internet Data for the Research Community

Paper 2022

Phillipa Gill, Christophe Diot, Lai Yi Ohlsen, Matt Mathis, and Stephen Soltesz

Measurement Lab (M-Lab) is an open, distributed server platform on which researchers have deployed measurement tools. Its mission is to measure the Internet, save the data, and make it universally accessible and useful. This paper serves as an update on the M-Lab platform 10+ years after its initial introduction to the research community. Here, we detail the current state of the M-Lab distributed platform, highlight existing measurements/data available on the platform, and describe opportunities for further engagement between the networking research community and the platform.
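
Because the platform's stated mission includes making the data universally accessible, a minimal access sketch follows. It assumes the google-cloud-bigquery Python client with authenticated GCP credentials; the view name measurement-lab.ndt.unified_downloads and the field paths follow M-Lab's published schema but should be verified against the current documentation before use.

```python
# Hedged sketch: pull a sample of M-Lab NDT download results from BigQuery.
# Assumes the google-cloud-bigquery client and authenticated GCP credentials.
# The view name (measurement-lab.ndt.unified_downloads) and field paths follow
# M-Lab's published schema at the time of writing; verify against current docs.
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # hypothetical project id

sql = """
SELECT
  date,
  client.Geo.CountryCode AS country,
  a.MeanThroughputMbps AS download_mbps
FROM `measurement-lab.ndt.unified_downloads`
WHERE date BETWEEN '2022-01-01' AND '2022-01-07'
  AND client.Geo.CountryCode = 'AU'
LIMIT 1000
"""

df = client.query(sql).to_dataframe()  # requires pandas (and db-dtypes) installed
print(df["download_mbps"].describe())
```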

The importance of contextualization of crowdsourced active speed test measurements

Paper 2022

Udit Paul, Jiamo Liu, Mengyang Gu, Arpit Gupta, Elizabeth Belding

Crowdsourced speed test measurements, such as those by Ookla® and Measurement Lab (M-Lab), offer a critical view of network access and performance from the user's perspective. However, we argue that taking these measurements at surface value is problematic. It is essential to contextualize these measurements to understand better what the attained upload and download speeds truly measure. To this end, we develop a novel Broadband Subscription Tier (BST) methodology that associates a speed test data point with a residential broadband subscription plan. Our evaluation of this methodology with the FCC's MBA dataset shows over 96% accuracy. We augment approximately 1.5M Ookla and M-Lab speed test measurements from four major U.S. cities with the BST methodology. We show that many low-speed data points are attributable to lower-tier subscriptions and not necessarily poor access. Then, for a subset of the measurement sample (80k data points), we quantify the impact of access link type (WiFi or wired), WiFi spectrum band and RSSI (if applicable), and device memory on speed test performance. Interestingly, we observe that measurement time of day only marginally affects the reported speeds. Finally, we show that the median throughput reported by Ookla speed tests can be up to two times greater than M-Lab measurements for the same subscription tier, city, and ISP due to M-Lab's employment of different measurement methodologies. Based on our results, we put forward a set of recommendations for both speed test vendors and the FCC to contextualize speed test data points and correctly interpret measured performance.
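
As a rough illustration of the contextualization idea (this is a toy sketch, not the authors' BST methodology), the snippet below maps a measured download speed onto the nearest advertised tier from a hypothetical ISP plan list and reports what fraction of that tier the test achieved; the tier speeds and the 10% headroom threshold are invented for the example.

```python
# Toy illustration only: not the paper's BST methodology.
# Maps a measured speed onto a hypothetical ISP's advertised tiers so that a
# "slow" test can be read relative to the plan the user likely subscribes to.
from typing import List, Tuple

# Hypothetical advertised download tiers (Mbps) for one ISP.
ADVERTISED_TIERS_MBPS: List[float] = [25, 100, 200, 400, 940]


def likely_tier(measured_mbps: float,
                tiers: List[float] = ADVERTISED_TIERS_MBPS) -> Tuple[float, float]:
    """Return (inferred_tier, fraction_of_tier_achieved).

    Picks the smallest advertised tier the measurement plausibly belongs to,
    assuming a test rarely exceeds the provisioned rate by more than ~10%.
    """
    candidates = [t for t in tiers if measured_mbps <= t * 1.1] or [max(tiers)]
    tier = min(candidates)
    return tier, measured_mbps / tier


for speed in [22.0, 61.0, 180.0, 520.0]:
    tier, frac = likely_tier(speed)
    print(f"measured {speed:6.1f} Mbps -> likely {tier:g} Mbps tier "
          f"({frac:.0%} of advertised)")
```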

The Ukrainian Internet under attack: an NDT perspective

Paper 2022

Akshath Jain, Deepayan Patra, Peijing Xu, Justine Sherry, Phillipa Gill

On February 24, 2022, Russia began a large-scale invasion of Ukraine, the first widespread conflict in a country with high levels of network penetration. Because the Internet was designed with resilience under warfare in mind, the war in Ukraine offers the networking community a unique opportunity to evaluate whether and to what extent this design goal has been realized. We provide an early glimpse at Ukrainian network resilience over 54 days of war using data from Measurement Lab's Network Diagnostic Tool (NDT). We find that NDT users' network performance did indeed degrade - e.g. with average packet loss rates increasing by as much as 500% relative to pre-wartime baselines in some regions - and that the intensity of the degradation correlated with the presence of Russian troops in the region. Performance degradation also correlated with changes in traceroute paths; we observed an increase in path diversity and significant changes to routing decisions at Ukrainian border Autonomous Systems (ASes) post-invasion. Overall, the use of diverse and changing paths speaks to the resilience of the Internet's underlying routing algorithms, while the correlated degradation in performance highlights a need for continued efforts to ensure usability and stability during war.
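
The headline statistic (loss rates up by as much as 500% relative to pre-wartime baselines) reduces to a baseline-relative percentage change per region. The sketch below shows one hedged way to compute such a figure with pandas over synthetic NDT-like samples; the column names (region, date, loss_rate), the cutoff date, and the data are assumptions, not the paper's pipeline.

```python
# Illustrative sketch: compute per-region percentage change in mean packet loss
# relative to a pre-invasion baseline window. Column names (region, date,
# loss_rate), the cutoff date, and the synthetic data are assumptions; this is
# not the paper's analysis pipeline.
import numpy as np
import pandas as pd

INVASION_DATE = pd.Timestamp("2022-02-24")

rng = np.random.default_rng(1)
dates = pd.date_range("2022-01-01", "2022-04-18", freq="D")
regions = ["Kyiv", "Kharkiv", "Lviv"]

# Synthetic NDT-like samples: loss rises after the invasion in some regions.
frames = []
for i, region in enumerate(regions):
    base = 0.01 * (i + 1)
    loss = (np.where(dates < INVASION_DATE, base, base * (1 + 2 * i))
            + rng.normal(0, 0.002, len(dates)))
    frames.append(pd.DataFrame({"region": region, "date": dates,
                                "loss_rate": loss.clip(min=0)}))
samples = pd.concat(frames, ignore_index=True)

# Mean loss before and after the cutoff, per region, then percentage change.
pre = samples[samples["date"] < INVASION_DATE].groupby("region")["loss_rate"].mean()
post = samples[samples["date"] >= INVASION_DATE].groupby("region")["loss_rate"].mean()

pct_change = 100 * (post - pre) / pre
print(pct_change.round(1).rename("loss_rate_change_pct"))
```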