Research: User Fairness as a Quality of Service Problem
In networks, we tend to think of Quality of Service (QoS) as relating primarily to classes of traffic. These classes of traffic, in turn, are grounded in application behavior driven by user expectations. For instance, users expect voice communications to be near real time so conversation can take place “normally,” which means delay must be held to a minimum. To support the codecs that make voice communication possible, jitter must be tightly controlled as well; it is often better to drop a packet that falls outside some jitter bound than to deliver it. ‘Net neutrality, on the other hand, tends to see the key factor as access to a particular service.
In this diagram, assume Y and Z are two different video streaming services; A is streaming video from Y, while B is streaming from Z. The ‘net neutrality argument is that the provider who runs the E-to-F link (or the network represented by that single link) should not be allowed to prefer the service at Y over the one at Z (or the other way around). One of the basic problems with ‘net neutrality is that not preferring one content provider over another is not as simple as it seems. For instance, what if Y is providing music streaming while Z is providing video streaming? There does not seem to be a clear way to determine what is “neutral” in this case.
There is a second problem hiding in this neighborhood, however. Assume Y and Z are both serving video, A is streaming from Y, and B is streaming from Z. What happens if A is a head-end device for ten different users, all of whom are streaming different videos from Y? In this case, forcing the traffic from the two services to be equal would provide unequal service levels: each of the ten users behind A would receive a tenth of what B’s single user receives. It would seem better, in this case, to treat each user fairly rather than each service. This is the “other side” of the QoS problem in terms of fairness.
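To make the imbalance concrete, here is a small back-of-the-envelope sketch; the link capacity and user counts are hypothetical, chosen only to illustrate the arithmetic rather than taken from the paper.

```python
# Hypothetical numbers: a 100 Mb/s link shared by ten users behind A
# (all streaming from Y) and a single user at B (streaming from Z).
link_capacity = 100.0   # Mb/s, assumed for the example
users_behind_a = 10     # users streaming from service Y through A
users_behind_b = 1      # users streaming from service Z through B

# "Service-fair": split the link equally between the two services.
per_service_share = link_capacity / 2
y_user_rate = per_service_share / users_behind_a   # 5 Mb/s per user behind A
z_user_rate = per_service_share / users_behind_b   # 50 Mb/s for the user at B

# "User-fair": split the link equally across all eleven users.
per_user_rate = link_capacity / (users_behind_a + users_behind_b)

print(f"service-fair: {y_user_rate:.1f} Mb/s per Y user, {z_user_rate:.1f} Mb/s for the Z user")
print(f"user-fair:    {per_user_rate:.1f} Mb/s per user")
```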
The problem with ensuring this kind of fair resource sharing is the amount of state required in all the devices along the way. Rather than keeping per-application or per-service state, the network must somehow keep per-user state, which would be massively expensive. The authors of the paper under review today, Fair Resource Sharing for Stateless-Core Packet Switched Networks with Prioritization, attempt to solve this problem.
To solve this problem, the authors build a congestion management system they call Activity Based Congestion Management (ABC). ABC involves configuring the first-hop nodes in the network with an activity monitor; in the sample network above, these would be routers C, D, G, and H. This new service allows each router to compute an activity level for each connected device, based on a baseline “fair” level of activity over a given time period. You can think of this activity level as something like a burst rate in a policer, except that no action is attached to it; packets are simply marked.
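The paper’s accounting is more involved than this, but the rough sketch below captures the idea as described above: count bytes per attached device over a window, compare the count to a configured “fair” baseline, and stamp the resulting ratio into the packet rather than policing the traffic. The class and parameter names here are invented for illustration; they are not the paper’s.

```python
import time
from collections import defaultdict

class ActivityMonitor:
    """Sketch of an edge-router activity monitor in the spirit of ABC.

    This is not the paper's algorithm; it only illustrates computing a
    per-device activity level against a "fair" baseline and marking
    packets with it instead of dropping or shaping them.
    """

    def __init__(self, fair_bytes_per_window, window_seconds=1.0):
        self.fair_bytes = fair_bytes_per_window   # baseline "fair" activity
        self.window = window_seconds
        self.bytes_seen = defaultdict(int)        # per-device byte counters
        self.window_start = time.monotonic()

    def _maybe_roll_window(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.bytes_seen.clear()               # start a fresh measurement window
            self.window_start = now

    def mark(self, device_id, packet_len):
        """Return an activity level to stamp into the packet.

        A value of 1.0 means the device is at its fair baseline; higher
        values mean it is more active than its fair share. No action is
        taken here -- the packet is only marked, never dropped.
        """
        self._maybe_roll_window()
        self.bytes_seen[device_id] += packet_len
        return self.bytes_seen[device_id] / self.fair_bytes
```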
Routers closer to the transit core, such as E and F, can then use these activity levels, combined with more standard type-of-service markings, to determine which traffic should be dropped when a queue backs up or overflows. The higher the activity level, the higher the chance a packet will be dropped. This allows the network devices beyond the edge to remain stateless; they do not need to know how the activity level was calculated, only how to factor it into their existing packet-drop mechanisms. The authors show that this method of deciding which packets to drop can, in fact, provide fairness between users, rather than simply fairness between applications or classes of traffic. They implement a version of the system using Active Queue Management (AQM) mechanisms that take the activity-level marking into account.
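As a rough illustration of how a core node might fold the marking into its drop decision, here is a simplified RED-style sketch. The thresholds and the way the marking scales the base drop probability are assumptions made for the example; they are not the AQM the authors actually implement.

```python
import random

def drop_probability(queue_len, min_th, max_th, max_p, activity_level):
    """Simplified RED-style drop probability weighted by the packet's
    activity-level marking (an illustration, not the paper's AQM).

    The base probability comes from queue occupancy; the marking carried
    in the packet then scales it, so packets from more-active senders are
    more likely to be dropped, while packets from senders below their fair
    baseline (activity level < 1.0) are less likely to be dropped.
    """
    if queue_len <= min_th:
        base = 0.0
    elif queue_len >= max_th:
        base = 1.0
    else:
        base = max_p * (queue_len - min_th) / (max_th - min_th)
    return min(1.0, base * activity_level)

def should_drop(queue_len, activity_level, min_th=20, max_th=80, max_p=0.1):
    """Decide whether a core router drops an arriving packet."""
    return random.random() < drop_probability(
        queue_len, min_th, max_th, max_p, activity_level)
```

The point of the sketch is that the core node needs no per-user state to make this decision; everything it needs arrives in the packet marking itself.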
Whether or not this mechanism is practical “in the wild” is left as an exercise for the reader. On the other hand, the concept of fairness between users being given equal footing with fairness between classes of service, or applications, is interesting: an idea worth hanging on to, and possibly exploring further in the future.