Ono and ISP Coziness

Some of you may have seen the coverage that Ono picked up today because of its ability to optimize P2P transfer speeds by enabling more topologically optimal distribution, all while requiring no interaction with the ISP. On one hand, I’m happy about this, as the whole P4P approach, with its dependence on ISP-supplied topology intelligence, doesn’t seem a viable long-term option. However, given where the bottlenecks are in the current system, Ono leaves some room for concern as well.

Specifically, in measurements we’ve seen, the peak-to-trough bandwidth ratio on the fixed broadband access edge, in both cable and DSL, is around 4x (although the exact value isn’t particularly relevant to this discussion). So, for example, if peak utilization were 1 Gbps, trough utilization would be around 250 Mbps. Because ISPs need to plan for peak loads during the capacity planning process, they’ll typically engineer capacity expansion around that 1 Gbps peak, plus some headroom that accommodates incremental bandwidth growth based on historical trends, projected new markets, subscriber acquisition, and so on.
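To make the arithmetic concrete, here’s a rough sketch in Python. The 4x ratio and 1 Gbps peak are from the discussion above; the 40% growth factor is a made-up placeholder, not a real planning input.

```python
# Illustrative sketch of the peak/trough arithmetic above.

PEAK_TO_TROUGH_RATIO = 4.0      # ~4x on cable/DSL access edges, per the measurements above
peak_gbps = 1.0                 # observed peak utilization
trough_gbps = peak_gbps / PEAK_TO_TROUGH_RATIO   # ~0.25 Gbps

# Capacity is engineered for peak load plus projected growth
# (historical growth, new markets, subscriber acquisition, ...).
annual_growth = 0.40            # hypothetical 40% year-over-year growth
engineered_gbps = peak_gbps * (1 + annual_growth)

print(f"trough ~= {trough_gbps:.2f} Gbps, plan for ~{engineered_gbps:.2f} Gbps")
```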

Needless to say, much of this peak load is driven by P2P and other protocols. So, when folks come up with solutions for improving P2P transfer rates, by a professed 207% in Ono’s case, that 1 Gbps peak might now be roughly 2.1 Gbps, and since the trough traffic is largely unaffected, the peak-to-trough ratio may now be 6x or 8x rather than 4x. Arguably, this exacerbates the problem where it’s most obvious: in the access network, and in particular in the cable access network, where downstream bandwidth is shared among multiple subscribers. Given these peak burst rates, ensuring fairness among users of the network becomes even more critical.
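A quick sketch of that ratio shift, reading the professed 207% as a ~2.07x multiplier on peak throughput (the same reading as the numbers above) and assuming, as a simplification, that the gain lands entirely at peak hours:

```python
# How a P2P speedup stretches the peak-to-trough ratio, assuming the
# trough stays flat while the peak absorbs the full improvement.

peak_gbps, trough_gbps = 1.0, 0.25          # the 4x ratio from above
speedup = 2.07                               # Ono's professed 207%
new_peak = peak_gbps * speedup               # ~2.1 Gbps
new_ratio = new_peak / trough_gbps           # ~8.3x, up from 4x

print(f"new peak ~= {new_peak:.1f} Gbps, ratio ~= {new_ratio:.1f}x")
```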

Other applications have improved transaction rates considerably as well. For example, most browsers open multiple connections to web servers in order to download images and text in parallel, and applications like iTunes and Google Maps open tens of connections in order to sidestep TCP’s inherent per-session (versus per-user) congestion control and optimize aggregate transaction rates. When your single SMTP (email) connection is contending for network resources with 300 TCP connections from your neighbor’s ‘optimized’ application, ensuring fairness among subscribers by the ISP is critical IF contention for resources exists, in particular on access loop capacities.
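A toy model makes the per-connection (rather than per-user) sharing obvious. Competing TCP flows through the same bottleneck converge on roughly equal shares, so the split looks like this; the 100 Mbps link and the connection counts are just illustrative numbers, not measurements:

```python
# Toy model of TCP's per-connection fair sharing on a shared access loop.

link_mbps = 100.0
connections = {"you (single SMTP session)": 1,
               "neighbor ('optimized' app)": 300}

total = sum(connections.values())
for user, n in connections.items():
    # each TCP connection converges on roughly an equal share of the link
    share = link_mbps * n / total
    print(f"{user}: {n} conns -> ~{share:.2f} Mbps")
# you: ~0.33 Mbps; neighbor: ~99.67 Mbps
```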

The implications of this aren’t felt just in the network, from an available-bandwidth and packet-forwarding quality of service perspective, but also by devices like NATs and stateful firewalls that need to track all of these connections. Applications that arguably exploit TCP’s per-session, Internet-friendly congestion control in order to optimize the local user’s experience are becoming more and more common. More focus on fairness across users, as opposed to fairness across transport connections, is sure to be a critical issue in transport protocol design and network architecture in the coming months.
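For a back-of-the-envelope feel for the state-tracking cost, here’s a sketch; the subscriber count, flows per subscriber, and bytes per flow entry are all assumptions for illustration, not figures for any real device:

```python
# Rough estimate of per-flow state pressure on a NAT or stateful firewall.

subscribers = 10_000
flows_per_subscriber = 300        # one 'optimized' app per household
bytes_per_flow_entry = 256        # hypothetical per-entry state cost

entries = subscribers * flows_per_subscriber
mib = entries * bytes_per_flow_entry / 2**20
print(f"{entries:,} flow entries, roughly {mib:.0f} MiB of state")
```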

I believe that if Ono-like solutions enable topologically optimal distribution of content, that’s a good thing. However, there will always be a bottleneck, and ensuring it’s in the place that scales best and is most manageable is critical.
