Blunt Instruments

In his written statement to the Senate Committee on Commerce, Science and Transportation on Tuesday of this week, FCC Chairman Kevin Martin referred to a particular method of traffic management as a “blunt means to reduce peer-to-peer traffic by blocking certain traffic completely.”

The “blunt means” refers to how some Deep Packet Inspection (DPI) platforms manage traffic when placed out of line. An out-of-line device cannot drop or delay packets itself, so its options for controlling traffic are to directly or indirectly signal the sender to slow down, or to terminate the communication session. Terminating a session is the low-hanging fruit because:

1) It’s easy to send a TCP reset message to the sender (see the sketch after this list);

2) How harmful can it be to reset a peer-to-peer connection? Most peer-to-peer file-sharing clients have hundreds of sessions open!
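To make point 1 concrete, here is a minimal sketch of a forged TCP reset, written with the scapy packet library. This is my own illustration, not any vendor’s published code; the addresses, ports, and sequence number are hypothetical placeholders. In practice the device must copy the exact connection 4-tuple and an in-window sequence number it observed on the link, or the receiving TCP stack will ignore the reset.

```python
# Illustrative sketch only (not vendor code): the forged TCP reset an
# out-of-line device can inject to tear down a session it dislikes.
from scapy.all import IP, TCP, send

def forge_reset(src_ip, dst_ip, sport, dport, seq):
    # The spoofed segment pretends to come from one endpoint of the
    # session. Flags "R" marks it as a reset; on receipt, the other
    # endpoint aborts the connection without consulting the real peer.
    # The sequence number must fall inside the receiver's window to be
    # accepted, which is why the device watches the flow first.
    rst = IP(src=src_ip, dst=dst_ip) / TCP(sport=sport, dport=dport,
                                           flags="R", seq=seq)
    send(rst, verbose=False)

# Hypothetical example: reset one peer-to-peer session.
forge_reset("203.0.113.10", "198.51.100.20", 51413, 6881, seq=1000)
```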

For DPI, out of line versus in line has been an ongoing debate. Overall, out-of-line DPI is easier to deploy in a network and easier to engineer as a product. Because an out-of-line placement “taps” into the optical links or receives a mirrored copy of each packet from another network device, there is less risk in inserting it into the network. The DPI processing is not in the critical service delivery path of subscriber traffic. If the DPI element fails, it is not service-affecting. “Hey, the out-of-line DPI element crashed! No worries! The subscriber’s real packets are somewhere else!” By being out of line, there are significantly fewer performance constraints to engineer into the product, because the packet seen by the DPI is only a copy. “Half a second of latency? No worries! Dropped packets? No worries!”
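As a rough sketch of what “only a copy” means in practice, the following toy monitor (again my own illustration, assuming scapy and a hypothetical mirror port named eth1) passively counts bytes per TCP flow. Because it never forwards anything, it can stall or crash outright without touching subscriber traffic.

```python
# Toy out-of-line monitor: it sees only mirrored copies of packets, so
# a slowdown or crash here never disturbs the subscriber's real traffic.
# "eth1" is a hypothetical mirror/tap port, not from the original post.
from collections import Counter
from scapy.all import IP, TCP, sniff

flow_bytes = Counter()

def observe(pkt):
    if IP in pkt and TCP in pkt:
        # Tally bytes per TCP 4-tuple; all analysis happens off the
        # critical service delivery path.
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        flow_bytes[key] += len(pkt)

sniff(iface="eth1", prn=observe, store=False)
```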

Out of line is not the only traffic management option. As FCC Chairman Martin notes in his written statement, “… more modern equipment can be finely tuned to slow traffic to certain speeds based on various levels of congestion.” This type of finely tuned traffic management requires DPI to be placed in line. In-line DPI can gently slow aggressive peer-to-peer applications during periods of congestion while simultaneously ensuring that every active subscriber has an equal share of the bandwidth. In-line DPI can promote fairness while still preserving subscribers’ freedom to access the content and services of their choice.
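What “an equal share for every active subscriber” can mean is the classic max-min fair allocation: satisfy small demands in full and split the leftover capacity evenly among the heavy users. The sketch below illustrates that arithmetic; it is my own model, not any particular vendor’s scheduler.

```python
# Max-min fair allocation: a simple model of "gently slow the heavy
# peer-to-peer users while every active subscriber gets a fair share."
# Illustrative sketch only, not a vendor's actual scheduler.

def max_min_fair(demands, capacity):
    """Return per-subscriber rates: light users receive their full
    demand, heavy users split the remaining capacity evenly."""
    alloc = {}
    remaining = dict(demands)
    while remaining:
        share = capacity / len(remaining)
        # Subscribers asking for no more than the equal share are
        # satisfied in full; their leftover is redistributed.
        satisfied = {s: d for s, d in remaining.items() if d <= share}
        if not satisfied:
            # Everyone left is heavy: split the rest evenly.
            for s in remaining:
                alloc[s] = share
            break
        for s, d in satisfied.items():
            alloc[s] = d
            capacity -= d
            del remaining[s]
    return alloc

# Hypothetical congested 10 Mbps link: web and voice users keep their
# full rates; the p2p client is throttled to the residual share.
print(max_min_fair({"web": 1.0, "voice": 0.1, "p2p": 40.0}, 10.0))
# -> {'web': 1.0, 'voice': 0.1, 'p2p': 8.9}
```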

Traffic management made possible by in-line DPI should resonate with the Chairman. Why? In 2005 the FCC established four principles to promote the open and interconnected nature of the public Internet:

• Consumers are entitled to access the lawful Internet content of their choice;
• Consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement;
• Consumers are entitled to connect their choice of legal devices that do not harm the network;
• Consumers are entitled to competition among network providers, application and service providers, and content providers.

The Commission also noted that these principles are subject to reasonable network management, which would seem to cover in-line traffic management that, during periods of congestion, ensures fair access to available bandwidth, content, and services without blocking or denying service.

But deploying DPI in line is a whole different ball game. Every “live” packet of every subscriber is directly processed by the DPI platform. An in-line placement has to be certified for sitting in the subscribers’ service delivery path; it has to have failover and redundancy so there is no service loss in case of failure; and it has to have low latency and no dropped packets in order to preserve the service quality of real-time applications like voice and video. Engineering this type of traffic management, say at 10G full duplex bandwidth rates, with low latency, no packet loss, and full failover capability, is both costly and difficult. More importantly, in-line DPI takes a certain mindset and discipline, as evidenced by the European Advanced Networking Test Center AG (EANTC) P2P test report.
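To put rough numbers on why this is hard, consider the worst-case packet arrival rate that 10G full duplex implies. The arithmetic below is my own back-of-the-envelope estimate using standard Ethernet framing overhead and minimum-size 64-byte frames, not a figure from the EANTC report.

```python
# Back-of-the-envelope packet budget for in-line DPI at 10G full duplex,
# assuming the worst case of minimum-size (64-byte) Ethernet frames.
# Each frame occupies 64 + 8 (preamble) + 12 (inter-frame gap) = 84
# bytes of wire time. My estimate, not a number from the EANTC report.
LINE_RATE_BPS = 10e9
WIRE_BYTES_PER_FRAME = 64 + 8 + 12

pps_one_way = LINE_RATE_BPS / (WIRE_BYTES_PER_FRAME * 8)  # ~14.88 Mpps
pps_both_ways = 2 * pps_one_way                           # ~29.76 Mpps
ns_per_packet = 1e9 / pps_both_ways                       # ~33.6 ns

print(f"{pps_one_way/1e6:.2f} Mpps per direction, "
      f"{pps_both_ways/1e6:.2f} Mpps total, "
      f"{ns_per_packet:.1f} ns to inspect each packet")
```

A few tens of nanoseconds per packet leaves no room for a leisurely software path; a single trip to main memory can cost more than the whole budget.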

The benefits of DPI for traffic management, while well known to service providers, are not well understood by consumers, and the entire debate about Network Neutrality has been misshaped by this lack of transparency. I’ll talk about transparency in a future blog post.

3 Responses to “Blunt Instruments”

April 24, 2008 at 4:27 pm, rawsome said:

While the review of in-path/out-of-path deployment models is good, I think most people would take the “blunt means” statement to refer to the effect of dropping/blocking/resetting all p2p communications. It wasn’t done in times of congestion or resource contention, or as a limited means of traffic management to relieve suddenly over-capacity links, but as a policy across the whole consumer base.

One question I’ve been wondering about with the out-of-path deployment model: how do you identify what’s a p2p session? Trying to identify via port numbers would seem to cause false positives and miss non-standard/HTTP-tunneled traffic (as was reported in this case, with Lotus Notes users being unable to pass traffic). Blocking any sessions that are long-lived/large would seem to cause a lot of collateral damage too (hello, Windows service packs!).

There’s also a question about how harmful resetting one session can be, when there doesn’t seem to be any evidence that’s what they were doing. From what I’ve read, RST packets were spoofed/flooded on all sessions. Also, how effective would a p2p blocking system really be if it only stopped one out of hundreds of sessions? I’d want my money back if I paid for a solution and that was the best it could do.

Right now I’m downloading Ubuntu 8.04 via BitTorrent. It was released today and all the public mirrors are down. This is pretty much the only way for me to get it, and it’s working very well. I’d be quick to switch providers if I suddenly lost this ability, but many consumers don’t have a choice in high-speed ISPs.

Looking at this example, it seems there’s no net difference in the total bandwidth used between 100k people all downloading a file via HTTP and the same number redistributing it among themselves via p2p. One just seems much more fault-tolerant and effective at handling these sudden, large, popular files.

April 24, 2008 at 4:49 pm, dirtywerx said:

Keep in mind that not only must the in-line device be 10G bidirectional full duplex TODAY, it must also be 40G next month and 100G by the end of the year. It also must be less than 1 RU in size for multiple 10G full-duplex links and draw almost no power. In some folks’ minds it must also be able to perform full SONET or Ethernet test suites (loopbacks near- and far-end, BERT testing, signal quality enhancement/levelling, optical power adjustments, multiple wavelengths…)… Oh, and if you drop one in a Verizon or AT&T POP it probably also has to be 48VDC and NEBS3 compliant.

Also, it probably has to be cheaper than just buying more links/bandwidth, so that the increased cost of this set of devices (knowing that optimally they are deployed at every cable head-end or DSL subscriber-facing interface) stays below the other options available. It must also not impact end-user pricing.

It’s a tough row to hoe…

April 28, 2008 at 11:14 am, Interesting Bits - April 28th, 2008 « Infosec Ramblings said:

[…] Blunt Instruments […]
