Lessons for the Internet from Swine Flu: Bear with me!

This morning on my drive to work I listened to a story on NPR about swine flu in relation to past epidemics. Just an hour or so earlier I had sent a message over Twitter that I was trying to avoid the flurry of swine flu chatter and focus on getting caught up on the EU Ministerial Conference on Critical Information Infrastructure Protection going on in Tallinn. I also need to focus on Conficker work, specifically lining up some conversations about remediation for some upcoming trips.

All of these are linked, it seems, and there are some lessons to learn from swine flu about how we need to respond to Conficker and other Internet epidemics. Obviously this also bears on the larger job of protecting national and international communications backbones. Suddenly my playlist is packed with stories from the past two days of public health officials talking about pandemics. I’m looking for lessons to learn and things we should be thinking about if the Internet really is similar.

It’s debatable whether the Internet really is something like public health. We talk about viruses and infections and often use biological analogies. We even based the original worm-spread models on epidemic models (though in practice the two behave differently). Because of this, many folks have even proposed a Cyber CDC to carry out research and coordination. Weren’t CERTs supposed to do this, as we learned after the Morris worm of 1988? But just because we can draw the parallels (however flawed) doesn’t mean the Internet is truly analogous to public health.
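Since the post leans on that parallel, here is a minimal sketch (my own illustration, with made-up parameters, not anything from this post or from the original papers' data) of the simple SI/logistic spread model that the early worm literature borrowed from epidemiology:

```python
# A minimal sketch of the SI / logistic worm-spread model used in the early
# literature. All parameters are illustrative assumptions chosen only to
# show the shape of the curve: slow start, explosive growth, saturation.

import math

N = 360_000   # assumed size of the vulnerable population
K = 1.8       # assumed compromises per infected host per hour
I0 = 1        # initial number of infected hosts

def infected(t_hours: float) -> float:
    """Logistic solution of dI/dt = K * I * (1 - I / N)."""
    a = I0 / (N - I0)
    return N * a * math.exp(K * t_hours) / (1 + a * math.exp(K * t_hours))

for t in (0, 4, 8, 12, 16):
    print(f"t = {t:>2} h: ~{infected(t):>9,.0f} infected hosts")
```

The point of the sketch is the shape, not the numbers: biological epidemics follow a similar curve, but worm outbreaks compress it from weeks into hours or minutes.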

At its core, as a set of technologies, the Internet is simply infrastructure: communications infrastructure. It is just routers, packets, switches, fiber and copper, and ultimately bytes on the wire. This isn’t much different than the phone system, and its role in global commerce, communications, and entertainment is no less significant.

But unlike the telecommunications infrastructure, the endpoints can cause outages via malcode, and the infrastructure itself is vulnerable to attacks from any endpoint. Furthermore, data stored on other nodes is vulnerable to eavesdropping or access by outsiders. The water supply isn’t analogous; a stranger halfway around the world can’t modify the water in your tap. The telephone system isn’t analogous; it’s a gated network and devices can’t make arbitrary requests for resources.

Completely unlike any public health concern are the areas of crime and espionage. Online crime is a hot topic and appears to be growing rapidly. I’m not aware of any robust numbers around it; there’s a lot of speculation based on samples whose representativeness of the true population is unknown. There’s also a lot of hype that some people throw in to drum up visibility for themselves. But experientially it appears to be growing.

On top of that you have espionage and data leakage, whether corporate or nation-state. Neither of these issues has much of an analog in public health, and it’s unclear to me what role the ITU plays when the telecommunications infrastructure is abused to commit such acts. Those matters are usually handled internally, far outside of any shared organization, and they tend to have sharply polarized sides.

The Internet, then, is infrastructure at risk of new kinds of attack. It seems to occupy new ground between public health, where you have to help address uncontrolled endpoints (people) and their ability to disrupt the world’s economic system, and the pile of telecommunications equipment it physically is. So can we still draw on public health models and pandemics when dealing with global events like Conficker that threaten millions of lives’ worth of finances or data, and possibly the communications infrastructure the globe depends on? Maybe.

Thinking about the above, it seems to me that the following parallels from public health responses to epidemics are worth exploring.

We may really need some sort of global Internet health body akin to the WHO. I don’t know if a “Cyber CDC” is exactly what’s needed, but some form of truly global coordination and visibility is. What we have now sort of works, but it is limited by competitive pressures and a horribly incomplete understanding of a complex system with an untold number of vulnerabilities.

Imagine a scenario where customers of drug company X didn’t get cold A but got colds B and C, while drug company Y’s customers got A and B but not C. We don’t have it quite that overtly, mind you, but you’d have a comparable competitive landscape. You probably wouldn’t get cooperation between competing drug companies to defend against common diseases, enabling epidemics to form. Put aside the fact that people would surely die and focus instead on how one might solve this: making sure that all drug companies got the common things and could defend against them, but could pick and choose among things that are less prevalent or less pressing for their immediate customer base. That’s essentially what we have with the current infosec landscape.

So, if we’re to have an accurate and complete picture of threats to the Internet (and hence to global commerce), what would we need? What are the real threats to the Internet, and how do you measure them? Can someone take all of the real-time data feeds our sensor networks produce and come up with an accurate picture of the state of the Internet? Where are the gaps, and what questions need to be answered, with what tools, and in what format? Folks have tried and tried, but we don’t seem to be getting anywhere. We’re a long way off from a true early warning system.

Next, what is the response of such an organization? What are its goals and its mission? The obvious goals are to stop the spread of whatever is causing problems on the network and to cure any victims if possible: stop viruses and worms from spreading when needed, and, if someone has come under attack, stop the attack itself (packet flood, data exfiltration, etc.).

As noted in one of the NPR pieces I listened to this morning, alert condition scales are for governments, not individuals. Ultimately any Internet monitoring group can only help inform and coordinate governments’ and major enterprises’ actions to protect their constituents. The idea of a global body that could change anyone’s router or PC is unacceptable to almost everyone; even the most power-mad of us would cringe at the management nightmare that would be! However, what your government, employer, or ISP could do in response to the threat – locking down infected PCs, for example – would be guided by this kind of information and guidance. This doesn’t yet address outsiders giving trained assistance, however.

One of the biggest issues we see right now in any global Internet crisis management is an unclear chain of command, which leaves us always asking “who is in charge?” There’s a tremendous power vacuum that all too often gets filled by the wrong folks with the wrong skills, motives, or abilities. Also worth identifying are the emergency responders. In the event of a civil emergency we know who they are: either full-time professionals or trained civilian part-timers. When a crisis is encountered, what is the plan, who owns the decision-making process, and whom do people answer to? None of this is very clear in most incidents, Conficker included. This lack of concreteness and transparency hinders a successful effort.

Finally, to get a handle on the problem and to task efforts appropriately, accurate and complete information about the infected population is vital. Right now we have some good numbers on Conficker around the world, but to think that we have this visibility for other threats is wrong. Every threat is different, so measuring populations is a challenge (AV company numbers are rarely right, by the way; we need something better), but no one said this would be easy.
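As a toy illustration of why such counts are slippery (entirely my own sketch, with a hypothetical sinkhole log format, not anything from the actual Conficker measurement effort): the obvious approach of counting unique source IPs seen at a sinkhole both overcounts and undercounts the real population.

```python
# Toy sketch: estimate an infected population from hypothetical sinkhole logs
# of (timestamp, source IP) hits. Format and values are illustrative only.

from collections import defaultdict
from datetime import datetime

hits = [
    ("2009-04-27T00:05:12", "198.51.100.7"),
    ("2009-04-27T09:41:03", "198.51.100.7"),   # same host checking in again
    ("2009-04-27T13:22:45", "203.0.113.9"),
    ("2009-04-28T02:10:31", "203.0.113.10"),   # possibly the same host after a DHCP lease change
]

unique_ips_per_day = defaultdict(set)
for ts, ip in hits:
    unique_ips_per_day[datetime.fromisoformat(ts).date()].add(ip)

for day, ips in sorted(unique_ips_per_day.items()):
    print(day, "->", len(ips), "unique IPs")

# Caveats: DHCP churn makes one infected host show up as several IPs over time
# (overcounting), while NAT hides many infected hosts behind one IP (undercounting).
```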

Another challenge here for any such organization is time. Events like SQL Slammer demonstrate that problems on the Internet move a lot faster than they do in real life. By the time we had diagnosed the problem, the Internet was crushed under a traffic flood. Defensive measures were in place by that point, even without global coordination, but global coordination would have helped restore all networks faster. The Internet moves at the speed of light, and problems sometimes move just as fast.
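A back-of-the-envelope sketch (using the roughly 8.5 second early doubling time commonly cited in the published Slammer analyses, a figure from outside this post) shows why a human-in-the-loop response simply cannot keep pace:

```python
# Back-of-the-envelope sketch: unchecked exponential growth with an assumed
# early-phase doubling time of roughly 8.5 seconds. Real spread saturates
# once the pool of vulnerable hosts runs out.

doubling_time_s = 8.5
initial_hosts = 1

for minutes in (1, 2, 3, 5):
    doublings = minutes * 60 / doubling_time_s
    hosts = initial_hosts * 2 ** doublings
    print(f"after {minutes} min: ~{hosts:,.0f} hosts if growth were unchecked")
```

Within a couple of minutes the unchecked curve already exceeds any plausible vulnerable population, which is the point: the outbreak is effectively over before a conference call can even be scheduled.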

Finally, to really address this systematically, we need to stop treating the Internet as “something other” and start treating it as a key piece of infrastructure. Key policy makers shouldn’t try to prove that they “get it” by talking about how they use the Internet; they don’t say things like “I drive on roads just like you” or “My kids use the phone for school”. Every policy think tank and policy board should have representation for Internet infrastructure on par with public health and classic infrastructure (power, water, etc.). It’s that key an ingredient in the global backbone at this point, even if its deployment to individuals is uneven (aka the digital divide).

I think there is adequate reason to look to established crisis management setups and learn lessons from history if we’re to provide reliability for the Internet infrastructure. There appears to be no shortage of will in the EU and the US to establish more significant cyber security policies and practices. Hopefully the above highlights questions that we need to answer, opens avenues for research, and offers the direction that so many ministers call for at events like CIIP. The time for empty platitudes is long over, the time for visionless talk is past, and there can be no more leadership vacuums. We have an opportunity; we need to seize it.

4 Responses to “Lessons for the Internet from Swine Flu: Bear with me!”

April 28, 2009 at 11:34 am, Arturo Servin said:

I imagine you have read “How to own the Internet in your spare time” (Staniford, Paxson and Weaver, 2002). In the last few days I have been thinking about how they modelled the worm spreading using mathematical disease spreading models. In the paper they also write about the establishment of a “Center for Disease Control”. I do not know what happened to that effort.

-as

April 28, 2009 at 12:01 pm, cw said:

This is a significant work and a longer-term paradigm shift that integrates modern computing technology into existing mindsets applied towards other infrastructure. If an influential member of Congress or another organization with clout would take up this cause, we might see some shifts. But I think they will be gradual, and will require some re-working of capitalistic models as applied to the security industry.

April 30, 2009 at 5:39 am, Utopiah said:

You might also be interested in Computer Virus Epidemiology by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani (mentioned earlier by Schneier: http://www.schneier.com/blog/archives/2009/02/computer_virus.html) and also Parasites and Infectious Disease: Discovery by Serendipity and Otherwise (ISBN: 0521858828), Cambridge University Press, 2007.

May 09, 2009 at 6:18 am, Lennie said:

Let me say I think the problem with an early warning system would be that you get too many warnings. Even if you update a mail server’s virus scanners every 15 minutes (and trust me, you actually do get new signatures every single time), it’s still possible to miss new variants. That should give you an idea of how many new variants of viruses and malware are created.
