Last week the New York Times printed an article by John Markoff entitled "Do We Need a New Internet?" In the article, Markoff states, “…there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over.” Stanford’s Nick McKeown is quoted in the article: “Unless we’re willing to rethink today’s Internet, we’re just waiting for a series of public catastrophes.” The article speculates that in a new network architecture, some users would “give up their anonymity and certain freedoms in return for safety.”
It’s certainly exciting to see core computer science issues featured so prominently in the press! Indeed, this article has generated quite a bit of discussion in the research community. For example, while acknowledging that a new network architecture would certainly play an important role in improving security, Purdue’s Gene Spafford writes on his CERIAS blog, “Do we need a new Internet? Short answer: Almost certainly, no.” (Gene tells me that he will be interviewed on this topic on C-SPAN’s Washington Journal, airing at 9:30am on Saturday, February 21.) UCSD’s Stefan Savage is largely in agreement, saying that “the network is by and large the smallest part of the security problem” and that “at a technical level the security problem is really an end-host issue, coupled with an interface issue — lots of power given to lots of different pieces of software whose couplings present opportunities to bad guys that aren’t anticipated, at a social level its a human factors issue.” The bottom line is that, outside of resource management (that is, controlling DDoS) and attribution/accountability, the main sources of security risk are at the end points — a key point missed in the NY Times article. Peter Freeman perhaps puts it most plainly:
To be succinct, although technical improvements are clearly needed, a large part of the security issue comes back to people, not technology. If we could figure out how to educate people so they don’t respond to pleas from Nigerians who need to transfer money or they don’t leave their passwords on post-its or never install the frequent security patches that are issued, we could make huge improvements immediately.
That’s not to say, however, that reinventing some aspects of networking isn’t an important research goal. Peter Freeman, while he was the director of NSF’s computer science (CISE) division, was instrumental in helping to launch the GENI Project in 2004, with the goal of developing an experimental platform for exploring truly reliable and higher capacity networks. For Freeman and others, new approaches to networking were deemed an important area for government investment because of the basic nature of the research problems involved.
Mounting a global-scale effort such as GENI has been a major challenge for the computing research community, perhaps similar to what the astronomy community goes through when it decides to develop large telescopes. But the initiative has already had several ripple effects. Guru Parulkar, who was the NSF program manager for GENI at the start, went to work with Nick McKeown and helped start the Clean Slate Project mentioned in the NY Times article. The GENI effort also put Princeton’s Larry Peterson in the middle of things, as the PlanetLab Consortium was one of the most influential early inspirations for GENI. And now, a much broader visioning effort in Network Science and Engineering, or NetSE, supported by the Computing Community Consortium (CCC), is defining the critical research questions in a wide range of network-related areas.
As for GENI itself, significant progress on development of a prototype has been made, coordinated by a GENI Project Office (GPO) and involving a large number of academic researchers. BBN’s Chip Elliott says that a version of the testbed will be available for early testing in a matter of months, “which will allow researchers to investigate many core networking research questions, some of which are the thorniest questions for Network Science and Engineering, upon the earliest end-to-end prototype of GENI.” Ellen Zegura, Georgia Tech professor and NetSE Council Chair, cites the importance of this development, saying “For me, the deep technical issues of security and privacy are at the heart of the GENI effort, and one of the main reasons for developing it.”
The demand for better security grows with the public’s dependence on computing and networking. As Chip Elliott states:
Would our lives improve if all aspects of the Internet were firmly bound to real-world personal and organizational identities? Might total public transparency reduce crime and misbehavior – in short, might less privacy lead directly to more security? Is privacy already a vanishing concern, fated to disappear in a few years without widespread regret?
Careful thinking will illuminate these issues — particularly if coupled to a vigorous program of experimentation.
This, in a nutshell, is what the NetSE and GENI initiatives aim to address.
I am concerned that the community is selling GENI (and related next-gen Internet efforts) under the “more secure Internet” banner, guessing that this is what gets the general public excited, while knowing that the contributions of new network technology are likely to be minor. I suppose this is necessary to attract funding, but it is not exactly full disclosure. (Infrastructure protection is likely to be much more promising, but, again, the reasons we don’t have secure DNS or secure BGP have much more to do with economics and the lack of ISP cooperation than with the lack of miracle technologies.)
For example, if authentication truly were the problem, nothing would prevent email providers from enforcing DKIM (sender authentication) or S/MIME, both readily available technologies. The reason this does not happen is not a lack of network technology or network science; rather, it’s hard to do this when state actors are, at best, ignoring misbehavior by their citizens. A new network protocol is not going to make Russia or Nigeria better at enforcing laws.
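To underline how readily available this kind of sender authentication already is, here is a minimal sketch of checking a message’s DKIM signature in Python, assuming the third-party dkimpy package is installed; the saved message file name is hypothetical and not part of the comment above.

```python
# Minimal sketch: verifying DKIM on a received message with the dkimpy package
# (assumes `pip install dkimpy`; "message.eml" is a hypothetical saved message).
import dkim

with open("message.eml", "rb") as f:
    raw_message = f.read()  # the full raw message: headers plus body

# dkim.verify() looks up the signer's public key in DNS and checks the signature.
if dkim.verify(raw_message):
    print("Valid DKIM signature: the signing domain vouches for this message.")
else:
    print("No valid DKIM signature: treat the sender identity as unauthenticated.")
```

The point stands either way: the technology is deployable today; what is missing is the incentive and cooperation to require it.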
IPv6 was sold on the same premises, plus QoS, so there’s unfortunately some tradition to this approach.
“at a technical level the security problem is really an end-host issue, coupled with an interface issue — lots of power given to lots of different pieces of software whose couplings present opportunities to bad guys that aren’t anticipated, at a social level its a human factors issue.”
– Where exactly do protocol errors/weaknesses fit into this general conclusion?
– This guy is old enough that he must remember the start of the web. I DO NOT!
All I know is, we are still riding the same horse 🙂
How about an old comment from Ben Franklin:
“They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety”
There is no question that host insecurity is a serious problem. But so is network insecurity. Relying on people to recognize carefully crafted emails, apparently coming from colleagues, as attacks is not a sound basis for security. Can we perhaps agree that without better accountability for actions in cyberspace, the present situation is likely to continue or get worse? And that a better engineered network, bringing improved accountability, must be part of the solution?
In my experience there are many issues that get conflated in this discussion: improving security vs improving security research, securing the network itself vs providing security for network users, dealing with problems of the global Internet vs those of an enterprise network, new mechanisms vs new data, and so on. Because of this ambiguity it’s pretty easy to talk past one another.
Let me try to tease apart a small subset of these pursuant to my particular interests and views on the topic (warning, rambling text ahead composed on an airplane).
Let’s first talk about improving security for Internet users (as opposed to improving security research or securing the Internet itself). As Peter writes, it’s true that I’m not a big believer that new network mechanisms, by themselves, are well positioned to do much about our security problems. Let me explain this via a completely unfair analogy:
How can we improve our highway system to reduce crime?
Sounds ridiculous that way, no? While criminals _use_ the highway system routinely, just like all of us, their motivations and their opportunities are not usually thought to be due to fundamental problems with the existing highway architecture. By the same token, most of our vulnerabilities today stem either from software issues (particularly the richness of the interactions that we permit and our inability to anticipate and prevent all the ways in which these might be abused; petar: I take this to include problems in protocols, implementations, etc.) or from psychological/sociological issues (the inability of users to understand the risks and potential negative consequences of their actions, and their minimal interest in doing so). In turn, our adversaries are motivated to exploit these vulnerabilities because there is significant economic value to be extracted in doing so (largely due to the success of e-commerce). None of these fundamental technical, social, or economic issues is caused by the network, and thus there are limits on how much a network-based solution has to offer.
That does not mean there is no role for network mechanisms. Let’s consider what a network does: it gets messages from point A to point B. This suggests the two places where there is potentially a significant role for network security mechanisms: resource management (choosing whether or not to forward packets from A to B) and attribution/accountability (trying to verify or enforce the requirement that packets from A are in fact from A, and perhaps even provide subsequent evidence about a packet’s causal origin). The first corresponds roughly to the problem of DDoS defense, where there is clearly a place for network mechanisms (indeed, for many kinds of DDoS, network mechanisms are really the only kind of defense that makes sense); the second to mechanisms for tracing packets to their causal origins (although this is at best only partially a network problem, since proxies, WiFi hotspots and other kinds of “stepping stones” allow causal origins to be laundered). The former issue has received a fair amount of attention, the latter less so (although some forms of network capability systems combine both resource management and accountability within a single feature). I suspect accountability is a somewhat less popular topic because of the social and political tension between those seeking to protect privacy and those seeking to increase accountability for the purposes of enforcement/deterrence… a classic kind of quagmire that many researchers like to avoid getting caught in. With respect to Jerzy’s earlier comment, however, I respectfully suggest that quotes from famous figures tend to provide more heat than light when trying to move forward on such issues. To wit, all of us but the most libertarian have already surrendered liberty for safety, and done so quite happily, via things like safety regulations on drugs, licensing requirements for drivers, etc. I think the first question is not whether or not such tradeoffs should ever be made, but rather what kinds of tradeoffs could be made and the quality of their impact (both positive and negative). Absent people actually talking about this issue in a concrete and substantive way, we’re all blowing smoke and waving our hands in the abstract; this doesn’t tend to get us anywhere.
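To make the resource-management role concrete, here is a minimal sketch of the kind of forwarding decision a network element could make about packets from a given source, using a per-source token bucket; the rate and burst values, and the function name, are illustrative assumptions and not anything proposed in the comment above.

```python
# Minimal sketch: a per-source token bucket as one example of the network's
# resource-management role (deciding whether to forward packets from A to B).
# RATE and BURST are arbitrary illustrative values.
import time
from collections import defaultdict

RATE = 100.0   # tokens (packets) replenished per second, per source
BURST = 200.0  # maximum bucket size, per source

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def forward_allowed(src_addr: str) -> bool:
    """Return True if a packet from src_addr should be forwarded right now."""
    b = _buckets[src_addr]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # bucket empty: drop (or deprioritize) this packet
```

Note that a mechanism like this only addresses volume; it says nothing about whether the claimed source address is truthful, which is exactly where the attribution/accountability questions above come in.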
Another way of looking at this overall question is to consider the context in which the network can make a useful decision.
In general, I think network mechanisms work well when there are strong invariants that can be asserted about the user/service environment. When one has good knowledge about the participants and their roles, one can create arbitrary restrictions on connectivity (i.e., you only get to connect to the services your job authorizes you to). Indeed, the work that Martin Casado, Nick and others have done on systems like Ethane is an exemplar of this approach. To the extent that the management costs are cheap enough (i.e., extracting policies from knowledge about user roles and service roles), this seems like a powerful approach in the enterprise network environment. However, it doesn’t extend well to the Internet (big-I), since no one should feel confident that they know anything very well about a couple billion users and their machines. Moreover, even in the enterprise, none of this is going to do jack about drive-by downloads, spam/phishing, people executing software with attached malware via p2p networks, etc., since the vectors are completely compatible with normal use. Again, we aren’t going to get rid of crime by fixing the highway.
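As a toy illustration of the “strong invariants” point (this is not Ethane’s actual policy language, and the roles and services named are hypothetical), an enterprise controller with good knowledge of users and services can default-deny connectivity and permit only the flows a role is authorized for:

```python
# Toy sketch of an Ethane-style connectivity decision in an enterprise network.
# Roles and service names are hypothetical illustrations.

ALLOWED = {
    "engineer":   {"source-repo", "build-farm", "email"},
    "accountant": {"payroll-db", "email"},
    "guest":      {"internet-gateway"},
}

def permit_flow(user_role: str, service: str) -> bool:
    """Default-deny: permit a flow only if the role is explicitly authorized."""
    return service in ALLOWED.get(user_role, set())

# e.g. permit_flow("accountant", "source-repo") -> False
```

The hard part, as the comment notes, is not this lookup but keeping the knowledge of roles and services accurate and cheap to manage, and none of it helps when the attack vector looks like normal use.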
However, just because there may not be some new Internet architecture that will render us all substantially more secure, that does not mean that there isn’t a place for new Internet infrastructure in support of _security research_. Now, this is typically taken to mean testbeds in which experiments can be used to evaluate new network-layer mechanisms under controlled circumstances, and this kind of facility is useful for a range of important work (e.g., testing DDoS defenses). Indeed, we have a number of such facilities today for this purpose (e.g., DETER and the upcoming NCR). However, I think this is only part of the need, and we do ourselves a disservice by focusing on it so single-mindedly when talking about the needs of security research.
In particular, I think we tend to over-focus on security as a technical problem and under-appreciate just about everything else. Thus, there is a commonly held fallacy that security research should be, in its entirety, either
a) like math: if you do it right you get the right answer; everything would be secure if only people did it the “right” way
b) like a hard science: the truth is out there in some platonic form and our only real experimental needs are control, repeatability, etc.
In fact, there is a great deal of experimental security that is fundamentally much more like a social science. This includes basically anything that interacts with how real users behave or how real adversaries behave; it is largely observational analysis and is gated by the availability of data. It is this “soft” work that, in my opinion, really requires more infrastructure support. I haven’t personally encountered much good security research being held back for lack of a modest-scale controlled infrastructure in which it can be tested, but the amount that is completely unknown about the behavior of attacks and of users under attack (or users not under attack, for that matter) is just tremendous. An instrumented live network infrastructure that would allow real defense mechanisms to be observed in situ, with real attacks and real users, would offer a huge leg up for many researchers. In short, “reality” trumps “control” for many security research efforts.
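As one small example of what such observational instrumentation might look like (not something proposed in the comment above), the sketch below passively records flow-level metadata with the third-party scapy package; the interface name is an assumption, and capturing on a real network raises exactly the privacy and policy questions discussed in this thread.

```python
# Minimal sketch of passive, flow-level observation (the "reality over control"
# style of measurement): log who talks to whom, not payloads. Requires scapy
# and capture privileges; "eth0" is an assumed interface name.
from scapy.all import sniff, IP

def log_flow(pkt):
    if IP in pkt:
        ip = pkt[IP]
        # Record only metadata (addresses, protocol number, size), not contents.
        print(f"{ip.src} -> {ip.dst} proto={ip.proto} bytes={len(pkt)}")

sniff(iface="eth0", prn=log_flow, store=False, count=1000)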
However, this kind of use faces a number of hurdles. In particular, it requires the sponsor to have a significant “backbone” in support of the work when legal and extra-legal challenges emerge. What happens when they get served with an RIAA letter? What if someone is writing a web crawler that visits the seamy parts of the net and it ends up slurping up kiddie porn along the way? What happens when your system for infiltrating botnets is “outed” by someone else and your network gets a retaliatory DDoS from the perps? What happens when a participant creates a blacklisting service that becomes popular and NSF gets sued by a site that has been so listed? This kind of stuff generally makes government agencies fairly nervous, and I think there really would need to be both a change in mindset and high-level political support to let such an agenda go forward; a topic for another time, perhaps.
First, I think we can set Henning’s mind at rest that GENI (and its friends and alternatives) is not widely being sold on the basis of security (although, unfortunately, security does seem to make good headlines).
There has already been much debate about the weakness of this argument. After all, it would be a pretty easy sell (but misguided) to imply that fixing the Internet plumbing will somehow plug the security holes in your operating system, or stop human users from doing unsafe things. To paraphrase Stefan, making the highway safer won’t help us stop the bank robbers.
On the other hand, I think there is pretty broad consensus that the Internet infrastructure isn’t the way it is because it is right forever more; it’s this way because it can’t change and adapt. We can wait for half a dozen vendors to bring forward their next product, or until a few dozen people in a standards committee mull on it for a few years. Or we can put tools and platforms into the hands of thousands of researchers, and hundreds of thousands of open-source developers worldwide, to see what they come up with. For sure, some of their ideas will be crazy, dangerous or just plain silly. But my money is on the brilliant few, who maybe have never taken a networking class, and will come from left field to show all the experts that there is a better way to plan, build, or manage a network. The open-source community has shown this power time and again – but normally they get to use a $500 PC. Put into their hands the means to improve the Internet, stand back, and we will see ideas for networks that are more reliable, cheaper and easier to manage, faster, less susceptible to congestion, with better access control (in networks where it matters), and maybe less vulnerable to DDoS… or just better in ways we can’t even imagine.
We can each have at most a handful of new ideas per year. But if we can help enable tens of thousands of people to test their ideas, then they can bring about change we could never achieve on our own. Trying to figure out how to do that keeps me passionate about networking.
Focusing only on security when asking “Do We Need a New Internet?” does a disservice to security and to the question of the Internet. The answer may well be that yes, we need a new Internet, but the security question will remain. Putting it succinctly: if you give me the perfect high-speed, flexible, powerful network, and you attach vulnerable hosts and software that makes it easy to fool users into making poor security decisions, you have… an even worse security problem than you did before.
Security is a game of the weakest link. There are no silver bullets. We see this in today’s networks: if I can’t attack your IP stack, I’ll send you an email with a malicious attachment. If that doesn’t work, I’ll convince you to download a malicious screensaver. I’ll leave free USB keys with trojan horses on them outside the bank or military base. If you want a network that allows nearly any kind of functionality to be implemented atop it—and we do—then you need hosts and applications that are vastly more robust than anything we’ve built to date. Security is as much a problem of user interface (cf. Lorrie Cranor’s work on security and usability); of economics and incentives (ask Bruce Schneier); of formal techniques (model checking and theorem proving); of pragmatic bridges between theory and heuristics (Dawson Engler’s company Coverity); of programming languages; of operating systems (cf. last year’s three Ph.D. graduates who all worked on information flow control—Krohn, Zeldovich, Chong); and of network architecture.
(And many other problems — assertions to the contrary notwithstanding, today’s Internet does not guarantee anonymity to those who most need it—whistleblowers, those subject to coercive regimes, etc.—even while it does make it extremely difficult to track down those who can use stolen credit cards or compromised machines to launder their own access.)
Unfortunately, computer security _can_ be improved with some help from the network. Administrators can configure firewalls to block known bad sites or unused protocols. I call this unfortunate because it creates a tension: would you like security, or would you like the fundamental flexibility and openness to new protocols that made and makes the Internet so successful? In a sense, instead of asking how to redesign the Internet for security, we might be better off asking: how can we protect the Internet from the stack of point security solutions that vendors, administrators, and lawmakers try to put atop it, while providing better robustness to malice?
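To make that tension concrete, a typical point solution is a default-deny filter like the toy sketch below, which improves security by blocking everything except a fixed allowlist of protocols, and in doing so also blocks any new protocol before it can prove itself; the port list is an illustrative assumption, not a recommendation.

```python
# Toy default-deny packet filter: the kind of point solution that blocks
# "unused" protocols at the cost of blocking new ones too.
# The allowlist below is an illustrative assumption.

ALLOWED_TCP_PORTS = {25, 53, 80, 443}  # SMTP, DNS, HTTP, HTTPS

def permit(proto: str, dst_port: int) -> bool:
    """Permit only known protocols on well-known ports; drop everything else."""
    if proto == "tcp" and dst_port in ALLOWED_TCP_PORTS:
        return True
    if proto == "udp" and dst_port == 53:  # DNS
        return True
    return False  # a brand-new protocol on an unfamiliar port never gets through
```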
The second point is that focusing only on security ignores the many fundamental problems the Internet does have. One of them is a facet of security—in particular, availability and robustness, both against malice and accident. Another is, as Nick mentioned above, the ability of the network to change over time with as little global agreement as possible. Perhaps mobility; perhaps scaling to hundreds of billions of connected devices; the cost and complexity of managing networks; improved support for efficiently meeting service-level agreements; the challenges of providing global services that cross cultural, legal, and international boundaries; the list goes on.
Improving the security of our global collection of computers and networks is critical. Improving our understanding of how to build networks—in and out of the context of today’s Internet—is also. Conflating the two risks solving neither of these fundamentally hard, extremely important problems.