Q&A: Fred Schneider
Fred Schneider, a professor in the Department of Computer Science at Cornell University, Ithaca, N.Y., served as chair of the National Research Council panel that produced the report, "Trust in Cyberspace." He and members of the Committee on Information Systems Trustworthiness call for a new model of security in the report, the timing of which Schneider called fortuitous. The report "was not started with the expectation that it would help the critical infrastructure protection agenda, but that has become a national issue, as it legitimately should be," Schneider said. Schneider discusses report findings that challenge traditional views on the notion of security in a wide-ranging interview with Washington Technology Senior Writer John Makulowich.

WT: Is the report simply an extension of earlier work supported by the National Research Council, or is it breaking new ground in security for networked information systems?

A: The report says our society in the United States is becoming more and more dependent on these networked computing systems, and they are not trustworthy. More troubling is the fact that although they could be made more trustworthy by deploying some of what is known today, that is not being done. In fact, there are a lot of problems that need to be solved, and we do not know how to solve them. We lack the science and technology base. We do not build systems as trustworthy as we would need if they were controlling critical infrastructures. Yet the direction is that critical infrastructures are being controlled by just such systems. That's a troubling message.

WT: What are some implications?

A: There are two immediate implications. One is that we should mobilize additional research so we have the option to go in this direction. The other is that we should revisit whether we want to go in this direction at all. Some people immediately accept the view that we have to do the research, because otherwise we will be in trouble. The other possibility is that there will be an electronic catastrophe. It will make the news. Everyone finally will wake up, and we will simply legislate that the power grid cannot be connected to the Internet, for example, and, if it is, you get fined. Or that the phone systems cannot be connected to the Internet. A lot of the problems people have with trustworthiness will go away, but at a cost of impeding growth or progress in some ways.

WT: What are some of the key issues surrounding trustworthiness?

A: One issue is that trustworthiness is holistic. It involves a lot of interacting properties that traditionally are dealt with by different subfields of computer science. The report is saying you cannot split up the problem that way and expect to glue solutions from each of those fields together and wind up with a solution to the trustworthiness problem. There are very few individuals prepared to take this big-picture view, who think about fault tolerance and security at the same time. Even if you are prepared to say, well, fine, we will just broaden who we have, if you look at academic computer science research, there are very few people who worry about the kinds of elements that comprise trustworthiness. There are precious few people in academic computer science who do security. Those who are going to do the research are not out there. Although interest in security is certainly growing, very few people are actively working on it relative to other areas.

WT: Tell us about the new model of security called for in the report.
A: First, the nature of the problem has changed. In the 1960s and 1970s, it was reasonable to think about having interacting components that trusted each other. If you had only one central processing unit and were doing time sharing, then obviously it trusts itself. In the 1990s and the next decade, the model is a lot of interacting, mutually distrustful components.

There also is a change in mind-set from risk avoidance to risk mitigation. You cannot possibly think about building systems that are absolutely secure. Not only is it expensive, but it is probably not what you need. You only need a level of trustworthiness appropriate for what you are doing. Some systems will need to be more secure, fault tolerant and, in general, trustworthy than others, depending on the value of the assets they protect. Any theory or approach to the problem that is useful to the people building infrastructures would have to be one that talks about mitigation, about getting as much as you need and understanding what it costs and what the benefits are. This thinking drives one to abandon the so-called Orange Book style of security [the Department of Defense Trusted Computer System Evaluation Criteria; written for military systems, its security classifications are sometimes used in the computer industry] and think about different ways of building security.

WT: What are some of the ways in which the committee pictures proceeding in building security?

A: One way is based on what might be a conservation law, though nobody has been able to articulate it or prove it. That is to say, there is some amount of insecurity in whatever you are doing. The simple example used in the report is about two parties communicating who are worried about wiretappers intercepting the communication. They could send the message in the clear. The wiretapper would see it. There would be insecurity, and the message [would be] compromised. What they do instead is encrypt the messages. That way the wiretapper cannot understand what is being communicated.

But if you look more carefully at that system design, they did not really increase the security of the system. To do the encryption, the two end points needed some kind of keys, and they had to share those keys. Presumably some diplomatic courier carried the keys from one place to the other. If the courier got compromised, the fact that you are encrypting messages does not really matter much. What you did was move the insecurity (that is, the confidentiality issue) from being associated with the messages being sent to being associated with the key distribution. The reason that makes sense is that you think either the key distribution is less vulnerable, or the threat you are worried about is not as effective against the courier as it is against the messages in the air.
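To make the courier example concrete, here is a minimal sketch in Python using the third-party "cryptography" package; the package and the message are illustrative choices, not anything named in the report. Encrypting the traffic leaves the ciphertext safe to intercept, but relocates the entire confidentiality burden onto however the shared key travels.

    # Sketch of the example above (pip install cryptography).
    from cryptography.fernet import Fernet

    # The shared secret: everything below is only as secure as the channel
    # this key travels over, the "diplomatic courier" of the example.
    key = Fernet.generate_key()

    sender = Fernet(key)
    token = sender.encrypt(b"attack at dawn")  # ciphertext; a wiretapper may see it

    receiver = Fernet(key)                     # receiver must already hold the key
    assert receiver.decrypt(token) == b"attack at dawn"

    # If the courier carrying `key` is compromised, the intercepted `token`
    # is readable too. The confidentiality vulnerability moved from the
    # message channel to the key-distribution channel; it did not vanish.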
WT: What can we generalize from that?

A: The view that whenever you build a system, there is no absolute notion of security, only security relative to an adversary, a threat: your perceived view of who is going to attack you and what resources they have available. You look at a system and figure out what its vulnerabilities are. All systems have vulnerabilities, because all systems involve assumptions, and assumptions translate to vulnerabilities. By violating an assumption, you get the system to behave in a way it was never intended to. So look at a system, understand its vulnerabilities, look at what your threat is, and then try to move the vulnerabilities that are attractive to your attacker to places where the attacker cannot get much from them.

WT: So shifting vulnerabilities is the key?

A: There is a conjecture among the committee that one could look at most security mechanisms as techniques for moving vulnerabilities around, in the same sense that there are financial instruments that move risk around. They do not create it or destroy it. They just move it to places where you think you understand it better. We are in a world where virus detectors really work, firewalls really work, and intrusion detection really helps, because it helps you find out what is going on in your system. The security theory of the '70s argues those things do not get you absolute security, so you should not bother with them. But that is not a good reason not to do it. If one were able to develop a new doctrine based on moving vulnerabilities around, we believe the doctrine would allow people to justify defense in depth, where each particular level of defense is not perfect. It would allow people to justify risk mitigation instead of risk avoidance, because it would let you talk about how much vulnerability you are prepared to tolerate, given what the threat is.

WT: What are some of the factors driving this new view of security?

A: It is partly driven by the fact that we want a way to account for the success of defense in depth, one that could justify adding layers and say what they add. When you engineer a bridge, you have some idea of what adding the next extra girder to strengthen it will do. You know whether it is worth it. We do not have a way to know whether to add a firewall or not. What does it add? What does it take away from the picture? There is no way to justify that. And we need to.
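The girder analogy suggests the kind of accounting the committee wants. A toy calculation, with hypothetical stop rates and an independence assumption the interview does not make, shows what adding one more imperfect layer buys:

    # Toy defense-in-depth arithmetic. Stop rates are invented numbers and
    # layers are assumed to fail independently; the point is only the style
    # of accounting, not a real risk model.
    layers = {
        "firewall":            0.90,  # assumed to stop 90% of attacks
        "virus detector":      0.80,
        "intrusion detection": 0.70,
    }

    residual = 1.0
    for name, stop_rate in layers.items():
        residual *= 1.0 - stop_rate
        print(f"after {name:<20} residual risk = {residual:.3f}")

    # Prints 0.100, then 0.020, then 0.006: the third imperfect layer still
    # cuts residual risk by more than a factor of three. That is the kind of
    # "what does the next girder add" statement the new doctrine would allow.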
WT: Were any priorities assigned to items on the research agenda?

A: The committee did not assign priorities to them. The committee found the commercial sector does not have a good track record with longer-term research questions, nor incentives to engage in them. The question of who is best suited to address one issue or another is likely to come down to whether it is longer term or shorter term; companies are driven by the profit motive.

However, Java is an interesting example, because [Sun Microsystems is] in the process of evolving the Java security architecture. JDK 1.2, which recently was released, has a substantially different security architecture than the sandbox that came out originally with the Java tool kit. That level of innovation is something I would not have been surprised to see come from a university. On the other hand, this is their business, and if there is not confidence in the ability to write Java systems that are secure, then there is no incentive to use Java.

WT: What are some of the research areas that could bear fruit in the three- to five-year horizon?

A: In my view of the world, the three- to five-year time frame is really advanced development. Here you find all the issues associated with the ability to successfully deploy cryptography in networks. There is debate in Washington about public policy vis-à-vis cryptography. One of the committee findings was that cryptography is critical for protecting networks. Yet, in addition to whatever political impediments there are, there are some technological impediments. We do not know how to build and manage a large key infrastructure. We do not really know how to get things to interoperate, given that they are communicating over encrypted channels. We do not know how to manage routing and quality of service when you cannot even look into the headers of the messages.

Cryptography and deployment of cryptographic infrastructures are areas where work is needed urgently, and they have a relatively shorter time frame. The other shorter-term item concerns quality-of-service issues. That really harks back to getting cooperation among mutually distrusting entities, sorting out the way the routing protocols work, say, in the Internet. They are simple. They are elegant. But they do not guarantee certain things one would expect to have if you are going to build a critical infrastructure on top of the system.

WT: How about the research agenda in the longer term, in the 10-year horizon?

A: That would cover the area of what security is going to look like, about moving vulnerabilities around. In the report, we discuss a new approach to security based on the use of languages. In the old days, people felt security had to be done in the operating system. That way it could be in a piece of code that was well understood, and everything would pass through that piece of code in terms of execution. In the 20 years since that was articulated, we have learned a lot about how to analyze programs. We now can implement all kinds of security policies by looking at a program before we run it. There is a new field growing, called language-based security, where programming language techniques are being brought to bear on security issues. As a community, we only realized it existed a year or so ago. It is fairly new, so it is speculative whether it is going to work out. But there are a lot of very interesting results on what you can do with language-style analysis of programs as a way of getting security, especially the efficiency and the fine-grained control you can exert when you are analyzing a program, as opposed to doing things at the level of operating system abstractions. In an operating system, you can worry about files and executing programs. Within a program, you can worry about accessing a particular variable and doing the right thing to it.
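As one illustration of the idea, a checker can walk a program's syntax tree and reject it before execution, at a granularity finer than anything the operating system sees. The sketch below, in Python with its standard ast module, enforces a made-up policy (no calls to open, exec, eval or __import__); it is a toy, not the report's proposal.

    # Toy language-based security: analyze source before running it.
    import ast

    FORBIDDEN = {"open", "exec", "eval", "__import__"}

    def forbidden_calls(source):
        """Return names of forbidden calls appearing in `source`."""
        hits = []
        for node in ast.walk(ast.parse(source)):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in FORBIDDEN):
                hits.append(node.func.id)
        return hits

    untrusted = "data = open('/etc/passwd').read()"
    problems = forbidden_calls(untrusted)
    if problems:
        # Rejected before execution, at the level of a single call site,
        # rather than after the fact by an OS-level permission check.
        print("rejected:", problems)
    else:
        exec(untrusted)

The same pre-execution analysis is what makes the approach attractive for the mobile and foreign code discussed next, where you cannot inspect the author, only the program.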
WT: Does commercial, off-the-shelf (COTS) technology come into the security picture?

A: There is this whole question about what to do with COTS. The COTS world is not going to change. We need to understand how to build a system where some of the internals of the components are not known, where you do not even know the process used in constructing them. You may not have a good specification for a component, because COTS producers often expect users to discover features as a way of learning their system, rather than writing extensive documentation.

WT: What about plug-ins and add-ons?

A: This is what we refer to as mobile or foreign code. This is new and growing, and some of the language-based security has its best application in that setting. In fact, it has grown up in response to those problems. This raises the whole question of how you build a system and convince yourself it is going to be secure, even though you do not know what the system will look like a year from now, because somebody will have added plug-ins in the field. There is considerable incentive for system manufacturers to build systems that are extensible. They maintain their market presence, and they keep their customers locked in. You provide a way to add functionality that might not have existed at the time you wrote the system. So there is a commercial reason for people to deploy extensible systems. Yet we do not know how to design them in a way that protects them from errant extensions.

WT: We use the term critical infrastructures to refer to energy distribution, transportation, communication and finance. Is that the extent of it?

A: It is not clear what the definition of a critical infrastructure is. Who would have thought that a Web browser would be part of a critical infrastructure? Yet, since everybody knows how to interact with the Web, it is the obvious user interface for a system you are building. What is happening is that components are being deployed in contexts they were never intended for. The manufacturer may write a disclaimer, such as "not to be used in life-critical situations," but that does not prevent some engineer somewhere, who has to get his job done, from actually using it. Where things become critical is not cut and dried. It is part of the current national dialogue.