Cyber War!

Interview: Scott Charney

Scott Charney is chief security strategist of the Microsoft Corporation. In this interview he discusses the nature of the threat of cyber war, the measures Microsoft has taken to improve security, the issues of regulation and liability laws to enforce security, and the newly defined partnership between government and the private sector to secure cyberspace. This interview was conducted on March 20, 2003.

Define your role at Microsoft.

I think strategically about how to better secure products, services and systems. How do we make our services, like Passport or MSN, more secure, and then also how do we protect our own networks better?

Let's talk about the overall picture first. How have the security threats grown over the past few years?

Let's look at the historical perspective. The predecessor to the Internet, the ARPANET, was built as a military communications network. What that means in practice is that the users of that network were a trusted user group. It was the U.S. military, government contractors working for the U.S. military and academic researchers working for the U.S. military. So it was built to be functional, but with no security.

Then in the 1980s, IBM came out with the first commercially viable PC, and the government said, "The Internet should be a public resource." Suddenly you had everyone flocking to the Internet, and it had no embedded security. As our general population became more computer-literate, there was a segment of that population that said, "Wow, we can abuse these computers and do bad things."

What we've seen is a constant growth in computer crime and attacks on computer networks. So the security problem has escalated considerably over the last few years.

Lessons learned from events such as Code Red and Slammer?

There were a lot of lessons learned. There are two primary ones: One is that vendors have to do a better job of building secure software and systems. Two, that users of technology have to do a better job of securing their networks. So it's really a shared responsibility.

How does Microsoft view its responsibility, especially to critical infrastructure?

Microsoft believes it has a humongous responsibility in this area, in large part because our products are so widely used and support so many systems. So we have a very large responsibility, and we take it very seriously.

How do you view the threat of cyber war?

I think it's important to understand that, historically, we have not seen cyber terrorism attacks, and I think there may be some reasons for that. First and foremost, it's not as easy to take down the Internet as some might believe. There's a lot of redundancy, a lot of resiliency in the system. The second thing is you have to think about the motives of the attacker. An attack on the Internet will not yield the kind of graphic pictures that you saw, for example, on 9/11.

The other thing to remember, of course, is that when you attack the Internet, a lot of the harm so far has been economic. The economy has absorbed a lot of that harm, and it's actually reconstituted itself fairly quickly. So the question is: Is this really a good target for terrorist activity? Most of us in the field are more concerned that someone would have a targeted terrorist attack coordinated with a physical attack.

So if, for example, someone had attacked the Internet or Verizon 10 or 15 minutes before the planes hit the towers, it could have made emergency response all that much more difficult, and created a bigger sense of chaos.

So the whole view of needing to worry about a cyber Pearl Harbor attack -- what's your take on that?

One of the things we learned from 9/11 is that what we previously assumed were normal risks -- it turned out we weren't thinking far enough ahead. So I think we do have to pay attention to these kinds of emerging threats and be sensitive to them. But I still think that a broad, sweeping global Internet attack with long staying power is not our number one threat today.

This is an issue of national security. Many [who] are totally tied into the Internet are vulnerable. But it's an odd sort of issue, because the government doesn't control the infrastructure -- roughly 90 percent of it is controlled by the private sector. What are the private sector's responsibilities on these questions?

The private sector clearly has to partner with government to protect the Internet. One of the interesting things that happened in the 1990s is the government said that the private sector is the group that is designing, deploying and maintaining these infrastructures. We don't want to regulate that, because it might stifle innovation. It's been a huge economic engine for growth and jobs.

So industry, in response, said, "That's right, these are our infrastructures, and we will secure them. Our business model requires that we secure them. If you're a phone company, for example, if you don't have a dial tone, then you won't generate any revenue." So there is an industry interest in securing these infrastructures.

The really interesting point, though, is that what we've essentially done is delegate public safety and national security to market forces. In fact, markets are not designed to do that. It is true that markets will provide a level of security, but they may not provide enough security to protect against low-probability -- but potentially very damaging -- threats.

So what industry and government now have to do is figure out how much security you'll get through the marketplace, then figure out how much security we need to protect public safety and national security. The government and industry have to work together to bridge the gap.

To some extent, that has started with Dick Clarke's report on cyber security -- but before I go into details on that, what was Microsoft's involvement? How were you guys involved with the report and with the debate in Washington over those issues?

To be clear, it actually started before that, with the President's Commission on Critical Infrastructure Protection. But with regard to both -- the national strategy, in particular -- Microsoft participated by responding to the questions that Dick Clarke posited to industry. We worked with industry groups to comment on the national strategy, and we commented to the White House on the strategy as well.

The critics say that the cooperative partnership is just not enough, that the industry will not respond to the security needs that are necessary at this time, post-9/11, especially with a bad sort of economic atmosphere out there.

Let me say a couple of things to that. To the general criticism that industry won't respond, Microsoft is living proof that that's just not true. I mean, we had 11,000 developers stand down as part of our security pushes on products. We invested over $200 million in securing Windows Server 2003. We have our trustworthy computing initiative. Industry is responding.

The other broader question, too, is: What does the government do that's actually productive? I mean, if the government were to come out and say, "Let's regulate," and say, "Thou shall have good security," what does that mean? How does it get you there? Is regulation really an effective way to get where we need to go? To what extent will regulation stifle innovation? Because if you tie down industry and say, "This is what you must do," then you also tie down the technology.

So I think there are a lot of reasons not to go in a regulatory fashion. Moreover, if you look at what companies like Microsoft and others are doing, they're making huge progress.

Some people, though, will say it's not progress enough, it's not quick enough. It's not only Microsoft, it's not only software companies. It's people with infrastructures who are not securing SCADA systems and the like. Because the threat is so great, it has to move quicker, and the carrot without the stick just doesn't move things along quickly enough.

There actually is the stick: If you don't move along quickly enough and do a good job, there will be regulation, and then other bad things may happen as a result. So I think industry is very sensitive to that issue.

We do have to move quickly, no question about it. But we also have to move deliberately. We have to be careful that we do the right things, and we get the appropriate return on investment for our effort.

Tell me about the trustworthy computing initiative. What is it?

The trustworthy computing initiative launched on Jan. 15, 2002, with an e-mail from Bill Gates to all employees, saying that we were going to focus on trustworthy computing. I think the genesis of that is the fall before, when you saw the attacks of 9/11, which made people reassess risk, and also things like Nimda and Code Red, which showed how critically important it was for us to secure systems and build more secure software. So that's really the genesis.

This initiative is a company-wide focus on four real attributes: building software and services and systems that are reliable, secure, and private, and that are built by a company with business integrity. Those are the four pillars of trustworthy computing.

So you shut down for a couple of months. What happened? What did you do?

The biggest security push was in the Windows platform. We had 8500 developers stand down. The first thing they got was training on writing secure code, based on a book by Michael Howard and David LeBlanc which is publicly available. Programmers historically had learned to program for functionality, not security.

Then we did extensive code reviews of the entire code base. That was followed with threat modeling, where you look at your code and figure out how bad guys would attack it. Then finally, that was followed with penetration testing, where you actually attack your own product.

That process, the security push -- we thought it would take four weeks. The standdown actually lasted 10 weeks.

So you basically red teamed your own product?

Absolutely. We red team our own products, but we also do it at three different levels. We have the product group attack their product. That's good, because they know their product. But it can be bad, because they know their product; they don't think outside the box.

The other thing is that when they report a vulnerability or a security problem, they report it up the same chain of command that needs to ultimately ship the product and make the ship decision. So we have a second group do penetration testing. That group reports to me. I'm in a cost center, not in a product group. Then we also hire outside parties to come in and attack our products as well.

So every new product that will go out will be tested?

Absolutely. We will do a security push on every product. What that means is the product development cycle will be longer, but it will be longer in the name of security.

What has come about as a result of the security initiative? Are we better off? And by what measure do you judge this?

At the end of the day, the biggest measure is: How many vulnerabilities are being reported in our products, and how severe are those vulnerabilities? But we have other, more short-term measurements.

For example, we recently released information about a security vulnerability in IIS, the Internet Information Server. One of the things we did is go back to look at Windows Server 2003 and see if that vulnerability was found and fixed as part of the push. The answer is yes, it was. So we're seeing that the push is having concrete results.

Critics add that there's a large percentage of Microsoft [software], for instance, that's written offshore, and that that leads to problems. How is that viewed?

There is concern about what they call "offshore code" -- not just in Microsoft, but throughout the industry. The question is: Do you have quality assurance built into your process? And we do, so that code gets reviewed by people. It gets checked. It gets tested. That's really what you have to do.

Is all code that comes in reviewed? The material that is brought from offshore -- is all that code then reviewed?

All code is reviewed, including code made domestically. You have to review all code, because you want to make sure it functions in expected ways, and you want to make sure it's secure. So all code gets reviewed, no matter where it comes from, even if it's developed right here in Redmond.

But people still say that you've got millions of lines of code, and it's just impossible to review all that. How does one do that?

Well, actually, when you develop software, you don't think of reviewing the entire code base as one exercise. Code is modularized and compartmentalized. So different pieces of code do certain things, and you work on a modular level.

Is it your belief that the process that Microsoft uses to review code, to go through code once it's written, is pretty near perfect?

Yes, in terms of making sure that the code is what it's intended to be. Now, as you can see with vulnerabilities, sometimes you design code and you think you've done a great job, and it still turns out to have a vulnerability. In fact, some recent vulnerabilities have been in products like Sendmail, which has been around for 15 years. If you look at the vulnerability in the SNMP protocol, it was found by a university in Finland 20 years after the protocol was out there.

So when you do your quality assurance, what you're talking about -- whether somebody planted code that doesn't belong there -- is one kind of assurance that you have to do. The other, more complicated thing -- because programming is part art and part science -- is ensuring that there are no security vulnerabilities in the code, just due to programming error or the way the code interacts.

Our goal is to dramatically reduce the number of vulnerabilities. But I don't think anyone at Microsoft would tell you that we expect zero vulnerabilities in the next product.

How did that create a problem? Why are those vulnerabilities, and trying to find them, such a problem today?

It is hard to find vulnerabilities in code, in part because the systems we build are fairly complex, and they have a lot of very rich functionality. People will attack the code in ways that you didn't anticipate. One of the things that we're trying to do to secure it, as part of the security push, particularly with threat modeling, is anticipate how people would attack the code. But historically, what we've seen is that you can try and build really, really secure code, and over time, things will be found.

One of the criticisms is that maybe it's not moving fast enough, because the vulnerabilities can cause the hackers to gain access. What is the real situation?

One of the challenges is, how long do you work on the security features before you ship the product? One of the things that we did with Visual Studio .NET and Windows Server 2003 is, we actually delayed the shipping of the product because of security reviews.

So we are taking more time to get security right. As I said earlier, it's not likely or expected that we'll release a product with zero bugs, but you do want to release reasonably secure products. Fortunately, in a way, with the dot-com bubble bursting, Internet time has disappeared. There was a period of time when everyone had to rush everything to market right away; it was almost suicide not to.

But in today's environment, we do have the flexibility of delaying product ship for security, and we're doing that. That's part of our commitment to security.

Some people will say that one thing that's called for is that code developers will need background checks. Is that something that Microsoft does? Is it necessary?

Microsoft does do background checks on some employees. It's actually a very difficult issue for a host of reasons. Some government and industry organizations, like the National Security Telecommunications Advisory Committee, are addressing the issues of personnel security.

It has to be remembered that, in some countries, the kind of public information that one can get is very limited anyway. Globally, there's no set standard. How much information you might actually get from a background check varies greatly from country to country.

Wouldn't that argue for, again, not having code written offshore, because it's a problem in figuring out who the people are that are writing the code?

I don't think you can make the presumption that if you can't do a background check on someone, they're evil. I mean, as a practical matter, the real challenge, and the real necessary step is to do good quality assurance.

I guess the nightmare fear out there is somebody, an Al Qaeda person, putting trap doors into software, and if that were to happen, there are no controls at this point to stop something like that from happening.

To say "There are no controls" is a bit of an overstatement. There are controls in place. The other question is: Is that going to be an effective way to do something? So if you put a back door into a program so it basically has a vulnerability, what happens when that vulnerability is exploited? Like we've seen with other exploits, patches are built, the systems are patched, and the problem is fixed.

Let's talk about patches. You supply patches when the vulnerabilities are found. But they're often not used. Some people say it's too hard to use them. What is the situation with patches?

First of all, it is true that patches are very difficult to use today, and we have to improve the patching process. It's kind of interesting to see why patch management is broken today. I've spent a lot of time on this issue. One of the things that actually makes Microsoft a great company is that it's decentralized and empowered.

But what happened was, when products needed patches, the product group had to make the patch. Some product groups would build an installer; some wouldn't. Some would have their patch register with the operating system; some wouldn't. You ended up with a lot of different technologies in place.

It's actually not a technology problem. The fix is really a business fix. So now we have a patch management working group, and we're coming up with standards for how we're going to deploy patches to make it a lot easier to deploy. We do have to make it easier for our customers to remain current.

How long is that going to take?

You're going to see major progress within this year alone, because some of the problems have been known for a while; we just needed to get the developer teams galvanized and start roadmapping the work and getting it done. So today, for example, Microsoft has eight different installers for patches. Within a year, we'll be down to two installers: one for the operating system, and one for applications.

Some people say that, for instance, the baseline security settings in Windows 2000 take a staggering amount of work to protect, and it's used in a lot of infrastructures. They basically say it takes something like a 50-page book just to define the settings you have to set up. I talked to one SCADA engineer about this. He says the normal practice for folks out there in power plants is to use it out of the box. Where does that leave us?

One of the things that that question highlights is the need for what we call "security usability." We have to make security a lot easier to use, and you see a lot of attention to that detail. So with Windows Server 2003, when you pull the product out of the box, you can tell the product it is going to be a file and print server. By telling the product that, it knows that it's not a Web server, so port 80 should be closed.

In the old model, you look at the configuration guide and say, "I need to turn off that port." In the new model, it will self-configure for security by default. We have to have the machines do more of the work, and that requires us to innovate around security.
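
To make the role-based "secure by default" idea concrete, here is a minimal illustrative sketch -- not Microsoft's actual configuration code; the role names and port numbers are hypothetical -- in which everything starts closed and only the ports a declared server role needs are opened:

```c
/* Illustrative sketch of role-based "secure by default" lockdown.
 * Not Microsoft's actual configuration code; roles and ports are examples. */
#include <stdio.h>
#include <stdbool.h>

enum server_role { ROLE_NONE, ROLE_FILE_PRINT, ROLE_WEB };

#define MAX_PORT 1024

static bool port_open[MAX_PORT];   /* zero-initialized: every port starts closed */

static void configure_for_role(enum server_role role)
{
    switch (role) {
    case ROLE_WEB:
        port_open[80] = true;       /* HTTP only; everything else stays closed */
        break;
    case ROLE_FILE_PRINT:
        port_open[445] = true;      /* SMB file sharing */
        port_open[515] = true;      /* LPD printing */
        break;
    default:
        break;                      /* no role declared: nothing is exposed */
    }
}

int main(void)
{
    configure_for_role(ROLE_FILE_PRINT);
    printf("port 80 open?  %s\n", port_open[80] ? "yes" : "no");   /* no  */
    printf("port 445 open? %s\n", port_open[445] ? "yes" : "no");  /* yes */
    return 0;
}
```

The design choice the sketch captures is simply inverting the default: instead of shipping with every service enabled and asking administrators to turn things off, the product asks for a role and enables only what that role requires.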

What do we do about the situation that a lot of these very critical infrastructures are using the old software? Is it irrational fear that these infrastructures are vulnerable because of this? What do we do about it?

Well, there are three things that people can do to patch legacy systems and make sure that their security settings are current. The first, of course, is there are configuration guides, which tell them how to manage their machines. The second thing is we have built lockdown tools, tools that you can run to make sure you're locked down.

And three, of course, you can also use integrators and consultants who can help you find a baseline and lock down your system. I mean, because the products have a lot of functionality, they also do have to be managed. But I think now that there's all this attention paid to security, you will increasingly see tools designed to help manage the security of the products including legacy systems.

What people are telling me now is that it's not happening yet -- that it's just not an issue for them. For the guys involved in that type of infrastructure, it's the last thing on their mind. So what has to happen? What's Microsoft's role in that?

Microsoft is building those tools. The configuration guides were actually a community effort with the Center for Internet Security and the NSA from the government. But when you say in your question that this is the last thing on a systems administrator's mind, then I know part of the problem is education of those system administrators. If security is the last thing on their mind, they're not paying attention to the threat model that we see today. So part of our responsibility -- we do a lot of public speaking, and we hold conferences and everything else -- is to tell them that this is a real threat. They need to focus on it.

There are a lot of documents, like the President's Commission on Critical Infrastructure Protection report and the new national strategy, that focus a lot on the educational issues.

The other question is who has to play that role to sort of make sure that this stuff happens? Is it Microsoft? Is it the engineers involved in these infrastructures? Is it government?

Making sure that people are aware of how important cyber security is, is a shared responsibility of the government and the private sector. There are a lot of people in government who have been focused on these issues. There are many companies in the private sector that are focused on these issues. We all have to be out there, evangelizing computer security.

How quickly is that going to happen, and how easy will it be for those folks to accomplish what's necessary?

Tools to assess your security and lock down your systems are already available today. So for example, there's an IIS lockdown tool. There's the Microsoft Baseline Security Analyzer. Those tools are out there today. We need to make them more robust, and we need additional tools. We're working on that. But there are things that people should be doing today to make sure their systems are secure.

Why don't they know about it? Some of the people I've been talking to are SCADA engineers who have been before Congress, talking about this and complaining about Microsoft. They say there should be regulations -- regulations against the providers of the software, not the folks involved with the infrastructure. They complain that you guys are the problem, not them.

You'd have to ask them why they feel that way. Actually, I don't think assessing blame helps at all. We have a responsibility to build secure products and services, and we're doing that. Users of systems have a responsibility to run anti-virus, keep their firewalls up to date, patch when the patches come out, and to manage their systems responsibly. It's a shared responsibility, and we just have to treat it as such.

Slammer -- people say there was a patch out there, but that you didn't use the patches yourselves because it was so difficult, or whatever, and Microsoft got hit. What's the story there?

The truth of the matter is that when Slammer hit, there were patches available. The original patch, MS02-039, did not have an installer. Now we have a rule about providing installers with the patches, because we think having an installer makes them easier to install.

Managing complex IT infrastructures has always been hard. That's one of the reasons we have to simplify that process. But I also need to be clear: Almost all of our externally facing systems were patched. We have a lot of developers in this company, many of whom have direct access to the Internet. Not all of them were patched. And yes, we were affected by Slammer. But it's an overstatement to say we weren't patched. That's not quite right.

But the community out there sort of looks at it and says, if the guys who make it -- the guys who make the patches -- don't protect themselves completely, what does that say about the system itself, about the idea of trying to patch these vulnerabilities away?

I think that patch management today is more difficult than it should be. I think we have to do it better, and we're working to make that happen.

As we have increasing confidence in systems, as Microsoft puts out the next generations with fewer vulnerabilities and such, when will it come to the point where Microsoft will accept liability for failures in its systems?

The product liability question is actually a very difficult one. People tend to think, "Well, we should just impose liability for software that has some sort of vulnerability." The reason it's so difficult is: What does that regulation or liability look like, and can you deploy it fairly?

So, first of all, it would be completely inappropriate to say you should have liability if there's any bug in software, because that's beyond any reasonable standard. But also, in terms of fairness, there's a huge difference between the software industry, for example, and other industries with liability, like the automobile industry. There is no group of automobile manufacturers who give away cars for free. There is no open source automobile movement. There is, by contrast, an open source software community.

So if you were going to impose liability on the software industry, how do you do it in a fair way? Or are you only going to impose liabilities on companies that actually pay a lot of taxes and create a lot of jobs? Can you do this equitably? No one has answered that question yet.

Explain it to me. For instance when you go to a site that provides software for free, and you pull it down, you don't quite know when the vulnerability was inserted into the software?

No. If you look at the open source community, it's a community effort to build software that they give away for free. The problem is that if that software doesn't work the way it should, who do you sue? You don't even know who wrote the code. And the people who wrote the code, because they're not selling the software for profit -- where does the resource come to pay the liability claims? So the only issue is, if you're going to impose liability on a market segment, you need to make sure that all people in that segment are treated equally and fairly.

Some critics will say if software companies don't have this stick to be aware of, there's less likely a chance that people will pay attention to the security issue.

So what I tell them to do is look at what Microsoft has done since announcing the trustworthy computing initiative. What would you have us do as a company that we're not doing today? We're doing a security push on every product. We're building things that are secure by design, secure by default, and we're fixing patch management to keep you secure in deployment. If you impose liability in an effort to change our conduct, what is actually going to change, since we already have religion and we're doing the right things?

The other interesting thing is that if you impose liability, you have to ask if that's a cost-effective way to get where you want to go. When companies start paying liability claims and legal fees and everything that comes with it, where does that money come from? Well, you can raise the cost of the product, but that might be counterproductive, because one of the great things about software is how the price has been driven down so it can be available to everyone.

The second thing you can do is take it out of profit, which means it comes out of the investor's pocket. Or you can take it out of cost, perhaps by paying people less, and driving your best security people right out of the company. So one of the things that has to be figured out is if you do this, will you be incentivizing things that aren't happening today? How is this funded? Is it an effective way to fund security? People need to ask and answer those questions.

Why has it taken as long as it has for a secure-by-default policy to be initiated?

One of the interesting things was that markets were not demanding security. When I started doing cyber crime for the government in February 1991, and I started working the hacker cases, I would run around and scream about the need for more security from a public safety/national security perspective. But the truth of the matter is: Markets weren't demanding it, and people were still buying technology more for its functionality than its security. The education about cyber crime had not really reached its peak.

Now things have changed. There's finally synergy between markets, public safety and national security. Customers are demanding security. The threat model has changed, so secure by default is something that you can easily deploy now.

You need to understand that, when you put something in secure by default, you get less functionality out of the box, because services are turned off and ports are closed. So in a market that's demanding functionality, you enable everything by default. When everyone's demanding security, that's when you can do security by default.

Define "secure by default" for me.

Secure by default means that, when you take the product out of the box, or download it and first install it, it is configured for security, as opposed to having everything turned on for functionality. In the old days, you might load a product and every port would be open, every process would be running. Today, when you take the product out of the box, it does very, very little, and you have to decide what to turn on, and what exposure you're going to have.

Will every product that is now released be in that mode?

Yes. This is one of our core principles: secure by design, secure by default, secure in deployment. In Windows Server 2003, for example, more than 20 services are turned off by default.

Is Microsoft worried that if they do this step, people will start complaining about functionality, saying, "I've got to do this stuff to sort of turn everything on. What a pain in the butt?"

It's a concern, but we need to innovate around security. So when we released IIS Version 6 in beta and locked it down by default, customers called and said, "You broke our applications." We said, "No, you just need to turn on the service that that application needs." As we've matured in our approach, we now can build technology that enables the right things at the right time. So Windows Server 2003, if you tell it you're a Web server, or you're a print and file server, then the right things turn on, but everything else is left off. So we've got to innovate around security.

Yesterday in the Washington Post, there was an article about the latest vulnerability that was found. Can you define what that is, and why it's a problem? Is it a critical problem, or critical vulnerability?

Yes. The IIS vulnerability was listed as a critical vulnerability. A critical vulnerability is one that would allow the propagation of a worm or a virus, because it doesn't require user interaction. It's not like a user has to click and open an executable file. This was really a vulnerability where, if you put an extremely, extremely long string -- over 64,000 characters -- in the URL in the address bar, you could cause a buffer overrun. When a buffer overruns, you usually drop to a command line and can take control of the machine.

Of course, there was a workaround for it, but we also issued a patch.

So how big a problem would that have been for people with computers if they didn't realize that, if they didn't have the patch?

It would be serious. It would be critical, in the sense that it could allow for the propagation of a worm or virus. We've marked it as critical, and encouraged everyone to download the patch.

When you can overrun the buffer and take over the machine, you can then get the machine to execute any code that you want to execute. By taking over the machine, you can take information [that] you're not supposed to have off the machine, or you can cause the machine to send communications that it shouldn't send. So you basically own the box. That's why it's critical.
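
As an illustration of the general bug class described here -- a fixed-size buffer and an unchecked copy -- and not the actual IIS code, a minimal sketch in C might look like this; the buffer size and function names are hypothetical:

```c
/* Sketch of the general buffer-overrun bug class, not the actual IIS code.
 * If the input is longer than the buffer, the unchecked copy writes past
 * the end, corrupting adjacent memory; crafted input can hijack control flow. */
#include <string.h>

#define URL_BUF_SIZE 256

void handle_request_unsafe(const char *url)
{
    char buf[URL_BUF_SIZE];
    strcpy(buf, url);                 /* no length check: overruns on long input */
    /* ... parse buf ... */
}

void handle_request_safe(const char *url)
{
    char buf[URL_BUF_SIZE];
    strncpy(buf, url, sizeof(buf) - 1);  /* bounded copy: long input is truncated */
    buf[sizeof(buf) - 1] = '\0';
    /* ... parse buf ... */
}

int main(void)
{
    char long_url[100000];            /* far larger than URL_BUF_SIZE */
    memset(long_url, 'A', sizeof(long_url) - 1);
    long_url[sizeof(long_url) - 1] = '\0';

    handle_request_safe(long_url);    /* truncates; no overrun */
    /* handle_request_unsafe(long_url); would corrupt the stack */
    return 0;
}
```

The point of the sketch is simply that the flaw lives in the gap between the size the programmer assumed and the size an attacker can actually send.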

How was it found?

It was found by a customer, who reported it to us.

What does it say about the vulnerabilities now, the fact that something as serious as that was just found by a customer? What does that say about how much further we've got to go -- how much further Microsoft has to go?

We clearly need to go further in security. This was IIS Version 5. It was a legacy application. One of the good things is that we're now doing security pushes on our products. In Windows Server 2003, that vulnerability was found and fixed. So we're clearly going in the right direction. But we have a lot of legacy systems out there. We need to be vigilant, and when things are reported to us, we need to patch them.

If it was found and fixed in the new version, why wasn't it fixed in the old version?

Well, we do back port some fixes, and sometimes you just haven't gotten to it yet. But as part of the security pushes, we back port a lot of the things we find to prior legacy applications. So for example, when we issued Service Pack 1 for Windows XP, it included a lot of fixes that were a product of the security push.

As a novice, admittedly, looking at this, it seems insurmountable -- the number of vulnerabilities, the number of ways that hackers can use them, those that want to hurt systems badly, how they can manipulate them. Is it insurmountable?

It is not an insurmountable problem. You have to be very diligent, of course. We have to make the patching process easier, of course, and we're doing that. With our focus on security, we can get our arms around this problem.

You have to think about legacy systems, which we have to patch when problems are found. We have to make the patching process easier and more transparent to the user, so that the systems start to manage themselves more. Then in the future, not only do we have to do things like security pushes, but we have things like the next generation secure computing base, when we actually put security in the hardware, because you can only make software so secure. If you want more robust security, you need a hardware implementation as well. We're working on that too.

So when is that going to happen?

We're looking at two to three years out, because it requires a change in the hardware architecture. Microsoft is working with hardware vendors to make that happen.

Then every computer out there would have to have that in it -- which means basically the new systems would have it, but all the old systems will still be vulnerable?

That is correct. And it's important to understand that, even with those systems, you will still have a general purpose PC, and you can still do whatever you want with your computer. But you can also choose to only run applications that are signed by someone you trust. That eliminates a lot of hostile code.

But again, these all sound like developments [that will be] a long time before they're all in place. Does that leave critical infrastructures vulnerable?

Well, the key is to focus on the legacy systems today and patch them today. Do you have vulnerabilities today? Yes. Absolutely. Should we be patching them? Yes, absolutely. Is the risk down to zero? No, and it never will be either, because this is about risk management, not risk elimination.

Who's going to pay for all of this?

Ultimately, of course, Microsoft is investing a lot of its own money in making things more secure. There are other things: when you build the next generation secure computing base and it ends up in new hardware, customers will buy it. But what we've seen in other industries is that people are willing to pay for security features. People used to pay for anti-lock brakes before they were standard. Some people would pay extra for airbags. People are willing to pay for security, but they need to see value. As the threat has increased, the value of security becomes more self-evident.

Do we have to see a big disaster before everybody really takes these things seriously?

I would suggest that we've already seen some pretty big disasters. As far back as 1988, you had the Morris worm shut down the Internet in 24 hours. More recently, you had Slammer. I think people are aware of the problems today and are working to fix them. But they're complex problems, and they are going to take some time to get our hands around them in a comprehensive way.

How vulnerable are critical infrastructures?

Critical infrastructures have some vulnerability, but they're actually fairly resilient. When you've seen outages in critical infrastructures, either because of hurricanes or cyber events, they have generally been somewhat isolated and fairly resilient -- that is, they come back online fairly quickly.

Of course, you need to be very diligent about protecting these systems. But for the most part, they've proven to be resilient.

What about SCADA systems and their specific vulnerabilities? People tend to focus on SCADA.

Well, it also depends on where the market ultimately moves. A lot of SCADA systems today, if you look at, for example, how the phone system works, a lot of it is based on proprietary code written for a special purpose that is not generally available. Of course, as we move to commercial off-the-shelf products in some areas, that's one of the reasons why you need secure products.

Can a SCADA system be attacked? Yes, it can. Is it readily and easily attacked? The answer to that is no. There is security built into those systems, and key systems often have resiliency and redundancy.

Some of the people in that world are focused on how a lot of these systems are not that different from each other, and the trend is to become more and more of the same, so that they can all deal with each other -- systems that are used in Baghdad, Afghanistan, in Toledo and New York. Those people concerned about this call for diversity of operating systems. They think that's a necessity.

The issue of a monoculture versus a dual culture -- a homogeneous versus a heterogeneous environment -- is actually complicated, and I don't think it has had enough study to date.

One of the good things perhaps about a heterogeneous system is that if one part of the system is attacked, the other part of the system may be immune. But the flip side of that is that you have to manage the heterogeneous systems. That requires specialized personnel in both areas, and sometimes the connections between those heterogeneous systems are a point of failure.

With homogeneous systems, by contrast, if you're all running the same system, people worry that a single event may take out the whole system. On the other hand, a single patch may heal the whole system. So more research has to be done in this area.

Some people say the next generation of software is not less vulnerable, but in fact is more vulnerable -- the technical capabilities of software are moving away from secure concepts.

I just disagree. I mean, software that was built to be functional in the past was built to be functional without any security, just like the Internet. Now people are focusing on security. You can provide functionality with security. You have to keep the focus on the security. Companies are now doing that. I just disagree with the statement.

Post-9/11, how has the threat adjusted the view of your responsibility?

I think Microsoft has a huge responsibility to protect our critical infrastructures. We have a huge responsibility because of our market share, and the number of customers who are running our products. As long as we have a large market share, we will have that responsibility, and we have to own up to it and we have to design more secure software. That's what we're doing.

And your response to critics who say, "Ain't enough," that you guys need to be more responsible because you are the major players, and a whole lot more has to be done?

If they have heard about our trustworthy computing initiative, and they think there's more that we should be doing, I want them to tell me what that is, because we are absolutely 100 percent committed to doing this right. If there are things that we aren't doing that we should be doing, we want to be doing them. I will add that we do have groups, like I have a Chief Security Officers' Council -- 30 CSOs from major companies around the world, global companies, who provide guidance to me on how to move forward in the zero-to-five year timeframe. We are seeking that kind of input, and if people have that kind of input, I want to hear about it.

There are a lot of threats. Where does the threat of an Internet attack fit?

When I think about the various threats we face, I think that certain threats, like bioterrorism or nuclear weapons, are more severe and will do much more harm than a cyber attack. Having said that, we're capable of defending against multiple threats, and we have to take the cyber threat seriously, too.

Some people state, though, that the cyber threat is actually a weapon of mass destruction. We had people tell us -- scientists as well as hackers -- "You give me a couple of million dollars, half a dozen guys, some knowledge base, and I can take down your infrastructure. I can take down your electrical grid for six months." That's a weapon of mass destruction, since all infrastructures are tied into, let's say, the electrical grid.

I think that's a little bit of an overstatement. I don't think it's easy to take down the entire electrical grid with six people. If you did take down the electrical grid, there would be people working around the clock to patch the vulnerability and restore the grid. Additionally, if the power grid were to go out, even for some substantial length of time, that is not akin to the damage of, for example, a biological agent that could ravage the entire planet.

But the threat is coming from a group of people whose intent is to damage our society, and they may believe one of the best ways to damage our society -- maybe even better than killing 3,000 people on one day -- is to knock out the underpinnings of the economic system. Some people say there's no better way to do that than by using the Internet with cyber warfare tactics.

This is why we have to be vigilant. But I think they would be surprised at how resilient we are as a people.

But there are people out there who say the system is not resilient. The people might be resilient, but the psychological effect of turning off the electricity one day, taking out NASDAQ the next day, turning off the lights and 911 systems in the southwest quadrant the day after -- there are a lot of ways in the future that this could be used against us in a way that would shake us to our core.

In fact, that hasn't been done yet. It's not as easy to do as one suggests, and with the attention on security, over time it will become harder to do. So in terms of cataloguing threats -- it's not that that's not a serious problem. That's one of the reasons Microsoft is so devoted to building secure products. But when you put it next to nuclear weapons or biological agents or chemical agents, then we need to keep it in perspective. We need to work on all these fronts at once. But I think right now the greater concern would be one of these other threats.

 

 
