Cyber War!

interview: richard clarke

Clarke was White House cyber security adviser from October 2001 to March 2003, heading up the President's Critical Infrastructure Protection Board. In this interview, he discusses the many vulnerabilities of cyberspace, the evidence that points to Al Qaeda's cyber capabilities, his views on liability laws and federal regulation for cyber security, and why he believes the new U.S. National Strategy to Secure Cyberspace is "a great success story." This interview was conducted on March 18, 2003.

What is the significance of Khalid Sheikh Mohammed's training and background?

We're troubled by the fact that a number of people related to Al Qaeda -- including Khalid Sheikh Mohammed, who was recently arrested and was the organization's chief operating officer -- have technical backgrounds. Khalid Sheikh Mohammed studied engineering at [a] university [in] North Carolina. He was employed for a while at the water ministry in the nation of Qatar in the Persian Gulf. Recently, a student at the University of Idaho was arrested by the FBI for alleged terrorist connections, and he was studying in a Ph.D. program on cyber security.

So I think, similar to the fact that some of the Sept. 11 hijackers had flight training, some of the people that we're seeing now related to Al Qaeda had training in computer security.

What does this mean as far as Al Qaeda's interest in cyber war?

Well, the fact that these people are gathering skills in cyber war capability is very troubling, combined with the fact that we know that they're looking on the Web for hacking tools. We know that, because we've seized some of their computers. It suggests to me that Al Qaeda may be trying to grow an indigenous cyber warfare capability. I think it suggests that someday we may see Al Qaeda, if it's still alive and operating, use cyberspace as a vehicle for attacking infrastructure -- not with bombs, but with bytes.

[Does it make sense for an adversary to use cyber as his weapon of choice?]

For an organization like Al Qaeda that is looking to leverage its investment, to have the biggest possible damage for the least possible investment, cyberspace is a good bet, because it doesn't cost a lot of money to develop these skills. You could have an effect in a number of places simultaneously, without being in those locations, and you can achieve a certain degree of anonymity and a certain degree of invulnerability to arrest [or] apprehension.

How big a problem is it that software companies have not been motivated up to this point to make their product more secure?

Until very recently, and perhaps for some software companies still today, quality control of source code has been a low priority. Getting the product to market fast has been the highest priority. This has meant that there are some very sloppy mistakes, things that just don't have to happen. A buffer overflow is one phenomenon that good quality control and good practice would prevent from ever happening; yet buffer overflows are present in hundreds of software programs.
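
To make Clarke's point concrete, here is a minimal sketch (mine, not anything from the interview) of the kind of sloppy mistake he is describing, written in C: an unchecked copy into a fixed-size buffer, next to the bounds-checked version that basic quality control would require before a product ships. The function names and buffer size are invented for the example.

    /* Minimal illustration of a buffer overflow and its fix. */
    #include <stdio.h>
    #include <string.h>

    static void unsafe_copy(const char *input) {
        char buf[16];
        strcpy(buf, input);   /* no length check: input longer than 15 bytes
                                 plus the terminator overruns buf */
        printf("unsafe: %s\n", buf);
    }

    static void safe_copy(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);  /* copy at most 15 bytes */
        buf[sizeof(buf) - 1] = '\0';           /* always terminate the string */
        printf("safe:   %s\n", buf);
    }

    int main(void) {
        safe_copy("hello");
        unsafe_copy("hello");  /* only safe here because the input is short */
        return 0;
    }

The unsafe version compiles and runs cleanly on friendly input, which is exactly why this class of bug slips through when schedules, rather than code review, drive the release.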

So we really do need to insist that there be standards for quality assurance in the production of source code for software. Once it's out there, it's very, very costly and time consuming to change -- to "patch," as it's called -- a piece of software. As we've just discovered, [people] don't apply the patches, they don't apply the fixes. So there are hundreds of thousands of systems out there that are never fixed, even when the software vulnerability is identified.

Microsoft itself was hit, because it hadn't applied its own patches. What did that say?

I think the fact that Microsoft hadn't applied its own patch to its own systems says it's difficult and time consuming to apply these fixes. We're working on ways of making the patching of software easier. The real solution is to make the software right in the first place -- to quality control it against a set of accepted industry standards that can be applied by all software companies. Then all the software companies can say, "This software was developed and written with the Good Housekeeping Seal of Approval method of quality assurance." The software companies need to get together and do that, to come up with that set of standards, and stop selling sloppy software.

Joe Weiss, an expert in control systems and SCADA systems, said that most of the systems in the electric power industry work off of Windows 2000. The security guidance on how one fixes it is now about 50 pages long. His statement to me was, "Nobody does it. Everybody uses it out of the box." What's the problem, as defined by that?

All too often, what we find is that the procedures necessary to go back retroactively and fix a piece of software are too cumbersome. Moreover, if you were a systems administrator and you were told, "You have to apply this additional software to fill the hole, to patch the hole" in something that you're running, well, you may patch the hole by applying it. But the result may also be that other software that you're running then becomes incompatible. So you've created another problem for yourself.

So a lot of system administrators, rather than go through all of that and not know what's going on in their own system, just don't apply the patches, or they take months to apply the patches. What we saw with the Sapphire worm, with Nimda, and with Code Red was that the vulnerability had been identified and the patch had been issued. But people hadn't bothered to put it on, because it's just too cumbersome, too hard to do, and you don't know what effect it's going to have on other pieces of software.

I've been urging the software industry to get together and form a consortium, so that when company A issues a patch, company B, C, D, E and F will say, "This is how that patch works with my system," so that you, as a systems administrator, will know, "I can apply this and it won't break the other systems."

So why hasn't that happened?

The software companies are very competitive. They have not, to date, gotten together to create a Good Housekeeping Seal of Approval for quality assurance in drafting code in the first place. They have not gotten together to create this test bed of multiple software systems, so that we'll know, when we get a patch, whether or not it can be safely applied.

The government can't order them to do that. I have been hectoring them to do that now for over a year, and there's some interest. But no one, no software company has stepped forward to be the leader to say, "I will do it."

But we could order them to do that. We could put regulations in place.

Once you start down the path of regulating software, I think you run the risk of creating a lowest common denominator approach, a homogeneous approach, which actually lowers quality control. As we've recently seen with the FCC decisions about long distance and broadband, federal regulatory decisions become very politicized.

If you live in Washington, D.C., for the last several months there have been ads on every local channel on TV, full-page ads in the newspaper, hectoring about a change in the FCC rules. Millions of dollars are being spent by lobbying firms. I don't think we want to regulate software and end up in that same sort of situation.

What about liability laws?

It is both a problem and a blessing that there are no liability laws on software. It's a problem, because it means that there's been very little incentive for software companies to get it right. It's a blessing because, frankly, given the low quality of software these days, the software industry would end up looking like the asbestos industry. The asbestos industry has been sued out of existence in liability courts. Frankly, that would happen in software, if there were a legal basis for suing software companies.

Microsoft. How effective have they been in improving security?

In speeches around the country, I've been giving Microsoft credit for its halt-work order, where it stopped drafting code, and it's now pushing security-conscious procedures. And every time I give Microsoft credit in a speech, I hear laughter and cries of derision in the audience. My response has been, "Don't doubt Microsoft, don't laugh at Microsoft. Hold them to their promise. They say they're going to start making secure software; let's hold them to that promise. Let's demand to see that they're actually doing it." I think the good news is, so far, they appear to be doing it.

But what about everything else that's already out there? What do they do about that?

They're going back and they're looking for holes in software that they've previously sold. That's one of the reasons Microsoft has issued so many patches in the last year, because they are going back and finding things. They can only go back so far. Really what we want them to do is make sure that products issued in the future are flawless, from a security perspective. I think they're trying. But the proof will be in the pudding in the next several years.

Are they doing it now?

They appear to be doing it now, and they're saying that the next big issue of their operating system in one to two years will be really secure. I hope they're right. But I think the public needs to demand that they do finally issue a secure operating system. We also need to make that demand of other companies as well -- of Sun, of Oracle -- and we need to demand it of the open source operating systems like Linux.

The bottom line on what the Mountain View case showed --

I think the bottom line on the Mountain View case is the ease with which people can do virtual reconnaissance from overseas on our physical infrastructure and on our cyber infrastructure, and the difficulty that we have in knowing what is being done. We were lucky in the case of Mountain View, that there were good people watching. It's probably occurring in lots of other places around the country, and we don't have people who are catching it.

The idea being pushed now is having a single market design to nationalize the power grid. Does that worry you? Does that create more security problems, or less?

Right now, our electric power companies, both the generating companies and the distribution companies, have paid very little attention to security in cyberspace. It took them a long time to even admit that they were connected to the Internet. Now they know that they are. Now they also know that they're running control software, SCADA, that is available to our enemies, because it's software that's sold around the world. They are beginning to understand that they need to have security. And the Federal Energy Regulatory Commission is beginning to understand that it needs to regulate that, in order to create an even playing field.

In this one case, I think federal regulation makes sense, because without it, these electric power companies are not going to pay attention to security.

So what would you suggest?

I'd suggest the Federal Energy Regulatory Commission create an even standard for all power-generating companies and all power distribution companies, and a high standard that's achieved in several steps over the course of the next several years.

With what results?

I think SCADA systems need to be encrypted. People who have access to them need to authenticate themselves, so that we have a high level of authentication and no one can get in unless they're authorized, and a high level of encryption, so that if somebody does get in, they can't change the system.

But we also need to make sure that our control signals -- the signals that we send out over the electric power grid -- are not sent in the clear, they're not broadcast on radio, but they're on fiber optic cables that are not connected to the Internet, and the messages are encrypted.
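
As a rough illustration of the authentication half of that (a sketch of mine, not anything Clarke prescribes), a control message can carry a keyed message authentication code, so that a receiving station rejects any command that was not produced by a holder of the shared key. The sketch below uses OpenSSL's HMAC; the command string, key, and function names are invented for the example, and a real deployment would also encrypt the traffic and manage keys far more carefully.

    /* Sketch: HMAC-SHA256 authentication of a SCADA-style control message.
       Build with: cc demo.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/crypto.h>

    #define KEY     "example-shared-key-not-for-real-use"   /* hypothetical */
    #define KEY_LEN ((int)(sizeof(KEY) - 1))

    /* Sender side: compute a MAC over the command text. */
    static unsigned int sign(const char *msg, unsigned char *mac) {
        unsigned int mac_len = 0;
        HMAC(EVP_sha256(), KEY, KEY_LEN,
             (const unsigned char *)msg, strlen(msg), mac, &mac_len);
        return mac_len;
    }

    /* Receiver side: recompute the MAC and compare in constant time. */
    static int verify(const char *msg, const unsigned char *mac,
                      unsigned int mac_len) {
        unsigned char expected[EVP_MAX_MD_SIZE];
        unsigned int expected_len = sign(msg, expected);
        return expected_len == mac_len &&
               CRYPTO_memcmp(expected, mac, mac_len) == 0;
    }

    int main(void) {
        const char *command = "OPEN BREAKER 12";    /* hypothetical command */
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = sign(command, mac);

        printf("genuine command accepted: %s\n",
               verify(command, mac, mac_len) ? "yes" : "no");
        printf("forged command accepted:  %s\n",
               verify("CLOSE BREAKER 12", mac, mac_len) ? "yes" : "no");
        return 0;
    }

Without the shared key, an attacker who can reach the network can still read the traffic (hence Clarke's point about encryption as well), but cannot fabricate commands that verify.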

That sounds like a costly move to accomplish across the country. Is there any reason why power companies, which have been hit very hard by the economy, would do that on their own?

Unless power companies are required to do that by the federal government, they will never do it, because they're now in competition with each other. They're all willing to do it if they're all forced to do it. For once, we have the companies saying they want it to be regulated, so that they're all required to do it simultaneously. Then there's an even playing field, and no one is at a competitive disadvantage by improving security.

Will this administration go and do this?

The Federal Energy Regulatory Commission is independent of the administration. I think they are beginning to move in that direction. I think they do recognize the cost. If it's evened out in several steps over the course of the next several years, the cost [to] the ratepayers, the consumers, will be very small. But the benefit to the country, in terms of securing our electric power system, will be very great.

This is my guess, but I would assume the folks in the power industry would respond, "Hey, it's not our problem. It's the software. It's the fact that the software is vulnerable. Why don't you regulate the software company?"

It's not just a matter of the software. It's about access to their networks. Right now, it's possible to access electric power control networks, because they're broadcast in the clear over radio, or you could hack your way in from the Internet. So the companies need to spend a little bit of money on two kinds of software security -- encryption of their control messages, and authentication devices -- to make sure that only people who are supposed to be on the system are on the system.

There's a third thing they can do, which is to make sure that their message traffic--

There's a third thing they can do. They can move their messages to control the electric power system over fiber optic cables that are in no way connected to anything else. If there's any part of cyberspace which we should take out of the Internet, disconnect from the Internet, it's the control of the electric power system. Yes, you can do it with software, and yes, you can do it over fiber optics. But those fiber optics should not be connected to anything which, in turn, is connected to the Internet.

The significance of Slammer?

I think the so-called Slammer Sapphire worm must have been a really big wake-up call for a number of people, because at one level it seemed to be just one company's software for one kind of product -- Microsoft software for servers. Yet many, many things other than servers were attacked. In 15 minutes, 300,000 servers were taken out. Previously, other attacks had taken 24 hours to achieve that. In 15 minutes, before anybody could even be notified that the attack was going on, 300,000 servers were taken out. And it wasn't just servers that were affected; 911 systems were affected; ATM machines were affected; reservation systems for a major airline were affected.

Canada was trying to hold its own online referendum election. They had to cancel it. There were effects on routers, there were effects on databases. So it proved the interconnectivity of all of these different pieces of cyberspace. It proved how one hole in one piece of software for one kind of device can quickly spread throughout cyberspace and be extremely disruptive. Companies that had done a really good job of protecting themselves were nonetheless devastated, because they had forgotten one little hole that was one chink in their armor, one connection that they didn't know about. I think the effect of that ... should have been a wake-up call.

But the worm could have been much more damaging than it was. It could have been attached to a very destructive payload. The fact that it wasn't leads me to think that it may have been a test to see what damage could have been done. The next time it might have a very destructive payload.

But that problem has been fixed, right?

That problem has been fixed. But similar problems are still out there, and every month we discover similar problems. At any given time, someone can discover one of the new vulnerabilities, take a highly destructive payload, attach it to that new vulnerability, send it out into cyberspace with a worm and have a very devastating effect -- much worse than the Slammer Sapphire worm.

Other events that have taken place recently that you are focused on as being significant?

Within the last quarter of the year, there have been three events that really are disturbing. The first was a denial of service attack, not on a company, but on the domain name system, which is sort of the traffic control system of the Internet. For the first time, somebody was attacking the mechanisms of the Internet, and they did a very good job. The only reason why the attack stopped was because the attackers stopped it, not because the system was able to deal with it.

So that's going after the Internet itself. The Sapphire worm was devastating and proved the interconnectivity of systems.

The third thing that's happened this quarter is that we've discovered a major vulnerability in a program called Sendmail, which is widely distributed. Probably half the people in the United States who have e-mail are using Sendmail. There was a major hole in it that would have allowed anybody to hack their way into the computer using the Sendmail system.

Now, this one is a success story, because the private sector and the government worked closely together, found what the problem was, and distributed the information quietly and secretly to fix that vulnerability before a hacker could use it to attack us. But in one quarter, [there were] three things, each of which could have been devastating for cyberspace. That's going to happen every quarter. It's going to happen every year until we get it right.

Were you able to accomplish all that you had wanted to?

I think the National Strategy to Secure Cyberspace is a great success story, for two reasons. One, the product itself is an agenda for the government and the private sector -- five priorities, very specific things, very technical things, a challenging, tough agenda. It's also a success story because of the way it was developed -- by having 10 town meetings around the country before we developed it, and by having thousands of groups, companies, private sector organizations give us advice and participate in the drafting.

We had an open transparent participatory process -- very unusual for the federal government when it's developing a security strategy. The very way in which we developed the strategy raised awareness of the issue. So I'm very proud of the product. The challenge now is whether Tom Ridge and the Department of Homeland Security, who get most of the job of having to implement it, will do so.

The critics of the report come out and say it has no teeth.

People who say the strategy doesn't have teeth are really saying that it doesn't require federal regulation. I have never been in favor of federal regulation for cyber security. I think regulation creates a lowest common denominator approach, creates a homogeneous security environment, which is easy to attack, and politicizes the issue.

What we really want is not the government dictating to the private sector, as though the government knew better, because it doesn't. What we really want is cooperation. That's, I think, something we can achieve. In the end, if we don't achieve it at an adequate level, we may be forced to go to some regulation. But that would be an admission of failure -- that we had failed to get the cooperation of the private sector.

If we don't get it, will you be one of the first people out there saying, "We've got to rethink this. We've got to go out and order it"?

If there's a major devastating cyberspace security attack, the Congress will slam regulation on the industry faster than anything you can imagine. So it's in the industry's best interest to get the job done right before something happens, because after something happens and our economy has been really badly hurt, there will be regulation.

Critics also say that there's nothing about money to fund the changes.

I think the Bush administration actually should get some credit on the money side. When he came into office, the federal government was spending $2.7 billion a year to defend its own federal records. Then in the FY '04 budget the president has just sent to the Hill, he's asking for $4.9 billion. That's something like a 70 percent increase in funding. So when the critics say we're not spending enough money, I say, "Take a look at the books."

But what about the complaint that there's not enough money spent on initiatives to help infrastructure make the changes?

People have to understand what a strategy document is, and what a budget document is. The strategy document is not a detailed budget document. A strategy document outlines the policy, outlines the broad programs, and then you develop budgets around that.

The real test of the administration will be whether subsequent budget requests to the Congress ask for enough money to implement the strategy. I know people in the Senate and House will be looking for that. I'll be looking for that. If the administration doesn't ask for the money it needs to implement the strategy, I'll be one of a lot of people making noise about that, and trying to get the money anyway.

Lastly, on the report, some people complain that there's nothing about the use of sophisticated technology fixes.

We tried in the national strategy not to recommend specific detailed technological fixes because technology changes so rapidly. A strategy should last for a couple of years. The National Strategy to Secure Cyberspace was drafted in a way that it should have a lifetime of at least three to five years. Therefore, it did not go into specific technological fixes.

We interviewed a master hacker, who says that one of the weakest links in the infrastructure is U.S. communications. He says in the interview there are "three main nodes" he could easily take down. What are your thoughts on this?

I think U.S. communications companies vary widely in their physical security and their cyber security. Companies need to ask some really tough questions about their Internet service provider, and about their telephone company, because there is a wide difference in the security quality available. When you're buying service from an ISP or a telephone company, you are never told what the security is behind the company. You tend to look purely at what the rates are, and go with the cheapest guy.

Well, when you go with the cheapest guy, you frequently end up with the least secure service. Yes, the U.S. communication system, like all of our infrastructure, is vulnerable. Some companies are more vulnerable than others, and it pays to find out.

You talked about SCADA systems. The master hacker we interviewed stated that the reason they were vulnerable was the Microsoft systems -- Windows 2000, NT. They all tend to use the same software.

There are four or five companies, two or three of them European, that make the SCADA software that's widely used in the electric power industry and manufacturing. They all have security vulnerabilities. But the biggest vulnerability in the SCADA systems is that they're not encrypted, and the users don't have to really authenticate themselves. If we can get those fixes made, then we'll greatly improve the security of those systems.

Cyber security for the country is now being more or less taken over by a much larger organization -- the Department of Homeland Security -- which has a lot of other concerns. Are they up to the job to deal with the problems at hand?

The National Strategy gives the new Department of Homeland Security the lead in implementing most of the programs required. It's a big challenge. They're merging 22 organizations, five of them having something to do with cyber security. In any merger, things get lost. I think we all need to try to help the department, but I think we all need also to be critics of the department.

If, for any reason, the department drops the ball on cyber security while it's worrying about aviation security or port security, we need to raise a flag, and we need to raise a ruckus. We have asked that department to carry a huge burden on securing cyberspace. If it doesn't look like it's doing a good job, we need to blow the whistle. It's too early to tell right now whether they'll be able to do it or not.

In a world where there are enormous threats -- North Korea, dirty bombs, biological weapons -- why should anybody be that concerned over the question of cyber war?

We, as a country, have put all of our eggs in one basket. The reason that we're successfully dominating the world economically and militarily is because of systems that we have designed, and rely upon, which are cyber-based. It's our Achilles heel. It's an overused phrase, but it's absolutely true.

It could be that, in the future, people will look back on the American empire, the economic empire and the military empire, and say, "They didn't realize that they were building their whole empire on a fragile base. They had changed that base from brick and mortar to bits and bytes, and they never fortified it. Therefore, some enemy some day was able to come around and knock the whole empire over." That's the fear.

 

 


published apr. 24, 2003
