Cyber War!

vulnerabilities: software?

Some of the most recent worms that have affected computers worldwide took advantage of software vulnerabilities that were previously known to manufacturers. Although many companies maintain that they are doing their best to prevent and correct inadvertent vulnerabilities, critics say the manufacturers should be held more accountable for software security.

Here are excerpts from interviews with Amit Yoran of Symantec; John Hamre, former deputy secretary of defense; Richard Clarke, former White House adviser on cyberspace security; Joe Weiss, a security consultant for KEMA Consulting; O. Sami Saydjari of Cyber Defense Agency; Scott Charney, chief security strategist at Microsoft; and a hacker who spoke on condition of anonymity.


Amit Yoran
Vice President, Managed Security Services Operations, Symantec Corp.


Why are the [software] vulnerabilities 81 percent higher than the year before? What does that mean -- the fact that we found that many more vulnerabilities?

Well, we found a lot more vulnerabilities in software because software's increasingly complex. A lot of code is being developed that doesn't have a security assurance process as part of its development. So there's an increased focus on finding vulnerabilities by security companies, by clients, and by hackers. And there's an increased reporting of vulnerabilities once they're discovered.

Vulnerabilities are either flaws in computer software, like a bug, or flaws in the design. So the code was written properly, but it was designed poorly, so that someone can use the software or the computer in a way it was not intended to be used. They may be able to take control of the computer. They may be able to get access to resources they were not supposed to have access to. All sorts of unintended consequences can occur.
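
To make that distinction concrete, here is a minimal, hypothetical C sketch (not drawn from any real product; all names are made up) of the second kind of vulnerability: a routine where every line does exactly what it was written to do, yet the design is flawed because it trusts a caller-supplied file name.

```c
/* Hypothetical sketch of a design flaw, as opposed to a coding bug. */
#include <stdio.h>

/* Intended use: serve documents out of /srv/docs. */
void serve_file(const char *name) {
    char path[512];
    snprintf(path, sizeof path, "/srv/docs/%s", name);

    /* Design flaw: `name` is never validated, so a request for
     * "../../etc/passwd" walks out of the intended directory and
     * exposes a file the program was never meant to serve. */
    FILE *f = fopen(path, "r");
    if (!f) return;

    char buf[256];
    while (fgets(buf, sizeof buf, f))
        fputs(buf, stdout);
    fclose(f);
}

int main(void) {
    serve_file("readme.txt");        /* the intended use */
    serve_file("../../etc/passwd");  /* the unintended use the design permits */
    return 0;
}
```

A safer design would validate the name before using it, for example rejecting anything containing a path separator or "..". The point of the sketch is that no individual line is buggy; the flaw is in what the design permits.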

Why does this happen? There's such a thing, I guess, as code review.

The code review process and the entire software development process does not have an appropriate level of emphasis on security. The consumers and clients of most software companies are so demanding of new features and capabilities that those features take priority over better software development practices and techniques. So our demand for new features essentially fuels the fire of increased vulnerabilities in software.

How do you safeguard against malicious code being inserted into software?

There are code-tampering tools. There are code-protection tools. Our belief [at] Symantec is that you really have to have a layered approach. You can't rely on any one particular solution to protect you. It has to be a combination of network-based tools, specific computer system-based tools, and also application-level tools, which work hand-in-hand to provide a multi-layered approach to security.

It sounds like a very complicated way to protect a system. Why is it so complex?

Well, the defense of computer systems is complex, because we are constantly discovering new vulnerabilities in software that we thought was secure. So it's a constant game of cat and mouse, where hackers are finding new vulnerabilities and then security professionals are trying to make sure that either patches are available, or that the systems are protected using some other technique. That's why these layered types of approaches seem to offer enough protection so that if there is a flaw discovered, there's enough other layers to protect you.

Could the software developers do a better job?

Clearly, software developers need to place greater emphasis on the security of the code they develop. But the market has to force them in that direction, instead of constantly pulling them down the new-features road we as consumers have become accustomed to. We continue to demand new features and new capabilities, and by voting with our dollars for those, we don't emphasize security.

The last aspect of that: we've heard that a lot of open source applications were trojanized with back doors last year, for use in future attacks. Why did this happen?

There have been a number of instances where sites offering software for download were compromised. Hackers were able to remove the software the developers provided and replace it with trojanized software containing back doors and other sorts of malicious things, and users would download that instead of the actual, intended source code.
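
A standard defense against this kind of tampering, then and now, is for a project to publish a cryptographic digest of each release over a separate, trusted channel, so users can verify a download before installing it. Below is a minimal sketch using OpenSSL's EVP digest API (link with -lcrypto); the file name and the expected digest value are placeholders, not real release data.

```c
/* Sketch: verify a downloaded file against a published SHA-256 digest
 * before trusting it. Requires OpenSSL. */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* Compute the SHA-256 digest of a file into out[32]. */
int sha256_file(const char *path, unsigned char out[32]) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, out, &len);
    EVP_MD_CTX_free(ctx);
    fclose(f);
    return 0;
}

int main(void) {
    /* Digest the project published on a separate, trusted channel
     * (placeholder value). */
    const char *expected =
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
    unsigned char d[32];
    if (sha256_file("package.tar.gz", d) != 0) return 1;
    char hex[65];
    for (int i = 0; i < 32; i++)
        sprintf(hex + 2 * i, "%02x", d[i]);
    if (strcmp(hex, expected) == 0)
        puts("digest matches: file is what the project published");
    else
        puts("digest MISMATCH: do not install this download");
    return 0;
}
```

A matching digest only proves the download is the file the project published. If attackers who compromise the download site can also alter the published digest, stronger measures such as detached cryptographic signatures are needed.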


John Hamre
Deputy Secretary of Defense (1997-1999)


Now the question: Are these folks insinuating time bombs in software that they can then trigger at a later date? It certainly is a theoretical possibility. Again, part of the difficulty is that cyberspace changes so dynamically: people are taking down their old version of Microsoft Office and putting up a new version. So all of a sudden that's off; where it was loaded is gone. This sort of thing is happening all the time.

So you don't have a lot of confidence that you could plant surreptitious code designed as a back door or a trap door and come back to it three years later. You could come back two months later, maybe, but not two years later. So again, I think there tend to be some self-correcting qualities that are a byproduct of a very dynamic industry, a very dynamic media. ...


Richard Clarke
Presidential Adviser for Cyberspace Security (2001-2003)


There was a little hiccup in the White House telephone system, whatever it was, last Feb. 7. It turned out that ASN.1 [a data transmission standard] was the vulnerability. What did that show you?

What we discovered was that an academic team at the University of Oulu in Finland had realized that the basic encoding language that's used to turn ones and zeros into code, into programs, had a mathematical vulnerability in it, so that if you asked for certain things to be done that couldn't be done, the program could break. The result of that was that if you knew how to do this, you could send one malformed packet at a device like a router or a server and cause it to shut itself off.

That was a little chilling, that knowledge. So when we found out, we told the president, the vice president, and we worked very quickly, working with leading companies around the United States, leading IT companies, to encourage them to come up with ways of preventing that kind of attack. And they did. They did great work in a partnership with the government.

Then they went out and fixed all the software and all the critical computer systems around the country, all fairly quietly in a race against time, because if the knowledge of that vulnerability had gotten out, and any bad guy hacker had wanted to use it, they could have caused a lot of disruption.
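
The Oulu team's specific findings aren't reproduced here, but the general failure mode Clarke describes (a program that breaks when asked to do something that can't be done) is easy to sketch. The hypothetical C fragment below shows a decoder that trusts a sender-supplied length field, and a hardened version that rejects the malformed packet. It illustrates the class of flaw, not the actual ASN.1 vulnerability; all names are invented.

```c
/* Hypothetical sketch of the malformed-packet failure mode: a decoder
 * that trusts a length field supplied by the sender. */
#include <stdio.h>
#include <string.h>

/* Naive decoder: reads a 1-byte length, then copies that many bytes. */
void decode_naive(const unsigned char *pkt, size_t pkt_len) {
    (void)pkt_len;               /* deliberately ignored: that's the bug */
    unsigned char value[64];
    size_t claimed = pkt[0];     /* sender-controlled length */
    /* One malformed packet claiming more bytes than `value` holds (or
     * than were actually received) corrupts memory and can crash or
     * hijack the process. */
    memcpy(value, pkt + 1, claimed);
    printf("naively decoded %zu bytes\n", claimed);
}

/* Hardened decoder: validates the claimed length on both fronts. */
void decode_safe(const unsigned char *pkt, size_t pkt_len) {
    unsigned char value[64];
    if (pkt_len < 1) return;
    size_t claimed = pkt[0];
    if (claimed > pkt_len - 1 || claimed > sizeof value) {
        puts("rejected malformed packet");
        return;
    }
    memcpy(value, pkt + 1, claimed);
    printf("safely decoded %zu bytes\n", claimed);
}

int main(void) {
    unsigned char evil[2] = { 200, 0 };  /* claims 200 bytes, carries 1 */
    decode_safe(evil, sizeof evil);      /* prints "rejected malformed packet" */
    return 0;
}
```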


Joe Weiss
Security Expert, KEMA Consulting


How vulnerable are these systems, and the infrastructure in general, due to the software that they use?

The basic fact is, unless you were to do almost a line-by-line review and a comprehensive test from essentially a cyber security perspective, how would you know? One other concern we have is with the UCITA -- the Uniform Computer Information Transactions Act -- which says that a vendor can put in a trap door or a trojan and not have to tell you. Apparently that's law only in Maryland and Virginia. There is a tendency, if you get software from a vendor that you think is credible, to accept it at face value.

What are the effects on critical infrastructures, though, of this reality, that one never quite knows what's in one's software?

We do very comprehensive factory testing of our systems. But I'm also going to put in a caveat. The comprehensive factory testing we do is to assure ourselves that the system performs as it was designed to perform and meets the purchase specifications. If there is some form of a software trojan or something else that is not part of the design checkout, you wouldn't know. Now in nuclear [plants], you're supposed to do a line-by-line verification of all code. But if you look at fossil [fuel plants] or others, there isn't that requirement.

The question that is brought up often is: Shouldn't you be forced to protect your systems better, because this is a national security issue? And the easy thing for many people to say is, well, it's not us. It's the system we buy. It's the software that we buy; it's Microsoft, it's the SCADA system makers or whatever. They're the ones that should be regulated.

I don't think so, but I'll add one other thing. With control systems, vulnerabilities can occur beyond just the software. So even if you have that part taken care of, there are other vulnerable aspects. Obviously you've got Microsoft issues, but that's something different and beyond. Even if you had all of your software absolutely secured, there are still other things. Remote access is one of the biggest vulnerabilities we have, and that has nothing to do with software.


O. Sami Saydjari
President, Cyber Defense Agency


The fear is that somebody will get into the development process, whether it be offshore in India or here in the United States; it really doesn't matter which of the two. They'll get involved in the development process of our key systems: routers that we use to control the network, switches that we use to control the telephone networks, SCADA systems. And they will have pre-placed bugs, Trojan horses, and trap doors in these computers.

Sometimes people argue that, well, it would be very hard to come from the outside, penetrate these SCADA systems, and get control of these inside assets, because there are all these firewalls and protections. But the fact is that you can bypass all of that by getting into the development process.


Hacker
Information Warfare Expert


The problem is that Microsoft maintains that they can solve the problems internally, as opposed to, for example, the open source movement, where anyone, such as myself, can review the source code. If we find a problem, we can fix it and notify others of the problem, and in general you have a lot of people working to make sure that your software is secure.

It goes back to a principle, again, of there's no security in obscurity. If you try to hide your problems, they're not going to go away.

But Microsoft does code review all the time, right?

...The sophistication of an operating system, particularly one the size of Microsoft's, is that it's over a million lines of code. I don't know any organization that can keep track of a project that size looking for security problems. One of the most common security problems in the world is a buffer overflow. You would not believe how many people keep writing buffer overflow vulnerabilities into their systems simply because they do not follow good and accepted programming practice, let alone accepted security practice. Security is not a domain that is well understood by the guys who write products.
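
For readers unfamiliar with the term, here is the classic buffer overflow pattern in miniature, as an illustrative C sketch (not code from any product mentioned in the interview):

```c
/* Illustrative only: the classic buffer overflow pattern. */
#include <stdio.h>
#include <string.h>

void greet_unsafe(const char *name) {
    char buf[16];
    /* No length check: any input longer than 15 characters plus the
     * terminating NUL writes past `buf`, corrupting adjacent stack
     * memory such as the saved return address. */
    strcpy(buf, name);
    printf("hello, %s\n", buf);
}

void greet_safe(const char *name) {
    char buf[16];
    /* Bounded copy: snprintf truncates instead of overflowing. */
    snprintf(buf, sizeof buf, "%s", name);
    printf("hello, %s\n", buf);
}

int main(void) {
    /* Passing this string to greet_unsafe() would smash the stack;
     * the bounded version merely truncates it. */
    greet_safe("a deliberately over-long input string from an attacker");
    return 0;
}
```

With a carefully crafted over-long input, the overwritten stack memory can include the function's return address, redirecting execution into attacker-supplied code. That is why this single class of programming error accounts for so many remote compromises.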

But Microsoft says that security is top priority.

Security is not Microsoft's top priority. Security is a concern for Microsoft, because even the United States government was telling Microsoft that if it did not improve the security of its products, it faced a review of whether it would be allowed to sell products to the United States military. That is a very large market, and enough to get Microsoft's attention.

Once anyone can tinker with a Microsoft system at home and find its vulnerabilities in their free time, and 95 percent of the machines in the marketplace are running the same operating system with the same vulnerability, then you have what we refer to as a monoculture: everything is vulnerable.

But this is just the nature of the beast, isn't it? You can't create perfect software.

You can't create perfect software but you can create better software and software that works securely. The two are not incompatible.

But this is their business. They're in trouble if they start getting sued all the time.

No one is suing them. There's no product liability in security. If there were product liability for security flaws, as there is for any other product flaw, Microsoft wouldn't exist.


Scott Charney
Chief Security Strategist, Microsoft Corp.


...what I tell [people] to do is look at what Microsoft has done since announcing the Trustworthy Computing Initiative. What would you have us do as a company that we're not doing today? We're doing a security push on every product. We're building things that are secure by design, secure by default, and we're fixing patch management to keep you secure in deployment. ...

Critics [say] that a large percentage of Microsoft's code, for instance, is written offshore, and that that leads to problems. How is that viewed?

There is concern about what they call "offshore code," not just in Microsoft, but throughout the industry. The question is: Do you have quality assurance built into your process? And we do, so that code gets reviewed by people, it gets checked, it gets tested. That's really what you have to do.

Is all code that comes in reviewed?

All code gets reviewed, no matter where it comes from, even if it's developed right here in Redmond.

Is it your belief that the process that Microsoft uses to code review, to go through code once it's written, is pretty near perfect?

Yes, in terms of making sure that the code is what it's intended to be. Now, as you can see with vulnerabilities, sometimes you design code and you think you've done a great job, and it still turns out to have a vulnerability. In fact, some recent vulnerabilities were in Sendmail, a product that has been around for 15 years. And if you look at the vulnerability in the SNMP protocol, it was found by the university in Finland 20 years after the protocol was out there.

So when you do your quality assurance, what you're talking about -- somebody planting code that doesn't belong there -- is one kind of assurance that you have to do. The other, more complicated thing -- because programming is part art and part science -- is ensuring that there are no security vulnerabilities in the code, just due to programming error or the way the code interacts. Our goal is to dramatically reduce the number of vulnerabilities. But I don't think anyone at Microsoft would tell you that zero vulnerabilities is possible in the next product.

Why are those vulnerabilities such a problem today, and why is it so hard to find them?

It is hard to find vulnerabilities in code, in part because the systems we build are fairly complex, and they have a lot of very rich functionality. And there are people who will attack the code in ways that you didn't anticipate. One of the things that we're trying to do as part of the security push, particularly with threat modeling, is to anticipate how people would attack the code, in order to secure it. But historically, what we've seen is that you can try to build really, really secure code, and over time things will be found.

Some people say that one thing that's called for is background checks for code developers. Is that something that Microsoft does? Is it necessary?

Microsoft does do background checks on some employees. It's actually a very difficult issue, for a host of reasons. And there are some government and industry organizations, like the National Security Telecommunications Advisory Committee, that are addressing the issues of personnel security.

It has to be remembered that in some countries the kind of public information that one can get is very limited anyway. Globally, there's no set standard. And how much information you might actually get from a background check varies greatly from country to country.

Wouldn't that argue, again, for not having code written offshore, since it's a problem to figure out who the people writing the code are?

I don't think you can make the presumption that if you can't do a background check on someone, they're evil. As a practical matter, the real challenge, and the really necessary step, is to do good quality assurance.

 

 

published apr. 24, 2003