
are software companies irresponsible?

Reid & Count Zero
Reid and Count Zero are members of the Cult of the Dead Cow (CDC), a hacker organization that developed "Back Orifice," a computer program that allows the user to remotely view and control any computer running Windows 95 or later.

Count: . . . People are saying, "Oh, there are going to be a lot of people who are just. . . really mad at CDC for [releasing Back Orifice]," because their computers could potentially be abused because of these vulnerabilities.

Our take on this was, "Well, they should be really mad at companies like Microsoft, who create these environments that are just so unstable." We take it for granted now that computers will crash several times a day. We take it for granted that you have to be afraid when you get an email attachment; you have to figure out where it came from. "Is it worth it to open this spreadsheet where I might blow up my computer?" We've developed a kind of culture of a passive, beat-down fear. . . . If you got in your automobile and every day it would stall several times, and every once in a while it would just sort of randomly explode into flames and destroy all of your personal belongings, like when your computer crashes and you lose your files, you would be really mad, and furious at the car manufacturer. . .

I think it's a real travesty that we see . . . these insecure environments as the way it has to be, because, "Heck, it's always been that way." The people who are calling the shots in terms of building these systems are just building them their way, and they don't care. . . .

Reid: It's more than just Microsoft producing what amounts to almost a negligent security model in their operating system. It's also the fact that they're marketing it specifically to end users who want to go on the internet, people who may have bought their first computer ever. Those people are not computer security experts. They don't know what's out there.

So it's like building a really cheap car and saying, "Now, drive this on these really rocky roads," deliberately putting them in an environment where you know that what they have designed is so inadequate for that environment, and marketing it to student drivers. . . .

It seems patently obvious to the layman that if you point out this fundamental flaw, it will be fixed. Why isn't it fixed? Why don't they fix it? . . .

Count: They won't change something unless the people demand it. That's the trick. And people are not demanding the security. . . .

Reid: Although, in all fairness, we should point out that the beast on Microsoft's back here is the fact that they need to be backwards-compatible with previous versions of the Windows operating system, which themselves were insecure. So there may be legitimate technical hurdles for them to overcome in order for a new version of Windows to have, in our eyes, nice security. But then again, what kind of software company do you think could take on a challenge like that, if not Microsoft? Do you think anyone other than the world's largest software company could pull that off? And if they can't, then we're all in trouble.

James Christy
Computer crime investigator for the US Department of Defense.

The hacker's view of the world is that a lot of these problems come down to this comparison: we're driving cars on the internet, and the wheels are falling off. . . . The cars aren't designed properly, and the manufacturers aren't liable for the flaws when the wheels do fall off.

. . . On one side, yes, there's a problem with how software manufacturers are manufacturing their software. . . . If the software manufacturer didn't know about the problems, that lets me know that they didn't do adequate testing. You're supposed to test for out-of-boundary conditions and things like that, especially when there are known holes, or when the product is notorious for having holes in it. The other side is when they do know and they don't let anybody else know. That would be the equivalent of the struggle that Firestone is going through over the tires. What did you know, and when did you know it?

. . . Corporate culpability is what really troubles me, because you're right. They aren't brought to any court of law and told, "Well, your software is insecure, and you have to make restitution for the money I lost." The other side is that it is insecure, and we are working with an information superhighway full of potholes and bad bridges or however you want to describe it. . . .

Are we in a fairly unique situation? A situation where you can buy software, even some hardware, that basically doesn't work very well, which can cause the user damages, but the user has no restitution at the moment?

Let me give you an example: a firewall. The firewall comes to you in one of two configurations. Either everything is turned on, and you allow all the services, which means it's a firehose; or everything is turned off, meaning you can't send anything through it, which means it's a firebrick.

Well, it's not a firewall. You construct the firewall. It's given to you and you configure it. And the manufacturers are always able to fall back on the argument of saying, "Well, you configured the system." We're all put in the position, with our computers, of being a shade-tree mechanic, where we're reconfiguring our cards and changing the colors and setting all these things. . . . We're going to the software manufacturer and getting one thing from this one, one thing from that one, some hardware here and some software there. And maybe you get it bundled, but you're going to make modifications to the system, and you're going to upgrade it, because the moment you get it home, it's out of date.
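To make the "firehose versus firebrick" contrast concrete, here is a minimal sketch in Python of a packet filter's default policy. The rule format, port numbers, and function names are invented for this illustration; they are not any vendor's actual configuration.

    # Toy packet filter illustrating default policies. The rule format and
    # port numbers are invented for this sketch, not any vendor's product.
    ALLOW, DENY = "allow", "deny"

    def filter_packet(port, rules, default_policy):
        """Return the action for a packet arriving on `port`.
        Explicit rules are checked first; otherwise the default policy decides."""
        for rule_port, action in rules:
            if rule_port == port:
                return action
        return default_policy

    # "Firehose": ships wide open -- any unlisted service gets through.
    print(filter_packet(23, [], ALLOW))                 # allow (telnet slips in)

    # "Firebrick": ships fully closed -- nothing gets through until configured.
    print(filter_packet(80, [], DENY))                  # deny (even web traffic)

    # The buyer constructs the real firewall: default-deny, plus explicit
    # openings for the services actually needed.
    rules = [(80, ALLOW), (443, ALLOW)]
    print(filter_packet(443, rules, DENY))              # allow
    print(filter_packet(23, rules, DENY))               # deny

Either default posture leaves the real work, deciding which services to open, to the buyer, which is exactly the fallback argument the manufacturers rely on.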

I understand the struggle of the manufacturer, but I don't understand what seems to be sort of a universal ignoring--not ignorance, but an ignoring--of basic security. That lets me know, as a security professional, that the consumer isn't standing up and pounding the shoe on the table and saying, "Enough is enough." People just keep buying it, knowing it's going to be broken, knowing that they're not a consumer; they're a guinea pig. And they still go out and buy it.

Can that be changed? If the manufacturers had liability, would we see greater security?

Yes, as long as there was that liability. You can't get a corporation to do anything until they're liable for it. I'll give you an example from Year 2000. Year 2000 became important to corporations. They spent a lot of money to fix a very real problem for them. Corporate boardrooms understood that, if they were proven negligent or hadn't exercised "due diligence," it wasn't only the corporation that stood to lose; their individual personal fortunes were on the line along with the corporation's.

Well, that doesn't exist in a security realm. There's not that sense of urgency or that sense of corporate liability and boardroom personal liability. Now, that can change, but it requires the consumers to talk to their . . . elected officials. . . . You've got to make these clowns responsible for this stuff. I want software that works as well as my toaster--I set the dial, I push the button and it makes toast.

We're a ways away from that.

Yes, but it's not impossible. It has been done. There are people out there. . . . People at SRI, at Mitre and at Flight Recorder . . . have all built software that is robust. These people have built systems that actually work. Do they have problems? Yes, but they have a small set of problems, as opposed to this ubiquitous set of problems in their software . . .

Bruce Schneier
Author of Applied Cryptography and Secrets and Lies: Digital Security in a Networked World.

Do you share the view of many people that . . . the people who build software just haven't taken security issues seriously?

I think it's more fundamental than the people who are building the software not taking security seriously. I think that software, the internet, has gotten more and more complex over the years. And complexity is anathema to security. There are a whole lot of reasons that complex software and complex systems are harder to secure. And even if you took security seriously, you couldn't do it. It would take too long, it would cost too much money, and it wouldn't be cost-effective. You couldn't produce a good product. We love complexity on the internet. We can play games, we can do cool things, we can have rich content, we can get audio, video, we can get instant chat. All of these things that make the internet exciting also make it insecure, and that's not going away. So it's more fundamental than not taking security seriously, because there's too much other stuff going on. . . .

Is it true that the Microsoft product in particular has been vulnerable to serious security risks?

Microsoft tends not to pay attention to real security. They pay lip service to it. But they're being smart. They know that security doesn't matter in the marketplace. They could take an extra two years and an extra whatever million dollars, and make sure Windows is secure, but they'll be two years late. They're much better off as a company putting it out early and grabbing market share. They know that. They're responding to the marketplace. If automobile manufacturers could do that, they would, too. If drug companies could do that, they would, too. A drug company knows it can't just put a product out there. There are liabilities, there are laws, there are regulations. There aren't any such regulations in the software industry. So it's much smarter to be insecure and fast, than be secure and slow.

Howard Schmidt
Chief of Information Security, Microsoft Corporation

What do you see as the role of private sector companies like Microsoft in [improving the security of computer systems]? What sort of responsibility do they have in terms of corrections?

. . . The owners and operators of the critical infrastructure are the private sector now. Consequently, we do have this added challenge of ensuring that the products we put out are more secure.

Generation after generation, we see that, not only with Microsoft but with other vendors as well, there's a greater sensitivity to the effect that something one person does has on the people downstream. Consequently, the communication, the sharing of information, the sharing of vulnerability information, and the reaction and response to an identified problem have all increased significantly over the past few years.

If I buy a cigarette lighter, it'll have a little stamp on the bottom, showing that it was approved by some regulatory agency that sets standards. Yet I can buy software that will control my life, and it doesn't have to have something like that.

Yes, that's correct. . . . If I'm sitting at home with my son and have just installed software to play some games, the level of security built for that would be far different from what we need to run an enterprise or a business. Those are the standards we're looking at now, trying to identify what those security standards should be.

The thing that really plays into this is not even so much the hardware or the software involved. It's the configuration and the day-to-day maintenance of these things. Often . . . we see that the systems being exploited are the systems that have problems. Oftentimes, it's not that someone is exploiting something new. It's an old vulnerability that's been discovered, which someone hasn't applied the patch to. . . .

The critics will say that this stuff is so new and so complicated that your average user doesn't know about the bug, and doesn't know about the solution for the bug.

And they're right, in that respect. This has evolved over time. Many of these systems were designed and operated in an environment where there weren't threats of viruses and Trojans and hackers and crackers and things of this nature. So it has been an evolutionary process--not only finding these things, but also fixing them. And you're correct that the normal day-to-day user doesn't know about this. That's why many of the manufacturers now are coming up with automatic live updates, where every time you log in, it'll notify you that there's a security patch. All you have to do is click somewhere, and it'll go install it for you, and it doesn't require any great technical knowledge to fix it.

How proactive should a company like yours be in actually finding these bugs, and actually screaming off the rooftops to the people who are using your product, "You must fix this, or the following consequences might ensue"?

Very proactive. I think we've taken a really dramatic turn in the past year and a half to two years in that regard. As soon as we find out there's a fix available, it's widely publicized, and there's screaming from the rooftops. . . . We also make it possible for people to sign up online for security alerts, so if something does come about, they can be alerted to it automatically. We also notify the media of certain things that would be critical to everybody's use.

In the last few months, the critics are still saying that the big companies are being driven more by their marketing departments and sales imperatives than they are by security interests and people like you. What's running the business?

I totally disagree with that. Once again, in the past year and a half to two years, we've seen a dramatic shift, to where products will not be shipped with known security problems, or without enhanced security. We've come full circle now. There used to be a time when the development process had very little to do with the security professionals. Now, not only do we have direct input into the products across the board, but the product teams are also coming to us proactively. They're asking the security professionals to sit in on development committees, to submit design change requests, to say what additional security features we need, and to find a way to resolve bugs in the future. The . . . state we're looking to reach is a state of self-healing, where if a vulnerability or a bug is found, it's automatically . . . fixed for you the next time you log on.

. . . The onus is still on the person who buys it to close what he doesn't need, and therefore block the burglars. Is that going to change, or do you see that as a major weakness?

. . . That's something that people are looking at on a regular basis--how can we constantly continue to tighten [security] out of the box, while still allowing the functionality and the versatility that people want? It's a real challenge to try to balance the two. But it's not being driven by the marketing folks. It's actually driven by what people say they want as features in a particular product. . . .

How big a problem is this tension between convenience and security?

It's a big problem. What happens in most cases is that people want the ease of use and the convenience, without having to go through the extra layers of either adding additional passwords or doing something extra to get what they want. They want to be able to do it anywhere, any time, on any device, and that's always a challenge. Some people will look to circumvent that, because they find it too much of a problem to take the extra 15 or 30 seconds to type in a password. So it's a real balance, and it's a real challenge. Part of it we can deal with by having good policies that we enforce. Lately, some of the new operating systems have electronic policies that require people to have strong passwords or they don't get to log in.
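As a purely illustrative sketch of what such an electronic policy might check, the short Python fragment below enforces a hypothetical strong-password rule. The specific requirements here (minimum length, mixed case, a digit) are assumptions for the example, not any particular operating system's actual policy.

    import re

    # Hypothetical strong-password policy of the kind an operating system
    # might enforce at login or password-change time. The specific rules
    # (length, character classes) are assumptions for this sketch.
    def meets_policy(password: str) -> bool:
        return (
            len(password) >= 8
            and re.search(r"[a-z]", password) is not None   # a lowercase letter
            and re.search(r"[A-Z]", password) is not None   # an uppercase letter
            and re.search(r"\d", password) is not None      # a digit
        )

    # Users whose passwords fail the policy simply don't get them accepted.
    for candidate in ["toast", "password1", "Toaster42"]:
        print(candidate, "accepted" if meets_policy(candidate) else "rejected")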

Richard Power
Editorial Director of the Computer Security Institute (CSI), San Francisco, CA, and author of Tangled Web: Tales of Digital Crime from the Shadows of Cyberspace (Que, 2000).

Not so long ago, when you wanted to talk about security of corporations, the security of software, people like Microsoft would say, "We're not talking." Now, not only are they talking, but they're telling us that they're really doing something about it. How comforted can we be by the reassurances that we're getting from them now?

Well, that's a loaded question. Windows NT came out a few years ago. It was heralded as the secure operating system. And the hackers had a few good whacks at that tree, and fruit started falling off it right away. And now there are hundreds of vulnerabilities for NT. In fact, the hackers joke among themselves that "NT" stands for "Nice Try." So it's not that simple to slap some marketing hype on an operating system and say, "This is a secure operating system." It takes a lot more than that, and they haven't advanced internet security with their product.

But Microsoft is telling us that now they're taking it a lot more seriously, that with Windows 2000, security is a deal-breaker. Their security people say, "If we don't like the security components of Windows 2000, it ain't going out." Is it secure?

Well, ask that question six months from now, or a year from now. The tree will be given a few good shakes, and some fruit will fall off it. There'll be vulnerabilities. There'll be exploits. How those vulnerabilities and exploits are dealt with is another question.

There's a debate in the security community about what kind of operating system we should have. NT/Windows 2000 is a closed system. You can't look at the source code. That means only Microsoft and whatever hackers have succeeded in stealing it know how good it is. The good guys don't know how good the code is. The good guys can't look at the code and fix it, and adjust it to their own needs. With UNIX, for instance, the other major operating system, you can look at the code, and you can see what it looks like. You can see where the vulnerabilities are, and you can have your own smart people address them. So there are fundamentally different approaches there. Most internet security experts believe you should have an open system, so that everybody can see it, and everybody is on the same playing field.


