As Chief Technologist for the General Accounting Office, Rhodes tests the
security of the government's computer systems.
You test the government's computers. How are they doing?
Well, they're not doing so well. If you've seen the reports that came
out from Congress, there was a grading system. The grading system went from A
to F, and the average grade for the government was a D, so that's poor
performance. . . . The message is that the government computer systems are
not as secure as they should be, and certainly not as secure as they could
be.
. . . How do you test these government computers? How do you know they're
so insecure?
. . . We'll do two sides--first we'll act as an external attacker with no
inside knowledge. We'll try and come in over the internet, or we'll try and
come in over dial-up lines or something like that. We go down the list of
phone numbers and see if we can get a modem or something, see if we can break
in that way. The other half of the test is that we try to be an insider. We
try to be a disgruntled mid-level employee, and see what we can get. . . .
We're using standard attacks; we never use anything extraordinary. We try to
break into the computers through known flaws and vulnerabilities. We always
have the most success by being the insider. . . . The disgruntled employee is
still the biggest threat.
And when you try and break in [through] the internet, how successful are
you?
We're always successful, at least to some degree. Either we're successful
coming in over the internet, or we're successful dialing in. That's another
part that people tend to overlook. There are modems and regular dial-up lines
somewhere on the network, on lots of the computers in a company or a firm or a
government office or whatever. So we'll get in that way, even if we can't get
through the firewall, because it's very hard to keep track of modems. If you
can exploit normal human tendencies, normal human motivations, that's also a
great way to get into an organization.
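To make the dial-up sweep concrete: a minimal war-dialing sketch in Python,
under stated assumptions--a Hayes-compatible modem on a serial port, the
third-party pyserial package, and placeholder port name and phone numbers.
This illustrates the general technique Rhodes describes; it is not his team's
actual tooling.

```python
# Hypothetical war-dialing sketch: sweep a list of phone numbers and
# record which ones answer with a modem carrier. Assumes a Hayes-compatible
# modem on a serial port and the third-party "pyserial" package.
import time
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"  # assumed serial port for the modem
NUMBERS = ["555-0100", "555-0101", "555-0102"]  # placeholder numbers

def dial(ser: serial.Serial, number: str) -> str:
    ser.reset_input_buffer()
    ser.write(f"ATDT{number}\r".encode("ascii"))  # Hayes "dial tone" command
    time.sleep(30)                                # wait for the call to settle
    reply = ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")
    ser.write(b"ATH\r")                           # hang up before the next call
    return reply

with serial.Serial(PORT, 9600, timeout=5) as ser:
    for number in NUMBERS:
        # "CONNECT" in the modem's response means something answered with a
        # carrier--a candidate way into the network behind the firewall.
        status = "modem found" if "CONNECT" in dial(ser, number) else "no carrier"
        print(f"{number}: {status}")
```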
What is a realistic scenario illustrating the dangers of this
situation?
From the government standpoint, the biggest peril that I see from the work that
we've done would be denial of service. . . . A lot of the people I deal with
and work with at different agencies check the voice mailbox on their telephone
once in the morning when they come in, and once in the evening when they go
home, because they're not there during the day. So during the day they're
getting their email some other way--through their personal digital assistant,
or their cellular phone, or their internet-enabled phone or whatever. So when
that denial of service hits, they can't communicate with anybody, and that's
really how these organizations work.
And presumably, in the government, it would also be possible to change
information.
Yes, absolutely. One of the criteria that we use when we go in is the ability
to alter, delete, and destroy information. You've signed an official document.
You're an official in an agency. I go inside, take the electronic version of
it, modify it, and put it back in the exact same place where it was in the
system before. Nobody knows it. And now they're operating as though that was
the official memorandum that came out, when it wasn't.
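The swap Rhodes describes goes unnoticed because nothing verifies that the
stored document still matches what was signed. A minimal sketch of that
missing control, using Python's standard hashlib; the file name is
hypothetical:

```python
# Minimal integrity-check sketch using only the standard library:
# record a SHA-256 digest of the official document at signing time,
# then verify the stored copy against it later. If an intruder swaps
# in a modified version "in the exact same place," the digest changes.
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

official = Path("memorandum.pdf")  # hypothetical document
recorded = digest(official)        # store this where the intruder can't reach

# ... later, before acting on the document ...
if digest(official) != recorded:
    print("WARNING: document no longer matches the signed original")
else:
    print("document verified")
```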
The hacker's view of the world is that a lot of these problems come down to
an analogy: we're driving cars on the internet, and the wheels are falling
off. . . . The car isn't designed properly, and the manufacturers aren't
liable for the flaws when the wheels do fall off.
. . . On one side, yes, there's a problem with how software manufacturers are
manufacturing their software. . . . If the software manufacturer didn't know
about the problems, that lets me know that they didn't do adequate testing.
You're supposed to test for out-of-boundary conditions and things like that,
especially when there are known holes, or this is a product that's notorious
for having holes in it. The other side is when they do know and they don't
let anybody else know. That would be the equivalent of the struggle that
Firestone is going through over the tires. When did you know? What did you
know, and when did you know it?
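"Out-of-boundary" testing means deliberately feeding a routine inputs at and
just past its stated limits. A small illustrative sketch in Python--the
parse_record routine and its 64-byte limit are made up for the example:

```python
# Boundary-condition testing sketch. parse_record() is a hypothetical
# routine that accepts a field of at most 64 bytes; the point is the
# shape of the test, which probes exactly at and just past the limit.
MAX_FIELD = 64

def parse_record(field: bytes) -> bytes:
    if len(field) > MAX_FIELD:
        raise ValueError("field too long")   # reject instead of overflowing
    return field.ljust(MAX_FIELD, b"\x00")   # pad into fixed-size storage

def test_boundaries():
    parse_record(b"")                         # empty input
    parse_record(b"A" * MAX_FIELD)            # exactly at the limit
    try:
        parse_record(b"A" * (MAX_FIELD + 1))  # one past the limit
    except ValueError:
        pass                                  # correct: over-long input rejected
    else:
        raise AssertionError("oversized field was accepted")

test_boundaries()
print("boundary tests passed")
```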
. . . Corporate culpability is what really troubles me, because you're right.
They aren't brought to any court of law and told, "Well, your software is
insecure, and you have to make restitution for the money I lost." The other
side is that it is insecure, and we are working with an information
superhighway full of potholes and bad bridges or however you want to describe
it. . . .
Are we in a fairly unusual situation, where you can buy software and even
some hardware that basically doesn't work very well, which can cause the user
damages, but the user has no restitution?
Let me give you an example from firewalls. A firewall comes to you in one of
two configurations. Either everything is turned on and you allow all the
services, which means it's a firehose; or everything is turned off, meaning
you can't send anything through it, which means it's a firebrick.
Either way, it's not a firewall yet. You construct the firewall: it's given to
you, and you configure it. And the manufacturers are always able to fall back
on the argument of saying, "Well, you configured the system." We're all put in
the position of being shade-tree mechanics with our computers, where we're
reconfiguring our cards, changing the colors, setting all these things. . . .
We're going to the software manufacturers and we're getting one thing from
this one, one thing from that one, some hardware here and some software there.
And maybe you get it bundled, but you're going to make modifications to the
system, and you're going to upgrade it, because the moment you get it home,
it's out of date.
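The firehose/firebrick distinction comes down to the filter's default policy:
allow everything unless a rule forbids it, or deny everything unless a rule
permits it. A toy packet-filter sketch in Python--the ports and rules are
illustrative, not a real configuration:

```python
# Toy packet filter illustrating the "firehose vs. firebrick" defaults.
# Shipping with DEFAULT_ALLOW = True lets every service through (firehose);
# DEFAULT_ALLOW = False blocks everything until rules are added (firebrick).
DEFAULT_ALLOW = False             # the "firebrick" starting point

ALLOW_RULES = {
    ("tcp", 443),                 # explicitly permit HTTPS
    ("tcp", 25),                  # explicitly permit inbound mail
}

def permit(protocol: str, port: int) -> bool:
    if (protocol, port) in ALLOW_RULES:
        return True
    return DEFAULT_ALLOW          # everything else falls to the default

for pkt in [("tcp", 443), ("tcp", 23), ("udp", 53)]:
    action = "ALLOW" if permit(*pkt) else "DROP"
    print(f"{pkt[0]}/{pkt[1]}: {action}")
```

Configuring a real firewall is exactly the act of moving from one of those
defaults to a rule set like this--which is why vendors can say, "Well, you
configured the system."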
I understand the struggle of the manufacturer, but I don't understand what
seems to be sort of a universal ignoring--not ignorance, but an ignoring--of
basic security. That lets me know, as a security professional, that the
consumer isn't standing up and pounding the shoe on the table and saying,
"Enough is enough." People just keep buying it, knowing it's going to be
broken, knowing that they're not a consumer; they're a guinea pig. And they
still go out and buy it.
Can that be changed? If the manufacturers had liability, would we see
greater security?
Yes, as long as there was that liability. You can't get a corporation to do
anything until it's liable for it. I'll give you an example from Year 2000.
Year 2000 became important to corporations. They spent a lot of money to fix
a very real problem for them. Corporate boardrooms understood that, if they
were proven negligent or didn't exercise "due diligence," it wasn't just the
corporation that would fail; their individual personal fortunes were on the
line vis-à-vis the corporation.
Well, that doesn't exist in a security realm. There's not that sense of
urgency or that sense of corporate liability and boardroom personal liability.
Now, that can change, but it requires the consumers to talk to their . . .
elected officials. . . . You've got to make these clowns responsible for this
stuff. I want software that works as well as my toaster--I set the dial, I
push the button and it makes toast.
We're a ways away from that.
Yes, but it's not impossible. It has been done. There are people out there.
. . . People at SRI, at Mitre and at Flight Recorder . . . have all built
software that is robust. These people have built systems that actually work.
Do they have problems? Yes, but they have a small set of problems, as opposed
to the ubiquitous set of problems in everybody else's software. . . .