READINGS
Searching for Safety
An excerpt from BEYOND ENGINEERING: How Society Shapes Technology
by Robert Pool.
To be published by Oxford University Press (June 1997). Reprinted with permission of Oxford University Press. Copyright (c)1997. All rights reserved. [Federal law provides severe civil and criminal penalties for the unauthorized reproduction, distribution, or exhibition of copyrighted materials.]

Looking back from the vantage point of a post-Three Mile Island, post-Chernobyl world, people sometimes wonder why we ever went ahead with nuclear power. Didn't anyone realize how dangerous it was? Didn't anybody think about the risks to people living close to nuclear plants? Didn't anyone consider the implications of generating so much nuclear waste? These things seem so obvious today.

But it was a different world in the years after World War II. For starters, the environmental consciousness so prevalent now did not exist, nor was there nearly as much concern about small risks to the public health. Indeed, people were cavalier about radioactivity to an extent that seems shocking today. As late as 1958, for example, the Atomic Energy Commission was exploding bombs above ground at its Nevada test site, and many tests in the 1950s created measurable amounts of radioactive fallout that landed outside the test area. After one test, people were warned to stay inside because the wind had shifted and carried a cloud of radioactive dust over a populated area. The exposures were probably too small to be dangerous, but it's impossible to imagine such tests being carried out in the 1990s.

The people in charge of the country's nuclear programs were not unconcerned with risk; they simply looked at it from a different perspective than most do today. No one worried much, for instance, about very small doses of radiation. They knew that humans are exposed to low levels of radiation from a number of natural sources, such as cosmic rays, and figured that a little extra didn't matter much. More importantly, forty and fifty years ago people had a tremendous faith in technology and in their ability to handle technology. They recognized that nuclear reactors were potentially dangerous and generated vast amounts of radioactive waste, but they assumed that nuclear scientists and engineers would find solutions. It wasn't even something they thought to debate (Can we make reactors safe? Can we figure out a way to dispose of nuclear wastes?) but instead was a silent premise that underlay all thinking about risk. Disputes concerning risk were always about "how," never about "can."

It is strange, even discombobulating, to read the descriptions and discussions of nuclear power from that era, so full of enthusiasm and so lacking in self-doubt. Could this really have been us only two generations ago? But what is perhaps even stranger is how little input the public had, or wanted to have, in any of it. Risk was considered a technical issue, one best left to the experts. And for the most part, the engineers agreed. Risk was a question whose answers could be found in facts and figures, in engineering calculations and materials testing. And so for twenty-odd years, the engineers and other experts debated among themselves how best to keep the nuclear genie safely in its bottle.

In the beginning everything was new. No one had built, much less operated, nuclear plants before, and the people worrying about safety were flying by the seats of their pants.

What types of things could go wrong? How likely were they? What were the best ways to prevent them? There was precious little information with which to answer these questions, and so, not surprisingly, the answers that people came up with depended on their biases and points of view. In particular, there were three main groups that influenced the debate on nuclear safety-the AEC, the Advisory Committee on Reactor Safeguards, and the national laboratories-and each had its own personality and predilections.

The AEC wished to promote nuclear power. That was the assignment given to it by the Atomic Energy Act, and if the progress toward a nuclear future seemed too slow, members of the Joint Committee on Atomic Energy were sure to complain. At the same time, however, it was the AEC's responsibility to make sure that nuclear power plants were safe. There was a tension between the two goals-paying too much attention to safety would slow development, while paying too much attention to development might compromise safety-and many thought that safety was the loser. The AEC denied this, and indeed it did impose many regulations that the nuclear industry and the Joint Committee thought were needlessly conservative. On the other hand, the AEC regularly disregarded the advice of its own experts and settled on requirements much less stringent than they would have liked.

Among those experts, the most important were the members of a group of scientists originally assembled in 1947 to counsel the AEC on reactor safety. Known first as the Reactor Safeguards Committee and later as the Advisory Committee on Reactor Safeguards (ACRS), the panel offered the AEC an independent source of expertise on nuclear safety. Its influence grew after 1957, when Congress mandated that it review all applications to build and operate nuclear power plants. The AEC did not have to follow the ACRS's recommendations, but they generally carried a lot of weight, particularly in the early years. The members of the ACRS, being scientists, were most interested in the general question of how to ensure reactor safety, and they helped establish the philosophy underlying the AEC's safety regulations.

Where the ACRS was concerned with the big picture, the national laboratories were involved with the nitty-gritty of reactor engineering. What would fuel elements do if the core heated up past its normal operating temperature? What pressure could a reactor containment vessel be expected to withstand? Would an emergency core-cooling system behave as designed? The engineers at the national labs epitomized the technical approach to reactor safety. They asked questions and set out to answer them. By understanding all the technical details of a reactor, they believed that they could determine whether the reactor was safe. More than anything else, clear, unequivocal data were important.

All three groups-the AEC, the ACRS, and the national labs-were dominated by technological enthusiasts. They all wanted to see nuclear power plants built. But the enthusiasts were of two types, what the sociologist James Jasper calls professional and nonprofessional technological enthusiasts. At the ACRS and the national labs, the people were professional scientists and engineers. Their enthusiasm for nuclear power was tempered by an intimate familiarity with the technical details and a resulting appreciation of how difficult it was to assure safety. "The practice of engineering," Jasper says, "leads to a kind of artisanal pride as well as to technological enthusiasm: engineers are confident that they can build anything, but they want to take the time to build it properly." The AEC, on the other hand, was manned more by nonprofessionals, and the AEC commissioners were more likely to be lawyers or businessmen than scientists. Even the few scientists who served as commissioners generally came from areas outside nuclear physics. Thus the technological enthusiasm at the AEC tended to be a blind faith, unmoderated by the engineering realities.

In the early days of nuclear reactors, safety was assured with a simple scheme: putting the reactors far away from where people lived. The Hanford site, where plutonium was produced for the atomic bomb during World War II, was a large government reservation in a desolate area. If a major accident struck, the only people at risk would be the workers at the plant. In 1950, the Reactor Safeguards Committee formalized this approach with a rule of thumb linking the power of the reactor with an "exclusion radius"-the size of the area around the plant in which people could not live. The radius was calculated so that even in the case of a major accident causing the reactor to spit out all its fuel into the atmosphere, people living outside the radius would not get a fatal dose of radioactivity. The committee's rule of thumb was a blunt instrument. It made no attempt to look at how likely it was that such an accident would occur-or even if such an accident was physically possible-but merely assumed the worst and planned for that.
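
A commonly cited form of the committee's rule set the exclusion distance in miles at about 0.01 times the square root of the reactor's thermal power in kilowatts. The excerpt itself does not give the formula, so treat that constant as an illustrative assumption; a short Python sketch shows how such a rule scales with reactor size:

    import math

    def exclusion_radius_miles(thermal_power_kw):
        # Assumed rule of thumb: R (miles) ~ 0.01 * sqrt(thermal power in kW).
        # The 0.01 constant is illustrative only; the excerpt does not state it.
        return 0.01 * math.sqrt(thermal_power_kw)

    # A small test reactor (~30 MW thermal) versus a large power reactor (~3,000 MW thermal):
    for power_kw in (30_000, 3_000_000):
        radius = exclusion_radius_miles(power_kw)
        print(f"{power_kw / 1000:>6.0f} MWt -> roughly {radius:.0f}-mile exclusion radius")

Under that assumption, a small experimental reactor needs an exclusion zone of a mile or two, while a large power reactor needs one roughly ten times as big, which is exactly the mismatch described below.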

Quite naturally, the larger a plant was, the larger its exclusion radius had to be. For small, experimental reactors, a typical exclusion radius was a mile or two-a workable size. But for the larger plants that would generate electricity for a city, the exclusion radius would be ten times that. And that was a problem. If a reactor had to be surrounded by empty land for ten or twenty miles in all directions, it would be impossible to put one anywhere near a city. A utility would have to pay to transmit its nuclear electricity long distances to its customers. Furthermore, the cost of buying the land for an exclusion zone would add even more to the price. If nuclear power were to be economically practical, a different way to assure safety would have to be found.

Within a year or so, reactor designers at General Electric had come up with a solution: they would put their reactor inside a large steel shell, a "containment" building, which would keep the reactor's radioactive materials from escaping in case of an accident. The safeguards committee accepted this idea, and containment became a standard feature of nuclear plants, beginning with a test reactor that GE built in upstate New York. With containment, the AEC was willing to let reactors be built much closer to the surrounding population, although the commission did insist on keeping a certain distance between them.

To build a containment vessel, the engineers had to figure out the worst possible accident that might occur-the so-called maximum credible accident-and then design the vessel to withstand it. And so began a pattern that would characterize thinking about safety for decades to come. Someone-the reactor designers, the ACRS, the researchers at the national labs-would come up with a scenario for what might go wrong in a reactor, and then the engineers would be asked to design safety features to make sure such an accident didn't jeopardize public safety. In general, little thought was given to how likely such an accident was to occur. If it was physically possible, then it had to be taken into account. And in this approach lay the seeds of a tension between the AEC and its technical advisers that would grow with the size of the nuclear program. The AEC, whose job it was to see that nuclear power plants got built, had to draw the line somewhere. It did not wish to put off building plants while every possible thing that could go wrong was addressed. The ACRS and the national labs did not feel the same pressures. Their natural inclinations were to answer each technical issue as completely as possible. This struck many in the nuclear industry as extreme. A 1966 issue of Nucleonics quoted one industry representative's opinion of the ACRS: "Like Rickover, they say, 'prove it.' So you do, and they say, 'prove that'-and you can go on doing arithmetic forever. . . . They should have to prove the justification of their question. We always have to prove-they can just think, opine. Maybe they should be asked 'where?' 'when?' 'why?'" When the AEC sided with industry, as it often did, the nuclear scientists and engineers in the ACRS and national labs became frustrated with what they saw as a too-careless attitude toward safety.

As the nuclear industry began to sell more and more reactors in the 1960s, the AEC and its advisers struggled to address a host of safety issues. What, for example, were the chances that a reactor's pressure vessel would burst and release large amounts of radioactivity into the containment, and would the containment hold if the pressure vessel did burst? The pressure vessel was a large steel unit that surrounded the reactor core of a light-water reactor. It held the high-pressure water that was fed through the core to carry heat away from the fissioning uranium atoms. If the pressure vessel ruptured, it would release a tremendous amount of highly radioactive-and highly dangerous-material, and it might even send pieces of steel flying that could break through the containment. The issue has remained a tricky one because although engineers know the capabilities of normal steel quite well, they're less comfortable with steel that has been bombarded by neutrons for a number of years. Neutron bombardment can make steel brittle and liable to break under conditions in which it would normally be quite strong.

But the most difficult issues to resolve centered on the so-called loss-of-coolant accident, or LOCA. If the supply of cooling water to a reactor core stops for some reason (a break in a pipe, for example), then part or all of the reactor core can come uncovered, and without water surrounding the core there is nothing to carry off its intense heat. In case of a loss-of-coolant accident, reactors are designed to shut down automatically, or "scram," by inserting all the control rods into the reactor core. The control rods absorb neutrons and stop the chain reaction. Unfortunately, the core stays hot even after the chain reaction is killed: once a reactor has operated for a while, radioactive "fission products" build up in the fuel, and they decay spontaneously, generating heat. The control rods do nothing to prevent this spontaneous decay or the heat it releases. The result is that without cooling, the fuel can get so hot that it melts, even if the chain reaction has been shut down.

Before 1966, the AEC assumed that even if the flow of coolant stopped and the reactor core melted, releasing radioactive gases, the containment building should keep the gases from escaping. But in that year the AEC realized that in a large reactor-1,000 megawatts and more-the fuel could get so hot after a loss-of-coolant accident that it might burn through the concrete floor and into the earth. This was dubbed the "China syndrome," a humorous reference to the direction the fuel would be taking, but not, as some would later think, an allusion to where the fuel would end up. Actually, it seemed unlikely that the fuel would descend much more than 100 feet into the earth before solidifying-and perhaps much less. But whether the fuel made it to China or not was beside the point. Such an accident could breach the containment and release radioactivity into the surrounding area. Even more worrisome was the possibility that a loss-of-coolant accident might cause a break in the containment somewhere besides the floor. After the hot fuel had melted through the pressure vessel but before it had gone through the floor of the containment building, it might somehow generate enough gas pressure to blow a hole in the containment above the ground. This would release much more radioactivity into the atmosphere than would the China syndrome.

The ACRS and AEC agreed that a way had to be found to keep a loss-of-coolant accident from causing a breach in the containment, but they disagreed on the tactics. Several members of the ACRS wanted to explore ways to keep the containment intact even if the reactor core melted. It might be possible, for instance, to build a large concrete structure, a "core-catcher," underneath the containment building. If the core melted through the floor of the containment, it would be captured and held there. But the AEC, under pressure from the industry not to delay the building of nuclear plants by adding extra requirements, wasn't interested. Instead, it would rely on two other strategies. It would demand improved design of cooling systems to make loss-of-coolant accidents less likely. And it would focus on preventing a core meltdown if the flow of coolant stopped. In practice, this second strategy meant an increased emphasis on emergency core-cooling systems, devices that could quickly flood the reactor core with water in case of a loss-of-coolant accident.

This may have seemed like the most direct route to solving the problem of a core melt, but it would make life much more complicated for the AEC and the ACRS. Now the safety of a reactor depended explicitly on engineered safeguards performing correctly in an emergency. In the early days, safety had come from distance. Then it relied on the ability of a piece of steel to stand up to certain pressures. With the large nuclear plants being built in the mid-1960s, neither of these was possible, and the engineers had to guarantee that, no matter what else happened, the core wouldn't melt. To this end, they designed more and more sophisticated systems and added backups to the backups. But would they work as planned? No one knew for sure. The forces generated in a reactor by a loss of coolant, sudden heating, and then a flood of cold water from the emergency core-cooling system would be violent and unpredictable. What would happen, for example, to the fuel rods when they were heated and cooled so brutally? It would take a great deal of testing and studies to find out.

At the time, however, the AEC was cutting back on research into such basic safety questions as how fuel rods would perform in a loss-of-coolant accident. Milton Shaw, the head of the AEC's Division of Reactor Development and Technology, was convinced that such safety research was reaching the point of diminishing returns. An old Rickover protege, Shaw saw light-water reactors as a mature technology. The key to the safety of commercial power plants, he thought, was the same thing that had worked so well for the navy reactor program: thick books of regulations specifying every detail of the reactors, coupled with careful oversight to make sure the regulations were followed to the letter. Shaw rejected the doomsday scenarios that the ACRS had been working with as academic fantasies. The worst-case loss-of-coolant accidents, for example, envisioned a major cooling pipe breaking in two. It was a scientist's approach to safety: figure the maximum credible accident, prepare for it, and everything else will be automatically taken care of. But Shaw contended that nuclear accidents were more likely to be the result of little breakdowns that snowballed. Take care of the little things, and the big things would take care of themselves-that was Shaw's safety philosophy, and he was in charge of all the AEC's safety research.

Because Shaw believed the light-water reactor to be a mature technology, he had turned most of his attention to what he thought would be the next-generation reactor: the breeder. He pushed the AEC to turn its attention in that direction and leave the light-water reactor to the manufacturers and the utilities. And he applied much of the AEC's spending on safety to the breeder. In 1972, for example, with a hundred light-water reactors either built or on order in the United States and no commercial breeders, Shaw split his $53 million safety budget down the middle-half for light-water and half for the breeder.

Shaw's autocratic personality and attitudes toward safety quickly estranged many of the personnel working on safety at the national labs. Like the scientists on the ACRS, the engineers at the labs could see many things that might go wrong in a reactor, and they thought it important to track them down. It was in their nature to want answers, and they couldn't understand how Shaw could ignore all these potential problems with the reactors. Many came to believe that Shaw was consciously suppressing safety concerns to protect the interests of the reactor manufacturers. Eventually, this dissension would explode into a very public controversy.

It began in the spring of 1971 when several men associated with the Union of Concerned Scientists, originally founded to oppose various weapons such as anti-ballistic missiles, challenged the licensing of a nuclear plant in Plymouth, Massachusetts. In the argot of the AEC, they became "intervenors," which meant they were full parties in the licensing hearings. The three intervenors, Daniel Ford, James MacKenzie, and Henry Kendall, began looking for ammunition with which to oppose the reactor, and they soon settled on the emergency core-cooling system after discovering what seemed to be damning evidence against it. A few months earlier, the AEC had performed scale-model tests of an emergency core-cooling system, and the system had failed miserably. The emergency cooling water flowed out through the same pipe break that had allowed the original coolant to escape. Because the tests were not very realistic, no one expected a real emergency core-cooling system to perform the same way, but the failure underscored a striking fact: there was no hard evidence that such a system would work the way it was supposed to.

As Ford, MacKenzie, and Kendall delved deeper into the physics and engineering of emergency core cooling, they decided to go directly to the source of much of the information: Oak Ridge National Laboratory. There they were surprised to find many of the researchers sympathetic to their arguments. The Oak Ridge scientists did not believe nuclear reactors were dangerous, but neither did they believe there was enough evidence to deem them safe. They were upset with the AEC's level of funding for safety research, and they were particularly incensed that funding for a series of experiments on fuel rod failure during a loss-of-cooling accident had been canceled in June 1971. The Oak Ridge researchers spoke at length with the three intervenors and sent them home with stacks of documents.

Meanwhile, concerns about the emergency core-cooling system were surfacing at hearings for several other reactors around the country, and, hoping to calm the criticism, the AEC decided to hold national hearings on the issue. The hearings began in January 1972 and ran on and off until July 1973, and they were an embarrassment for the commission. A number of engineers from the national laboratories testified that their concerns about safety had been ignored or papered over by the AEC, and Shaw himself came off as arrogant, overbearing, and unwilling to listen to criticisms. By the end of the hearings it was clear that the AEC could not guarantee that emergency core-cooling systems would work correctly in the worst-case scenario, the complete break of a pipe supplying cooling water. Was that important? The AEC thought not, declining to make any major changes to its rules governing the emergency core-cooling system. But the revelations created an impression among many that the AEC's reactor development division was ignoring its own experts and catering to the wishes of the nuclear industry. In May 1973 the AEC took responsibility for safety research on light-water reactors from Shaw's office and placed it in a newly formed Division of Reactor Safety Research. Shaw resigned a few weeks later. The following year the AEC itself was dismembered, its regulatory arm being reincarnated as the Nuclear Regulatory Commission and its research and development activities handed over to the Energy Research and Development Administration, later part of the Department of Energy.

The AEC was hoist with its own petard. It had based its safety thinking on maximum credible accidents because they were a convenient analytical tool. But they had come to take on a life of their own, as many of the safety experts came to see their job as making sure that a reactor could survive a worst-case accident without risk to the outside. Because safety had been presented as an absolute, there was no way to draw a line and say, "Past this we won't worry about any more what-ifs." And when Shaw did try to draw a line, he was an easy target for people opposing nuclear power plants since they could always point to questions that had not been answered. It made no difference how unlikely the maximum credible accident might be; the logic of the AEC's approach to safety, taken to its extreme, demanded that such an accident be accounted for, either prevented from happening or kept from doing any damage outside the nuclear plant.

If any lesson can be drawn from these first two decades of thinking on nuclear safety, it is the difficulty of trying to ensure safety on the run. Nuclear power was evolving rapidly. In the mid- and late 1960s, the size of the reactors being offered jumped yearly as reactor manufacturers competed to offer what they thought would be ever-increasing economies of scale. The part of the plant outside the reactor never became standardized because each utility that built a nuclear plant chose its own designers, suppliers, and builders. Meanwhile, the AEC's ideas on safety were evolving too. Each year the commission demanded something different and something more than the year before. The result was a confused and confusing situation, with the nuclear industry complaining it was being harmed by overzealous regulation and the AEC's advisers at the ACRS worrying that too little was being done to make sure the plants were safe. Often it seemed the AEC was trying to split the difference. It would demand extra precautions from industry, but only if those precautions were not so expensive or time-consuming as to slow the growth of nuclear power. It was a crude sort of cost-benefit policy-making: by setting safety policy so that the yelps from industry and the ACRS were approximately equal in volume, the AEC could balance the benefits of extra safety measures against their cost.

In the late 1960s, with the appearance of the China syndrome, people in the nuclear community began to realize that they needed a new way to think about reactor safety. It no longer seemed possible to assure absolute safety-to guarantee that in case of a worst-case accident no radiation would escape to threaten the public. Safety now depended on the performance of active safety features such as the emergency core-cooling system. If they failed, there was a chance that the containment could be breached and radioactive materials get out. "We then had to change the basis of our claim that reactors were 'safe,'" recalls Alvin Weinberg, then the head of Oak Ridge National Laboratory. "Instead of claiming that because reactors were contained, no accident would cause off-site consequences, we had to argue that, yes, a severe accident was possible, but the probability of its happening was so small that reactors must still be regarded as 'safe.' Otherwise put, reactor safety became 'probabilistic,' not 'deterministic.'"

The approach that the nuclear community gradually adopted was called probabilistic risk assessment. It evaluated risk by taking into account both the probability of a certain accident occurring and the consequences of that accident. Thus an accident that was expected to occur only once in a million years of reactor operation and that might kill a thousand people could be treated as equivalent to an accident calculated to happen once in a thousand years with only one death expected. Each worked out to one expected fatality per thousand years of reactor operation. This was something that engineers could appreciate. Instead of endless arguments over whether some potential mishap was worth worrying about, they could establish a numerical target for reactor safety-say, one expected death from radiation release for every thousand years of reactor operation-and set out to reach that goal. To pronounce a reactor safe, engineers would not have to guarantee that certain accidents could never happen, but only to ensure that they were very unlikely.
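
The equivalence described above is just an expected-value calculation. As a minimal Python sketch, using the hypothetical numbers from this paragraph:

    def expected_deaths_per_year(accident_probability_per_year, deaths_per_accident):
        # Risk measured as frequency times consequence.
        return accident_probability_per_year * deaths_per_accident

    rare_but_severe = expected_deaths_per_year(1 / 1_000_000, 1000)   # once in a million years, 1,000 deaths
    frequent_but_mild = expected_deaths_per_year(1 / 1_000, 1)        # once in a thousand years, 1 death
    print(rare_but_severe, frequent_but_mild)  # both 0.001: one expected death per thousand reactor-years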

The probabilistic approach had the advantage of putting the maximum credible accident to rest, but it had several weaknesses. One, it explicitly acknowledged that major accidents could happen. Although it was fiction to claim that even a maximum credible accident posed no threat to the public, it was a useful fiction. Nuclear plants could be considered "safe" in a way the public understood: engineers said that major radiation-releasing accidents were impossible. With probabilistic risk assessment, the engineers now said that major accidents were indeed possible. And although such an accident might be a one-in-a-million shot, much of the public didn't find this comforting.

But the major weakness of probabilistic risk assessment was a practical, technical one. Calculating the probabilities for various accidents was next to impossible. In theory, it was done by identifying the possible chains of events that might lead to an accident (a break in a pipe, the failure of a sensor to detect it, an operator throwing the wrong switch, a back-up generator not starting up) and estimating how likely each of the events in the chain was. By multiplying the probabilities for the separate events, one calculated the probability that the whole chain of events would occur. Then by adding the probabilities for various chains of events, one arrived at the probability for a given accident. But because nuclear power was still a new technology, identifying the possible chains of events and estimating the probability of each event involved a good deal of guesswork. The reactor designers could list all the sequences they could think of that might lead to an accident, but inevitably they would overlook some, perhaps many, of them. And while experience in other industries might help in estimating how likely it was that a pipe would break or a pump would fail, much of the equipment in a nuclear plant was unlike that found anywhere else, and it operated under unique conditions. Inevitably, until there was much more experience running nuclear power plants, much of the probabilistic risk assessment would be done by guess and by gosh.
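
The bookkeeping this paragraph describes can be sketched in a few lines of Python. The chains and probabilities below are invented purely for illustration, and, as the paragraph notes, the answer is only as good as the list of chains and the guesses fed into it:

    # Each accident chain is a sequence of events that must all occur,
    # with guessed probabilities (illustrative numbers only).
    chains = {
        "pipe break, sensor fails, operator throws wrong switch": [1e-3, 1e-2, 1e-1],
        "pipe break, backup generator fails to start": [1e-3, 1e-2],
    }

    def chain_probability(event_probabilities):
        # Multiply the probabilities of the separate events (assumed independent).
        product = 1.0
        for p in event_probabilities:
            product *= p
        return product

    # Add the probabilities of the individual chains to get the probability of the accident.
    accident_probability = sum(chain_probability(events) for events in chains.values())
    print(f"estimated accident probability: {accident_probability:.1e} per reactor-year")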

This became brutally clear after the release of the so-called Rasmussen report, a 1974 study that attempted to calculate the likelihood of a major nuclear accident. The AEC had hired MIT nuclear engineering professor Norman Rasmussen to put the study together, and had asked him to do it quickly. The commission wished to fight rising concern over the safety of nuclear power with an objective report, which the AEC fully expected would validate its position. Rasmussen and coworkers drew data from two reactor sites that were taken as representative of U.S. nuclear plants in general, and then analyzed the data extensively. Much of what the researchers did was extremely valuable. By envisioning various ways an accident might happen, for instance, they identified types of accidents that had been ignored in the past, such as mid-sized loss-of-coolant accidents, in which the coolant was lost from the reactor core gradually instead of suddenly, as would happen with a major break in a pipe or failure of the pressure vessel. The report focused attention, for the first time, on the types of accidents that could be caused by the coming together of several relatively minor mishaps, none of which would be a problem by itself. But for obvious political reasons, the AEC was not interested in highlighting this part of the Rasmussen report. Instead, in its public discussions, it focused on a much weaker and more speculative part, the estimates of the probabilities of accidents in nuclear plants.

Indeed, even before the study was released, AEC officials were boasting that it showed a core meltdown could be expected only once in every million years of reactor operation, and that a major accident only once in one to ten billion years of reactor operation. When the report was officially made public, the AEC summarized its findings by saying that a person was about as likely to die from an accident at a nuclear power plant as from being hit by a meteor.

The Rasmussen report had its desired effect. The media reported on it widely, accepting its conclusions at face value. But a number of scientists took a closer look at it and didn't like what they saw. Some of them were committed skeptics or opponents of nuclear power, but a particularly convincing criticism came from a respected, nonpartisan source-the American Physical Society, the major professional organization for physicists. The APS assembled a study group to review Rasmussen's work, and they returned a mixed appraisal. Although they found Rasmussen's use of probabilistic risk assessment valuable in pinpointing ways that accidents might happen, they had serious reservations about how he had estimated the likelihood of major accidents. The group concluded that the report contained a number of flaws that made its conclusions much more optimistic than they should have been. Nuclear power might be safe, but few believed that the Rasmussen report proved it.

Nothing the critics said, however, could match the dramatic repudiation of the report delivered less than five years later in Middletown, Pennsylvania. On March 28, 1979, Unit 2 of the Three Mile Island nuclear plant had a major accident. The reactor core melted partially, and some radioactivity was released into the atmosphere, although most of it stayed inside the containment building. The probabilistic risk assessment in the Rasmussen study had concluded that a meltdown in U.S. light-water reactors could be expected only once in every 17,000 reactor-years (not once in every 1,000,000 reactor-years as the AEC had advertised before the report was released). With only the few dozen nuclear plants that were in operation at the time, the United States might expect to go centuries without a meltdown. It was instead only years.

The TMI accident triggered an orgy of rethinking on nuclear safety. It shocked the public, but it shocked the nuclear establishment equally. Roger Mattson, who directed the Nuclear Regulatory Commission's division of systems safety, described it this way:

There is a difference between believing that accidents will happen and not believing it. The fundamental policy decision made in the mid-1970s-that what we had done was good enough, and that the goal of regulation ought to be to control the stability of the licensing process-that the body of requirements was a stable and sufficient expression of "no undue risk" to public health and safety-and if new ideas come along we must think of them as refinements on an otherwise mature technology-that's been the policy undertone. That's the way I've been conducting myself . . . . It was a mistake. It was wrong.

More than any other event, the partial meltdown at Three Mile Island has shaped recent thinking on risk-and not just in the area of nuclear power. The lessons that emerged from analyzing what went wrong at TMI have application to almost any risky, complex technology.

Before Three Mile Island, most of the AEC's and NRC's safety efforts had been aimed at the equipment. The prevailing philosophy was, make sure that everything is designed, built, and maintained properly, and safety will follow. And indeed, a piece of malfunctioning equipment did play a key role in the accident. But the Kemeny Commission, the presidential board that investigated TMI, concluded that problems with equipment were only a small part of it. More worrisome was the performance of the operators running the reactor. They had been poorly trained and poorly prepared for an emergency of the type that caused the accident. Not only did they not take the correct steps to solve the problem, but their actions made it worse.

The shortcomings of the operators were, however, just part of a more general failing. The Kemeny Commission argued that running a nuclear power plant demands different sorts of management and organizational capabilities than operating a fossil-fuel plant. Utilities tend to run coal- and oil-fired plants at full power until something breaks, then fix it, and start up again. There is little concern about safety, little concern about preventive maintenance, little concern about catching problems before they happen. This works fine for these plants, which are relatively simple and endanger no one when they break down. But many utilities-not just the one running TMI-had carried over these attitudes into the management of their nuclear plants, and that didn't work. Successfully operating a nuclear plant demanded an entirely different institutional culture (a culture we'll examine more closely in chapter 8).

But perhaps the most important lesson from TMI was the realization that big accidents can be triggered by little things. Until then, much of the thinking on nuclear safety had focused on responding to major failures, such as a break in a large pipe. By piecing together the chain of events that led to the TMI accident, investigators showed how a number of seemingly minor foul-ups can create a major disaster.

It began at 4 a.m. when the pumps sending water into the reactor's steam generators shut down as a result of a series of human errors and equipment malfunctions. The TMI reactor was a pressurized-water type, which meant there were two main systems of water pipes. One, the reactor cooling system, carried water through the reactor, to a steam generator, and back to the reactor. This cooling water carried heat away from the reactor core and used that heat to boil water in the steam generator. The second system of pipes sent water into the steam generator, where it was turned into steam, which powered a turbine. After passing through the turbine, the steam was condensed into water, which then was routed back to the steam generator. It was this second system that stopped working when the pumps shut down.

With the secondary system shut down, the primary cooling system had no way to pass on the heat from the reactor core, and the water in that primary system began to heat up and expand, increasing the pressure in that system. As planned, the reactor immediately shut down, its control rods slamming down into the reactor core to absorb neutrons and kill the chain reaction. At about the same time, an automatic valve opened to relieve the pressure in the primary cooling system. So far the plant had responded exactly as it was supposed to. But once the relief valve had released enough pressure from the coolant system, it should have closed again. And indeed, an indicator in the control room reported that a signal had been sent to close the valve. But, for some reason, the valve did not close, and there was no way for anyone in the control room to know that. So for more than two hours, first steam and then a mixture of water and steam escaped through the open valve. This caused the pressure to continue dropping in the primary cooling system. After a few minutes, the pressure had dropped enough to trigger the startup of high-pressure injection pumps, which sprayed water into the system. But because the operators misread what was happening in the reactor, they shut down one pump and cut back on the other to the point where it could not make up for the water being lost as steam through the open relief valve. They thought they were doing what they had been trained to do, but they were making matters worse. Gradually, the coolant in the primary system became a turbulent mixture of water and steam.

Then, an hour and a half into the accident, the operators decided to turn off the pumps circulating the coolant through the reactor to the steam generators and back. Again, because the operators didn't understand exactly what was going on inside the reactor, they believed they were following standard procedure, but cutting off the reactor pumps removed the last bit of cooling action in the core. Soon half the core was uncovered, and its temperature shot up, melting some of the fuel and releasing highly radioactive materials. Some of the radioactive gases escaped from the containment building, but fortunately not enough to threaten anyone in the surrounding areas. Finally, nearly two and a half hours after the accident started, someone figured out that the pressure relief valve had never shut and closed it off. It took another 12 hours to reestablish cooling in the core and start to bring the system back to normal temperatures.

It was a chain of events that no one performing a probabilistic risk assessment could have imagined ahead of time. Upon hearing the chain of events described, it's easy to think the operators were mostly to blame; they did, after all, overlook the open pressure relief valve, turn down the injection pumps, and shut off the reactor coolant pumps. But there was plenty of blame to go around. The designers had failed to include anything in the control room to show whether the pressure relief valve had closed; the one indicator told only that a signal had been sent to close it. The NRC had known about a similar accident 18 months earlier at another reactor, where a relief valve had stuck open, but it had not notified other nuclear plants that the valve might be a problem. The plant management had operated the reactor in such a way that little problems seldom got corrected, and these little things created a sloppiness that contributed to the accident. Prior to the accident, for instance, there had been a steady leak of reactor coolant from a faulty valve; then during the accident, when the operators saw abnormal readings that could have told them the pressure relief valve was open, they thought the readings were caused by the leak instead.

But more than anything else, the culprit was the complexity of the system. It created a situation where a number of seemingly minor events could interact to produce a major accident, and it made it next to impossible for the operators to understand what was really going on until it was too late.

The political scientist Aaron Wildavsky suggested that because of this complexity and the way a system's many parts interact, adding safety devices and procedures will at some point actually decrease safety. The TMI accident offers a number of examples of how this works. The control room, for instance, had more than 600 alarm lights. Each one of them, considered by itself, added to safety because it reported when something was going wrong. But the overall effect in a serious accident was total confusion, as so many alarms went off that the mind could not easily grasp what was happening.

Charles Perrow, a sociologist at Yale, has taken this sort of reasoning one step further and argued that such complex, tightly interconnected technologies as nuclear power are, by their nature, unsafe. With so many components interacting, there are so many different ways an accident can happen that accidents are an inevitable feature of the technology-what Perrow calls "normal accidents." The technology cannot be made safe by adding extra safety systems, for that would only increase its complexity and create more ways that something could go wrong.

It's impossible to do justice to Perrow's complex, tightly argued thesis in just a few sentences, but in essence he argues that the requirements for successful control of certain technologies contain inherent contradictions. Because what happens in one section of a nuclear plant can dramatically affect events in others, some central control is needed to make sure that actions in one place don't cause unanticipated consequences in another. This control might be in the form of a central management that approves all actions or in the form of a rigid set of rules governing actions throughout the plant. On the other hand, because the technology is so complex and unpredictable, operators need the freedom to respond quickly and imaginatively to special circumstances as they arise. Both rigid central authority and local discretion are needed, and, Perrow says, it is impossible to have both. Thus a nuclear plant will always be vulnerable to one type of accident or another-either one caused by a failure to adapt quickly to an unanticipated problem, or else one created by not coordinating actions throughout the plant.

Perrow argues that a number of technologies besides nuclear power face the same inherently contradictory demands: chemical plants, space missions, genetic engineering, aircraft, nuclear weapons, and the military early warning system. For each, he says, accidents should be considered not as anomalies but as a normal part of the process. Their frequency can be reduced by improved design, better training of personnel, and more efficient maintenance, but they will be with us always. Perrow goes on to suggest that society should weigh the costs of these normal accidents against the benefits of the technology. For chemical plants, the costs of accidents are relatively low and are usually borne by the chemical companies and their workers, while the cost of shutting down chemical plants would be rather high. There is nothing to replace them. But nuclear power is different, Perrow says. The costs of a major accident would be catastrophically high, while the cost of giving up nuclear power would be bearable. Other ways of generating electricity could take its place.

So far, of course, that hasn't happened. But what has happened is that the way people think about safety has changed, particularly in the nuclear industry, but in others as well. No one believes any longer that it is possible to engineer for complete safety, to determine the maximum credible accident and then assure that it won't threaten anyone. The best that can be done is to try to make dangerous accidents very unlikely.

People have also come to appreciate how complexity changes the risk equation. It makes risk harder to calculate by making it difficult to understand all the ways that things can go awry. But equally important, complexity can amplify risk. The more complex a technology, the more ways something can go wrong, and, in a tightly coupled system, the number of ways that something can go wrong increases exponentially with the number of components in the system. The complexity also makes a system more vulnerable to error. Even a tiny mistake may push the system to behave in strange ways, making it difficult for the operators to understand what is happening and making it likely they'll make further mistakes.
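
The claim about exponential growth can be illustrated with simple counting; this is arithmetic about an abstract system with an arbitrary number of components, not a model of any real plant:

    from math import comb

    for n in (10, 50, 100):                # number of components (arbitrary)
        pairs = comb(n, 2)                 # possible two-way failure interactions
        combos = 2 ** n - 1                # every nonempty combination of failed components
        print(f"{n:>3} components: {pairs:>5} failure pairs, about {combos:.1e} failure combinations")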

Since Three Mile Island, the Nuclear Regulatory Commission has tried to assure safety by looking to the one truly successful model in the history of nuclear power: Rickover's nuclear navy. This has led to ever more rules and specifications covering ever more minor matters, along with mountains of paperwork intended to assure the quality and design of the different components of a nuclear power plant. By all accounts, this has rid the industry of much of the sloppiness that plagued it before TMI.

