Why Agencies Get Things Terribly Wrong
By James Kwak
There has been a lot of criticism of regulatory agencies in the past couple of years, from the Office of Thrift Supervision and the Securities and Exchange Commission (Madoff who?) to the Minerals Management Service. But most of the people in these agencies are not evil; on the contrary, I believe (without a ton of evidence in support at the moment) that a majority are conscientious, hard-working, and civic-minded, and a significant minority are actually quite good at what they do. So why do they get things so wrong?
A few days ago, Leslie Kaufman of The New York Times wrote an article describing how the Fish and Wildlife Service “signed off on the Minerals Management Service’s conclusion that deepwater drilling for oil in the Gulf of Mexico posed no significant risk to wildlife.” This sounds like classic incompetence, or corruption, or both.
But the report itself, it seems, was not so far off, at least in its details. The report assessed spills of up to 15,000 barrels of oil. As Kaufman paraphrases,
“In its 71-page biological assessment, the Minerals Management Service concluded that the chances of oil from a spill larger than 1,000 barrels reaching critical habitat within 10 days could be more than 1 in 4 for the piping plover and the bald eagle, as high as 1 in 6 for the brown pelican and almost 1 in 10 for the Kemp’s ridley sea turtle. When the model was extended to 30 days, the assessment predicted even higher likelihoods of habitat pollution. . . .
“‘Heavily oiled birds are likely to be killed,’ the assessment said.”
Fifty-one days after the well explosion, the amount of oil spilled is probably somewhere between one and three million barrels.
So what happened? Probably two things.
First, someone at MMS decided that they would only model spills of up to 15,000 barrels, even though BP’s permit to drill the well estimated a worst-case scenario of 162,000 barrels per day.
Second, the local FWS office “considered that any likelihood under 50 percent would not be enough to require the protections of [the] office.”
This second decision seems incredibly stupid. After all, why 50 percent? And let’s say you review four reports for four different things, each of which says there is a 25 percent chance of something bad happening. Assuming those risks are independent, the chance that at least one bad thing happens is about 70 percent. You always rule out the possibility of stupidity at your peril. But another possibility is that FWS had been told it could only act in cases where the risk was greater than 50 percent, because some political appointee had decided that that was the meaning of some statute.
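The arithmetic behind that “about 70 percent” is just the complement rule: the chance that at least one harm occurs is one minus the chance that none do. A minimal sketch, using the hypothetical four independent 25-percent risks from above:

```python
# Chance that at least one of several independent risks materializes.
# Hypothetical illustration: four reviews, each reporting a 25% chance of harm.
risks = [0.25, 0.25, 0.25, 0.25]

p_none = 1.0
for p in risks:
    p_none *= (1 - p)  # chance this particular harm does NOT occur

p_at_least_one = 1 - p_none
print(f"{p_at_least_one:.0%}")  # → 68%
```

So a 50-percent-per-report threshold can wave through a portfolio of risks whose combined probability of disaster is well above 50 percent.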
In other words, it doesn’t take a lot of meddling to produce these terrible results. One political appointee or middle manager sympathetic to industry sets the parameters of a study. Another political appointee sets the threshold for action. And look what happens.
It’s like saying that all regulations have to pass cost-benefit analyses, and then setting the rules for how to do those analyses. (In general, it’s much harder to measure the benefits of regulation, because they involve the value of, say, clean air, than the costs of regulation.) Or like saying you have to discount future lives at a rate of 7 percent per year, which makes it possible to justify putting any amount of toxins into the ground (or into the plastic products we eat out of), because the health effects are decades away.
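To see how much work that 7 percent does, consider standard exponential discounting: a harm t years out is valued at 1/(1+r)^t of the same harm today. A minimal sketch with hypothetical numbers (a harm valued at 1.0 today, the 7 percent rate mentioned above):

```python
# Present value of a future harm under exponential discounting.
# Hypothetical numbers: harm valued at 1.0 today, 7% annual discount rate.
def present_value(value, rate, years):
    return value / (1 + rate) ** years

for years in (10, 50, 100):
    pv = present_value(1.0, 0.07, years)
    print(f"a harm {years} years out counts as {pv:.4f} of a harm today")
```

At 50 years the discount factor falls below 0.04, and at 100 years below 0.002, which is how health effects that arrive decades later can be made to weigh almost nothing against present-day compliance costs.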
We all know what the results look like.