Hey, Where’s My Free Advance Copy of Superfreakonomics?
Just kidding. I don’t have time to read it anyway (nor am I all that interested).
In case you’ve missed it, there has been an enormous controversy (by blogosphere standards) over a chapter in Superfreakonomics (to be released tomorrow, I think) on climate change, carbon reduction, and geo-engineering. Brad DeLong has the most coverage (I believe this was his first post; read backwards from there), including links to some people who are supportive of the book. The summary is that a number of people have accused Levitt and Dubner of saying silly things about climate change (bad), accepting an “expert’s” opinion without doing due diligence (more bad), and possibly distorting the opinion of another expert (very bad), with the assumed goal of being contrarian and controversial. Levitt and Dubner disagree. Paul Krugman has some interesting thoughts on the dynamics involved.
This did, however, make me think a little about the difference between blogs and books. [Note: After finishing this post, which is over 1,300 words, I realized it is not as interesting as I thought it would be. So feel free to go do something else fun.]
On the Internet, it is fairly common for people to cite sources without investigating them thoroughly. How do I know this? Well, several times I have clicked through people’s footnotes and found that the sources they cited did not in fact say what they were purported to say. For example, I was writing a post about health care and the curious fact that Republicans have converted themselves into defenders of Medicare. I found an article claiming that John Boehner had, while George W. Bush was president, endorsed exactly the types of Medicare spending reductions that are in the bills in Congress. Aha! I thought. But when I clicked through to the source, it was an anodyne press release praising Bush’s entire proposed budget, of which the Medicare spending reductions were one of no doubt thousands of line items. I actually thought for half a second about just citing the article, because that would be convenient, but then I realized that of course I can’t do that. It’s sloppy not to check the source; it’s dishonest to check the source and then pretend you didn’t.
And so I’ve been bothered by a four-part takedown of Megan McArdle that Thomas Levenson wrote. (Here’s part four, via DeLong.) Here’s the part that bothers me:
“Here’s the deal: in science journalism — in any attempt to write about technical material for the public — it’s not enough simply to read an abstract or even the whole piece and call it done.
“You can’t just read the paper and assume – unless you are genuinely expert in that subdiscipline of the field you wish to cover, and often not even then – that you know what its authors actually have done and what it means. [...]
“So what you do if you are a properly trained and ethical science journalist/popular writer is read first, of course, with care and attention to all the places you either or both don’t understand and/or get the sense of an important subtlety…and then you call.
“You talk to someone, lots of someones if necessary.
“You get people in the field to explain what they are doing; you allow yourself to appear dumb to yourself; [...] you ask simple questions, and then more complicated ones, until you and your interlocutor agree you’ve got what you need.
“You have to persist — and if someone says check out this or that, you do, looking up the papers if necessary and then calling back…and so on. You do what a good reporter does: you cover the story.”
Why does it bother me? Because I do what Levenson accuses McArdle of: I cite papers after reading through them (and sometimes I only skim certain bits), without talking to the author, and certainly without talking to other people in the field (although I may read their papers). (Levenson, by the way, is a professor of science writing at MIT.) I think I’m more careful than the average blogger, although two of my habitual critics are sure to disagree. If I don’t know what a word means, I look it up; if someone’s explanation doesn’t make logical sense to me, I go over it until it does (or I don’t use it). I check people’s sources (if I can do it online — I don’t put off blog posts so I can go to the library) before I repeat their facts. But I do write about things I’m not an expert in, and I click Publish before becoming an expert.
For the most part, I think this just comes with the territory. I tend to have a utilitarian moral sensibility, and I believe I am doing more good than harm here, even if I make a mistake here or there. The Internet does have the nice property of exposing people’s mistakes pretty quickly, often in the comment stream. But there is the problem that Megan McArdle has a much bigger audience than Thomas Levenson, and, like an urban myth in a mass email, error can spread much faster than truth can catch up with it. And I have the nagging sense that I have set my standards where they are convenient for me, not where they are best for the world. (But as I said, I’m no Kantian.)
Anyway, to get back to our subject, it seems to me that Levitt and Dubner’s critics are accusing them of bad blogging — but doing it in a book. When someone writes a blog post (or a newspaper op-ed) that is obviously a piece of advocacy where the author has mined the facts to find whatever supports his or her argument, a few people blast it in what has come to be known as a “takedown,” and then everyone moves on. If it’s in a book, though, then things get more serious.
I can think of four reasons for this.
- First, a book is disproportionate to its reviews. All Internet posts are formally equal, even if some people have bigger audiences than others; but no one is going to write a whole book rebutting Superfreakonomics, and if someone did, it wouldn’t sell as many copies.
- Second, a book is meant to be read by many people who do not follow debates on the Internet. With a book, the authors are reaching beyond the presumably skeptical and sophisticated audience of the blogs, out to the “general audience,” where errors can potentially do more damage.
- Third, books last in ways that Internet posts don’t. (At least they are assumed to.) Books are assumed to be serious, well-researched, and fact-checked; blog posts are assumed to be none of those things. Books are more likely to be cited in Congressional testimony than their meticulous takedowns on the Internet.
- Fourth, authors might stir up controversy in a book in order to generate sales.
In other words, it’s because The Book has a special place in our cultural environment. What’s ironic, however, is that few people read books anymore. Although Superfreakonomics will no doubt do well, people in publishing have told me that 50,000 copies sold will pretty much guarantee you a spot on the nonfiction bestseller list. By contrast, our Financial Crisis for Beginners page has gotten almost 200,000 pageviews, and a quick post on health insurance rescission that I banged out in a few minutes got over 80,000 pageviews on one day, thanks to the Huffington Post. (And our Atlantic article got over a million pageviews by mid-summer.)
So … I don’t really have a conclusion here. I think it’s good that people care about whether Levitt and Dubner cited their sources accurately. But I also think that this applies equally (or almost equally) to online writing. And I worry that there really is no good mechanism to enforce accuracy on the Internet. Even among print newspapers, I’ve noticed that op-ed articles are not fact-checked, and some print magazines don’t fact-check either. Blogs, of course, have never been fact-checked. Counting on writers’ internal sense of duty isn’t going to work. And the marketplace of ideas is better at valuing heat than light. Ultimately the Superfreakonomics controversy is a sign of a much, much bigger problem, and one for which I have no solution.
By James Kwak