Wednesday, May 1, 2013

More fake data promoting deficit hysteria

Ezra Klein:
You’ve heard about all the problems with Reinhart and Rogoff. But how about the problems with Baker, Bloom and Davis?

On Sunday, Bill McNabb, Chairman and CEO of Vanguard, published an op-ed in the Wall Street Journal arguing that “since 2011 the rise in overall policy uncertainty has created a $261 billion cumulative drag on the economy (the equivalent of more than $800 per person in the country).” This is proof, McNabb says, that “developing a credible, long-term solution to the country’s staggering debt is the biggest collective challenge right now.”

Specifically, the policy uncertainty McNabb is looking at comes from “the debt-ceiling debacle in August 2011, the congressional supercommittee failure in November 2011, and the fiscal-cliff crisis at the end of 2012.” There’s no doubt that these episodes hurt the economy.

But the Vanguard study, McNabb says, is based on the “invaluable work” of Stanford University’s Nicholas Bloom and Scott Baker and the University of Chicago’s Steven Davis. The Bloom, Baker and Davis measure of policy uncertainty gets a lot of attention — but it’s shot through with holes.

[Chart: Policyuncertainty.com]


The best work on this has been done — once again — by Wonkblog contributor and Roosevelt fellow Mike Konczal. The key point is that the measure constructed by Bloom, Baker and Davis is driven not so much by policy uncertainty as by talking points that include the word “uncertainty.”
Permit me a long block quote from Konczal’s piece:
How do they construct the search of newspaper articles for their index, which generates a lot of the movement?
Their news search index is constructed with four steps. They first isolate their search to a set of articles from 10 major newspapers (USA Today, the Miami Herald, the Chicago Tribune, the Washington Post, the Los Angeles Times, the Boston Globe, the San Francisco Chronicle, the Dallas Morning News, the New York Times, and the Wall Street Journal). They then search articles for the term “uncertainty” or “uncertain.” They then filter again for the word “economic” or “economy.” With economic uncertainty flagged, they then filter again for one of the following words to identify government policy: “policy,” “tax,” “spending,” “regulation,” “federal reserve,” “budget,” or “deficit.”
See the problem? We don’t know what specific stories are in their index; however, we can use their search terms listed above to find which articles would have likely qualified. Let’s take a story from their first listed paper, USA Today, “Obama taking aim at GOP pledge on campaign trail,” from August 28, 2010 (for the rest of this post, I’m going to underline the words in quotes that would trigger inclusion in their policy uncertainty index):

Brendan Buck, a spokesman for the House GOP lawmakers who crafted the pledge, said “it’s laughable that the president would try to lecture anyone on.” [....] Buck said the pledge was developed to address voter worries about high unemployment and record levels of government spending and debt.
“While the president has exploded federal spending and ignored Americans who are asking, ‘Where are the jobs?’, the pledge offers a plan to end the economic uncertainty and create jobs, as well as a concrete plan to rein in Washington’s runaway spending spree,” Buck said.
Spokespeople for the conservative movement tell reporters that President Obama’s policies are causing economic uncertainty. Reporters write it down and publish it. Economic researchers search newspapers for stories about economic uncertainty and policy, and create a policy uncertainty index out of those talking points.
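To make the filter mechanics concrete, here is a minimal sketch of the three keyword passes Konczal describes, assuming simple case-insensitive substring matching; the function and variable names are illustrative, not the authors’ actual code.

```python
# Illustrative sketch of the Baker/Bloom/Davis news filter as Konczal
# describes it; the matching details are assumptions, not the authors' code.

UNCERTAINTY_TERMS = ("uncertainty", "uncertain")
ECONOMY_TERMS = ("economic", "economy")
POLICY_TERMS = ("policy", "tax", "spending", "regulation",
                "federal reserve", "budget", "deficit")

def matches_index(article_text: str) -> bool:
    """True if an article passes all three keyword filters."""
    text = article_text.lower()
    return (any(t in text for t in UNCERTAINTY_TERMS)
            and any(t in text for t in ECONOMY_TERMS)
            and any(t in text for t in POLICY_TERMS))

# The Brendan Buck quote above sails straight through the filter:
buck = ("the pledge offers a plan to end the economic uncertainty and "
        "create jobs, as well as a concrete plan to rein in Washington's "
        "runaway spending spree")
print(matches_index(buck))  # True
```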
There’s much more here (and even more here). Konczal goes on to note that a news article in which economists were quoted arguing that policy uncertainty isn’t hurting the economy would, under this measure, be counted as evidence that policy uncertainty is hurting the economy! “The empirical problems with this measure of policy uncertainty always bias the results upward,” writes Konczal. “Data is never perfect, so it is important to understand which way it is likely to bias. The noise machine of talking points biases this index upwards, but any stories pushing back against this uncertainty meme would also push the index upwards.”
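Konczal’s bias point is easy to check against the same sketch: a sentence rebutting the uncertainty meme still trips all three keyword passes, so it gets counted too. Again, this is an illustrative check under the assumptions above, not the authors’ code.

```python
# A quote pushing back on the uncertainty meme still hits every keyword
# bucket, so the rebuttal itself would inflate the index.
TRIGGERS = [("uncertainty", "uncertain"),
            ("economic", "economy"),
            ("policy", "tax", "spending", "regulation",
             "federal reserve", "budget", "deficit")]

pushback = ("Several economists argued that policy uncertainty is not "
            "what is holding back the economy.")

flagged = all(any(t in pushback.lower() for t in bucket) for bucket in TRIGGERS)
print(flagged)  # True
```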

The moral of the story is the same as in Reinhart and Rogoff: The numbers these studies — and the studies based on these studies — spit out look very solid and confident and authoritative. But when you get under the hood and look at the methodology, the process leading to those numbers often isn’t particularly convincing. And so the sweeping conclusions based on those numbers should also be taken with a grain of salt...
