Tuesday, January 5, 2016

Why Is Macro So Hard? Overconfidence

This is related to my earlier post arguing that part of what makes macro so hard is the metacognition deficit of policymakers.
Along the same lines is an argument set forth by Ram (there’s a primer below the quote):
I'd contend that the main problem in America is that the public, including its highly educated members, is social-scientifically ignorant. Most people I talk to about policy do not even realize that there is anything non-trivial about policy analysis. They want the government to make sure that four phases of rigorously designed RCTs be performed before drugs are made available to the public, for fear of unintended consequences of intervening on a complex system like the human body, yet they think they understand the consequences of highly complex interventions on human societies by introspection alone. Not only do they think they understand the consequences of alternative policy choices, but they're so confident that their understanding is right and that its truth is so obvious that the only explanation for disagreement is evil intentions. When I point out that on virtually every policy issue, at least somewhat compelling arguments for many conflicting points of view have been made by relevant experts, people usually react in disbelief or denial, or immediately retreat to questioning the motives of these experts ("of course they say that, they're on the payroll of Big Business" or whatever). These patterns of speech and behavior are uniformly distributed across the political spectrum, even if intelligence and knowledge of well-established facts is not. Even many experts in particular areas of social science evince no awareness of the lack of expert consensus on almost anything in their field, and give the impression of unanimity to an unknowing public.

My guess is that if you were to convince a supposedly non-utilitarian person that their (e.g.) deontological prescriptions might have terrible consequences, then they would revisit them. Anti-consequentialism is easy to maintain so long as you believe the consequences of your proposals are desirable, but most would fold if convinced otherwise. [the emphasis is not mine]
There’s some highbrow language in here, so let me translate a bit for potential students.

When Ram uses the word “non-trivial” about policy choices, his point is that many people think policy choices are trivially easy: this one’s right, that one’s wrong, choose the right one. Does this sound like Donald Trump? When Ram says people don’t recognize that the choices are non-trivial, he means there are lists of pros and cons that have to be weighed without solid information. It’s not completely guesswork, but some of it might be.

When Ram mentions RCTs (randomized controlled trials), the point is the ridiculous level of testing that for-profit corporations have to go through to “prove” that their medications are safe (even though any yahoo down the road can sell you a poisonous plant and call it a medicinal herb without even having a license).

When Ram mentions introspection, what’s meant is “thinking about it a bit”. Obamacare, for example, is one of those “highly complex interventions in human societies”. But how many people do you know who have spent much time thinking about whether Obamacare will work, before deciding that they’re for it or against it?

Ram says that many people believe the “truth is so obvious that the only explanation for disagreement is evil intentions.” Is that an argument President Obama has used repeatedly?

Ram writes “if you were to convince a … person that their (e.g.) deontological prescriptions might have terrible consequences, then they would revisit them.” Deontological is a college-level word meaning that you judge the correctness of your actions by whether you followed the rules or orders. (You know, that’s the excuse the Germans made about things the Nazis did.) The prescriptions are recommended policy actions (just as your doctor might give you directions for your medication). So, what’s being said here is actually rather hopeful: if you can convince people that the policy they chose just because it followed the rules may turn out badly, they might reconsider it.

Lastly, Ram mentions “anti-consequentialism”. You’ve probably heard the old saw: the ends justify the means. That’s consequentialist: it means it’s OK to do something bad initially if it ends up good on net. Parents, for example, call this tough love. To be anti-consequentialist is to think that the means or motivations are all that count: basically, if you think you’re doing good, then you are. Ram is saying that worrying only about motivations rather than consequences (thus, being anti-consequentialist) is an easy viewpoint to stick with if you’re not inclined to ever check the results. At least on Iraq, it’s fair to say the Bush administration was anti-consequentialist.

Tie this all together, and what Ram is saying (and what I’m applying to macroeconomics) is that the world is a messy place, with a lot of gray areas, and there are way too many people acting as if the gray areas don’t exist at all.

This appeared in the comments to The Money Illusion, posted by Scott Sumner at EconLog.
