I’m borrowing Paul Krugman’s blog post title on this item by Ezra Klein. Krugman focused on the asymmetry of stupidity (actually, stubbornness) in liberals and conservatives, which results in the conservative party just being “stupid” all too often, lately. However, I want to address the actual research Klein is citing since it is being used and abused rampantly. It is deeply flawed in its conclusions, and it is frustrating to watch this cultural train wreck of a meme play out.
Jonathan Haidt’s book The Righteous Mind spends a good portion of its length presenting some of this research and making a similar conclusion to Klein.
Their assertion is that more information tends to polarize people because people tend toward selective perception and cognitive bias.
This much is true.
But you cannot then assert that more information (hopefully, we’re talking about “evidence”) is counter-productive.
The flaw here is that this research zeroes in on the group of people that DIDN’T CHANGE THEIR MINDS. It is not surprising that focusing on this group makes it seem like NO ONE EVER CHANGES THEIR MIND!!! (lol)
If these researchers (or these interpreters) would focus on the people that do change their minds, they would find that one way or another “more information” does, in fact, serve as the catalyst.
I know this is true because (besides everyone else’s experiences) it happened to me, in the most important ways it can happen to a person. Until I was about 25 I was a hard-core conservative. That changed radically as I learned some new things. I’m not sure that it matters here what those things were – what will change a person’s mind this way or that way varies by person and over time. The point, however, is one we are all copiously aware of: People do change their minds from time to time – and when they do, it is thanks to some new information that was meaningful to that person, or their acceptance of something they’d already heard. Suddenly, that new frame of reference changes the world for that person, sometimes in a small way, other times in a big way.
All this research Klein cites really shows is that people tend not to change their minds, and when additional info comes in, those who don’t change their minds will tend to skim the information for what they think the “signal” is, while ignoring the “noise.” We would be stupid creatures if we did anything different. The problem of “more information” is getting the “right” information to the “right” people (and maybe putting it in the right kind of attention-grabbing context).
Here’s a paragraph from Klein that exhibits how stupid this argument is – and if he’s relaying their line of reasoning correctly, how stupid these researchers are being:
Kahan and his team had an alternative hypothesis. Perhaps people aren’t held back by a lack of knowledge. After all, they don’t typically doubt the findings of oceanographers or the existence of other galaxies. Perhaps there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. Perhaps humans reason for purposes other than finding the truth — purposes like increasing their standing in their community, or ensuring they don’t piss off the leaders of their tribe. If this hypothesis proved true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the participants are better equipped to argue for their own side.
If this is true, then we shouldn’t make any cultural progress on “facts” of reality. We should still be debating the virtues of slavery, whether those women are witches – and whether witches really float or not. Obviously, we’ve made some cultural traction on these issues. How did we do it? Shocking news: some people did some research and issued some publications expressing a contrary opinion on these matters. Then, slowly, people’s opinions changed.
My God. How is that possible!?!?
So can we finally stop telling ourselves that people are locked-in to their views? Good grief – the whole Divergent series is dedicated to this absurdly facile concept. I realize that Divergent kinda-sorta tries to go the other way on this conclusion by the end of the series, but only in the weakest of ways, and only after a trilogy that supposes political differences have a predictable genetic basis (in the future). Ahem.
Addendum – further into Klein’s article, he describes Kahan’s study and results. According to Klein, subjects with higher math skill gave even more partisan answers.
It’s an interesting study, but what I see goes back to the Memory-Prediction model of intelligence presented in On Intelligence, by Jeff Hawkins. Hawkins is not a neuroscientist – he’s the inventor of the Palm Pilot and its handwriting-recognition scheme. But he was frustrated by the failures of artificial intelligence and set out to break down what, exactly, makes something intelligent. As I think he rightly points out, neural networks as they have so far been constructed get it all wrong, and his book explains why.
He explains (rightly, I think) that what makes something intelligent is the ability to predict what is going on based on a memory system. What I think the above study found is people making an implicit prediction of what they believed to be true; because that prediction was so salient, even those with the math skill to diagram the “sample space” chose not to bother.
This isn’t politics making us stupid, per se. In my mind, I immediately judge there to be a relevance discrepancy between gun laws and the effectiveness of a skin cream. When the question asked me to judge the results of an actual experiment, I did open Excel (it’s just easier that way) and compute the sample space – and I got the “right” answer on the cream question.
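The “sample space” computation for a Kahan-style 2x2 problem can be sketched in a few lines. The counts below are illustrative assumptions, not Kahan’s actual figures; the point is that the intuitive move (comparing raw counts of improvers) gives the wrong answer, while comparing rates within each group gives the right one:

```python
# Illustrative 2x2 covariance problem, Kahan-style.
# Counts are made up for demonstration; they are NOT Kahan's actual data.

def improvement_rate(improved, worsened):
    """Fraction of a group whose condition improved."""
    return improved / (improved + worsened)

# Hypothetical counts: group that used the skin cream vs. group that didn't.
cream_rate = improvement_rate(improved=223, worsened=75)
no_cream_rate = improvement_rate(improved=107, worsened=21)

# The intuitive (wrong) move: compare raw counts (223 > 107, so cream "works").
# The correct move: compare rates within each group.
print(f"cream:    {cream_rate:.1%} improved")
print(f"no cream: {no_cream_rate:.1%} improved")
print("cream helped" if cream_rate > no_cream_rate else "cream did not help")
```

With these particular (hypothetical) numbers, more people improved with the cream in absolute terms, but a higher *proportion* improved without it – exactly the trap the study sets.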
But when it’s looking at the results of a gun ban (with a sample size of just 300+ people? Deaths? Assaults?) I intuitively know that there are all sorts of complications to a study trying to build that case, and a sample of “300” could never yield a statistically meaningful result on gun violence in modern America.
I don’t know what variations Kahan conducted, but this may be the better explanation for why people who are good at math don’t bother – they know better than the less numerate, right off the bat, that there is a significant “prior” here – and the results reflect perfectly reasonable Bayesian computations that Kahan did not think to consider.
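The “reasonable Bayesian” point can be made concrete with a toy computation (every number here is an assumption for illustration, not from Kahan’s study): if you walk in with a strong prior that a small gun study is confounded, the same evidence that would flip you on a skin-cream question barely moves you at all.

```python
# Toy Bayesian update: how far should one study move a strong prior?
# All numbers are illustrative assumptions.

def posterior(prior, likelihood_ratio):
    """Update prior odds by a likelihood ratio; return posterior probability."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Same evidential strength in both cases (likelihood ratio of 3 for the claim).
# Skin cream: no strong prior either way, so the study moves you a lot.
print(posterior(prior=0.5, likelihood_ratio=3.0))   # 0.75

# Gun ban: strong prior that a small study on the topic is confounded,
# so the same evidence leaves you well under 50% convinced.
print(posterior(prior=0.05, likelihood_ratio=3.0))
```

On this sketch, ignoring a small study when you hold a strong, independently grounded prior isn’t innumeracy; it’s the update rule working as designed.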