I just read the NY Times Room for Debate on whether email should be banned in the workplace. It’s an interesting discussion, but it’s really about a much bigger issue that is becoming far more important to all of us. Let me explain.
First, I think they’re all wrong when they say we need to eliminate the Reply-To-All button. I’m an engineer and email mostly with other engineers – and we legitimately use Reply-To-All every day to correct and revise the information we’re “discussing.”
Yes, there are other ways for this to be done, but the email “trail” is a useful record – and you never know when you’re going to need to look at the “history” of a discussion six months or two years into the future. Sometimes that’s petty, but sometimes it’s very, very important.
Some social networking tools (such as collaboratively edited documents) don’t preserve that historical trail. Even though that’s what some of these emails are ultimately accomplishing, the discussion is still a useful – and sometimes important – record.
Other tools are just “discussion boards,” but they ultimately require that the right people read and assess the information – and too often the person doing the latest writing isn’t 100% sure who needs to receive it, so they CC or Reply-To-All, with a certain amount of error. Simply choosing not to CC or Reply-To-All would create a whole new cost problem – you may have excluded people who needed to know that information. Resolving those exclusions (especially if systemic) and their consequences – which could be huge – also adds up quickly.
What we’re really talking about is the limits and costs of human information processing – how fast you can read and accurately assess information.
Suppose we lived in a different world where we could all plug into a neural interface whenever we wanted. When we’re connected like this, “the internet” becomes directly accessible to us just as we access anything else we “know” or “remember.” So in this world, when I send you an email, you instantly become aware of the email and the information it contains, just as you would any memory. Furthermore, you wouldn’t need to take time to understand this information – imagine that the process of integrating the information and its implications (if any) into the rest of your memories were done instantly. I realize this may never be a reality, but if it were, there would be no penalty to CC’ing “the world” on everything, except where keeping a secret was preferred.
The purpose of that analogy isn’t to dream about some distant future technology. The purpose is to demonstrate that this problem isn’t really about email. It’s about the aggregate capacity of human beings to get messages into their brains and then assess, process, or integrate that information into all the rest of what they know. We’re not just talking about the cost of too-many-CC’s – we’re talking about the problems of The Blogosphere, Fox News, the Republican Party, Political Polarization, Dishonest Campaign Ads, misunderstandings about Economics, the efficiency of Democracy – and on and on. These are all restatements of the same basic problem – the limits of human information processing, and the degree to which we are teaching future generations how to do it better.
This matters because the number of people on the planet is only growing. The number of people getting online is only growing. And we’re storing more and more information – and more and more of what everyone is saying. Trying to weed out false or inaccurate statements is already virtually impossible. Imagine the day when a great lie is told but the correction is found by almost no one. Are we living in that day already?
Fox News is an interesting problem here. I think it’s obvious that most viewers of Fox News want to get “fair and balanced” – and accurate – information. And I think for the most part, the providers of information at Fox News intend to provide that most of the time. The problem is that even if all of those ratios are 90%:10%, that’s still an enormous amount of false information – and there are huge societal and economic costs associated with it.
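To make that 90%:10% point concrete, here’s a back-of-the-envelope sketch in Python. Every number in it (audience size, claims per hour, hours per night) is a made-up placeholder, not a real statistic about any network; only the 10% error rate comes from the paragraph above.

```python
# Illustrative arithmetic only: all inputs below are hypothetical placeholders.
viewers = 2_000_000        # assumed nightly audience
claims_per_hour = 30       # assumed factual claims per broadcast hour
hours_per_night = 3        # assumed hours of programming per night
error_rate = 0.10          # the 10% from the 90%:10% ratio in the text

# Even a small error rate compounds across claims, hours, and viewers.
false_claims_per_night = claims_per_hour * hours_per_night * error_rate
false_impressions = false_claims_per_night * viewers

print(f"False claims aired per night: {false_claims_per_night:.0f}")
print(f"False viewer-impressions per night: {false_impressions:,.0f}")
```

The point of the multiplication is that “only 10% wrong” still scales into millions of false viewer-impressions every single night.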
Also, assume with me that the average Fox News viewer wants accurate information. Suppose many of those viewers know Fox News is wrong some of the time. Is that knowledge enough to solve the problem? No. Those viewers need to believe that they have a reasonably good chance of going online and checking facts in an acceptable time frame. This is clearly not the case now and may never be the case simply because they wouldn’t know what facts to check. They’d have to know which facts are suspect and which aren’t.
So what, you ask? “Easy solution: Don’t watch Fox News.” Well, yes, perhaps. But that’s missing the even bigger point of how you came to that conclusion. How do you know that what you think you know is really true, or that there are key facts you don’t know? “You don’t know that – you never do.” True. I can tell you what I do to resolve that – I try to learn everything I can about everything I can and just hope I get a truly “fair and balanced” – and accurate – view of the world. But I also know this is impossible – and to the extent I ever truly achieve accuracy, I can’t really take any credit for it. It’s really all about chance and luck.
And here’s the really important thing. Fox News viewers are in the same boat. If you’re like me and thoroughly understand and endorse Evolution, then you know what I mean when I say I have zero desire – nay, a repulsion – toward learning, for its own sake, how the ancient Vikings believed Thor caused life to arise on Earth. As a historical curiosity – to be briefly summarized – okay. But we have no desire to invest time in it because it’s almost sure to be a pointless endeavor – and has an opportunity cost: we could be spending our time learning things that really matter. It’s an Information Processing problem – a tradeoff. If the information were “free,” then, “Why not?” But it’s not, so we don’t want it.
Well, that’s how a Fox News viewer feels about most of what Fox News gets wrong. They are making the same calculation but on other terms. And since neither you nor they can truly take credit for knowing enough to identify errors when you see them, your ability to blame those Fox News viewers for being wrong and supporting a fake news service is – shall we say – compromised.
What we really need here is an improvement to the human ability to acquire and assess information. We need to read faster and think more clearly. Otherwise, we need a centralized system for flagging and sorting out erroneous information. Why centralized? Well, I’m not suggesting we delete false information everywhere on the internet – but the internet as it currently exists doesn’t lend itself to presenting a blog in Iran that’s really propaganda, or inaccurate product information on a South American manufacturer’s website, with corrections embedded in those pages. The internet is totally free in its current technological form. A new kind of centralized system would be needed to ensure corrections are presented alongside erroneous information.
But even that doesn’t solve the problem, of course. Having a correction adjacent to errors doesn’t ensure it is read or understood. But it helps. It would be an improvement. It would also be at risk of abuse, of course. But the free structure of the internet currently creates the problem that you don’t know what is true and what isn’t – same as the diligent Fox News viewer. You can do your own fact-checking all you want, but you’re still in the same boat of not knowing what you don’t know – not knowing if someone out there has the perfect rebuttal to what you think is true.
Notice, importantly, that I’m not talking at all about our inability to truly “know” what is “true” or not. I think anyone who’s informed nowadays knows that we can’t ever truly know anything. But we can have a good idea. We use probability to assess what we believe to be true. In this respect, faith is an everyday part of all our lives. We all believe – have faith – that the sun will rise tomorrow, so we live our lives despite not truly knowing whether we’re in some kind of “Matrix” or are all going to die from an asteroid impact tomorrow. No, that’s not what this is about.
This argument rests on what I think is a safe assumption: that there is information out there that would effectively convince you of an ideological or factual error, and that the problem is simply getting that information to you and enabling you to assess and integrate it – to realize what it means.
What I’m saying is that anything we can do to speed this process up will be a very, very good thing that would affect all aspects of life. I’m saying that most people probably haven’t even appreciated the nature of the problem – we’ve all experienced a discussion with someone where we knew what the other person believed and we knew why they were wrong, but we were unable to present enough information to them (credibly) for them to assess and integrate it. But even though we’ve all experienced that, I think most of us haven’t really appreciated that this is a much bigger problem that’s just getting worse in the Information Age.
It could easily be the case that the Information Age must be followed by an Information Integration Age – an age where our technological efforts become primarily focused on improving the speed with which humans get information into their brains and correctly use it.