Misunderstanding the Mistake in Black-Scholes and Fama’s Rational Markets

Abstract: Probability has been getting a bad rap in finance and economics due to modeling mistakes that have led to many of the biggest financial catastrophes of the past 50 years. This has bled over into popular misconceptions that the human mind has some mystical power arising from Free Will that explains market unpredictability and even the power of the mind over the universe around us. However, physics and probability can, in fact, allow us to model markets just as we do particles (to a high degree of certainty) if we properly identify the mistakes of previous attempts. These mistakes underlie the collapse of Long-Term Capital Management, the 2008 financial crisis (through the misuse of Value at Risk), Black-Scholes, Eugene Fama’s “Rational Markets” work, and even the misunderstanding of the Phillips Curve that ultimately led to the stagflation of the 1970s. Probability estimates failed in all of these because they did not model feedback loops, and assumed the volatility being sampled was not merely random but also came from a “closed system.” A proper market model must account for the feedback loop created by introspection – the property of humans and markets to act on any prediction, thereby undermining the prediction. Introspection does not make humans and markets unpredictable – these feedbacks are still fundamentally deterministic, and they are the hidden correlation that causes financial volatility to exhibit a “fat tail” relative to the Normal Distribution.

The effect of this feedback is analogous to analyzing the entropy of a system of non-intelligent particles when the system is not “closed” – i.e., is subject to periodic exogenous influences – which can cause sudden, rapid changes in the apparent entropy, as during a phase transition. Example: a water-vapor cloud is exogenously cooled into liquid or solid, and the hydrogen bonds progressively “communicate” a completely different, non-linear order onto the system. Example #2: paramagnetism: the introduction of a relatively small magnetic field suddenly imposes order on a system of particles that is much larger than the field’s extent and strength in a vacuum. The “randomness” of the particles under inspection is still perfectly random when the exogenous effect is “absent” (that is, sufficiently “distant” that it seems to be “outside” the “system”) – and thus probability and the Normal Distribution are perfectly sound if the entire system is properly accounted for.

Thus, the “fat-tails” in finance, the apparent non-randomness of markets, and even the consequences of “Rational Expectations” are ultimately still within the domain of accurate probabilistic predictions. Further, this reality supports (and provides further strong evidence for) the philosophical positions that the effects of subjective human “consciousness” and “free will” do not rise above deterministic Materialism.

I’m reading an “oldie” right now, When Genius Failed: The Rise and Fall of Long-Term Capital Management, by Roger Lowenstein. It repeats the assessment presented in his other book, The End of Wall Street, and in many other books on economic crises, such as All the Devils Are Here: The Hidden History of the Financial Crisis, by Bethany McLean and Joe Nocera, Michael Lewis’s pair, The Big Short: Inside the Doomsday Machine and Flash Boys: A Wall Street Revolt, and quite a few books written by economists.

At their core, they indicate that modern financial collapses tend to arise when highly educated people put too much predictive value in the science of probability, then leverage up on a bad bet. These bets-that-go-bad are critiqued as failing to appreciate the fundamental uncertainty in the market. As Lowenstein writes in When Genius Failed, markets are almost rational, almost predictable (using probability and enough historical data), but not quite – and you can’t identify the onset of unpredictability accurately enough to avoid it.

The Black-Scholes model is frequently cited (as is Value at Risk); both treat an asset’s price as a random variable – strictly, one whose (log) returns obey the Normal Distribution. If returns do obey that distribution, you can assign very precise probabilities to the likelihood of the price falling within any particular range – and many firms have made their money (and also collapsed) by placing bets using these probabilities. These probability models arose from the natural sciences – statistical mechanics and entropy, for example – where they really are extraordinarily accurate. Thus, math and physics nerds brought what they’d learned to finance and economics, and birthed the notion of “quants” – very smart people who believed there was no good reason why markets shouldn’t ultimately be about as predictable as atoms and particles.
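For concreteness, the Black-Scholes price of a European call option fits in a few lines. This is the standard textbook formula, not any particular firm’s implementation; the parameter values in the final line are purely illustrative:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, r: risk-free rate,
    sigma: annualized volatility, T: years to expiry.
    Assumes log-returns are normally distributed with constant volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: $100 spot, $100 strike, 5% rate, 20% vol, 1 year.
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))  # 10.45
```

Everything questionable about the model lives in that one docstring assumption – constant, normally distributed volatility – which is exactly where the critique below takes aim.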

This post is about how far the probability-critics are right, but also how they are wrong – they’ve gone too far. The academics and quants simply have not yet correctly mapped the physical models to market models – and this mistake can now be fixed. It’s an important issue, because this mistake led to many of our worst economic failures over the past 30+ years, but it also points the way to avoiding the mistake and making market predictions that are much closer to the inherent Heisenberg Uncertainty limit. It’s a rebuttal to the current anti-academic, anti-quant view that even many academics and quants have now adopted (such as Nassim Nicholas Taleb in The Black Swan), questioning whether the Normal Distribution is ever as accurate in human systems as it is in physical systems. This is a modern quantitative mysticism that, while not explicitly spiritual, leads many to suppose that there must be something about (human) consciousness that rises above physical reality and Quantum Mechanics. It has fed the misunderstanding of the Observer Effect as apparently showing that, for instance, electrons behave differently when a human is watching them – leading to the popular documentary What The Bleep Do We Know?, which is just the cover photo for a dizzying field of books and videos by “experts” who will explain how modern physics shows that your mind controls the universe around you, from The Secret to, surprisingly, the popular advice of people like psychologist Daniel Goleman that there’s a hidden “power” in “positive thinking.”

The popular judgment on failed market bets based on quantitative probability has coalesced on these conclusions:

  1. The sample of historical values is too limited.
  2. The prices of pairs of assets are not independently random, but correlated.
  3. People (and as a result, markets) are not as rational as we think.
  4. People (and as a result, markets) occasionally exhibit “herd” behavior.
  5. People (and as a result, markets) have a fundamental degree of uncertainty that cannot be overcome, arising from the simple fact that humans can “choose,” and thus anyone trying to model human behavior is just another idealistic fool.

The sample of historical values is too limited. If more data were available, the probability estimates would be more accurate. Some think this means the uncertainty would be smaller, but that is not accurate. Historical data contains more (unexplained) volatility, so the uncertainty increases when you include it. This makes your probability estimates “more accurate” but less useful, because adding in historical data ultimately just widens your “error bars.” For example, suppose an asset is hovering around $100. Based on the past few years of price data, probability suggests a price estimate of $100 (+/- $5 about 90% of the time). But if you include the past 50 years of historical data, the estimate is now $100 (+/- $15 about 90% of the time). The latter represents a broader, flatter normal distribution, and more accurately matches the rare price spikes and collapses – but it is less useful in the short term, because the former estimate tends to be accurate 99% of the time in the “current” economy, when markets are “normal.” The difference between the two estimates is what writers like Taleb call a “fat tail” (i.e., the actual distribution isn’t quite like the normal distribution – it has a longer, wider tail than it “should” based on current volatility). In When Genius Failed, Eugene Fama (who won the Nobel for his work on efficient – “rational” – markets), while defending the idea that markets are rational, is noted as accurately admitting that crashes like the 1987 stock market crash should only happen once every 5,000 years or so if the usual volatility we see is interpreted using a normal distribution – yet they clearly happen more often than that.
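This fat tail falls straight out of mixing two volatility regimes, as the essay argues: sample a process whose volatility is occasionally shifted by an exogenous influence and you get excess kurtosis, even though each regime is perfectly Gaussian. A minimal simulation – the 5% regime probability, the 3x volatility multiplier, and the fixed seed are illustrative assumptions of mine, not estimates from any market data:

```python
import random
random.seed(42)  # fixed seed so the run is reproducible

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; zero for a normal distribution."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

# A "closed system": one constant volatility, pure normal returns.
normal = [random.gauss(0, 1) for _ in range(100_000)]

# A feedback/regime-switching system: an exogenous influence
# occasionally (5% of the time) triples the volatility.
mixed = [random.gauss(0, 3 if random.random() < 0.05 else 1)
         for _ in range(100_000)]

print(round(excess_kurtosis(normal), 2))  # near 0
print(round(excess_kurtosis(mixed), 2))   # clearly positive: a "fat tail"
```

Each sample in the mixed series is drawn from a plain normal distribution; only the hidden switching between regimes produces the non-normal tail – which is the essay’s point about properly accounting for the whole system.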

(to be continued)



Sovaldi: Risk vs Uncertainty, Innovation vs Facilitation, and How Patents and Ownership Don’t Differentiate Them

(This is an extension of my previous two posts on Sovaldi, particularly the second one.)

Piracy is simply the free market saying, “Your markup is ridiculous.”

As Nassim Nicholas Taleb mentions in his books The Black Swan and Fooled By Randomness, there’s a difference between risk and uncertainty. I’m sure I’m going to butcher the distinction, but roughly speaking, risk is the more or less finite quantity you know you may lose if a bet goes badly. Uncertainty is the unknown behind probability estimates – the degree to which your estimates are just wrong, and how wrong. Uncertainty is open-ended. Risk is supposed to be a tangible, calculable quantity. I’m uncertain whether a meteor will destroy the Earth in some time period, but if one does, I can estimate the value at risk (by saying this, I do not mean to defend the historical use of VaR in finance).

When Gilead’s vice president Gregg Alton says,

“Those who are bold and go out and innovate like this and take the risk — there needs to be more of a reward on that. Otherwise, it would be very difficult for people to make that investment.”

he is actually mixing risk with uncertainty, and innovation with facilitation, in a socially destructive way. Pharmasset, the company that developed Sovaldi (bought by Gilead in January 2012 for $11.2 billion, as the results of Sovaldi’s clinical trials crystallized), was the organization that worked through the uncertainty of real innovation. When they started working to produce the RNA-polymerase inhibitor of the hepatitis C virus, they didn’t know they would be successful at all. It took years of preemptive effort to realize success. At the end of that process, Pharmasset had produced something new and unique for our society – a new cultural asset that should last as long as humans maintain knowledge. And it is an asset that will likely continue to benefit our culture long after the current 175+ million people who have hepatitis C are cured or, effectively, die waiting for the financing to obtain treatment. The employees and shareholders of Pharmasset were compensated, in addition to the salaries and wages they’d already been paid, $11.2 billion for this work. I don’t know how much of that ended up back in the hands of venture capitalists who were merely financiers – not innovators – but I imagine it’s a significant chunk. Still, since I believe such venture capitalists redirect their earnings to funding more innovative start-ups, I really don’t begrudge anyone in this $11.2 billion payout (well, besides the few that very probably gained a lot more than they earned – but I believe this is the least of our problems in this series of events).

When Gilead purchased Pharmasset, Sovaldi was in final clinical trials. Its effectiveness was largely known. The final trials seek to illuminate optimum dosing, treatment duration, dosing of the cooperative drugs interferon and ribavirin, and to get a handle on side-effects (so far, Sovaldi seems to have few if any, since, as far as is currently known or disclosed, its only apparent effect is to inhibit the particular variant of RNA polymerase present in hepatitis C viruses).

So what did Gilead have to do between the time it purchased Pharmasset and the time the FDA provided expedited approval for Sovaldi? It had to complete the clinical trials (these would have been completed anyway, of course). It surely did a lot of legal hand-wringing over liability considerations. It surely spent several (over-priced) millions on people who ushered the drug through the FDA process. A few million were surely paid to ramp up production processes, branding, etc. But in all of this, was there any uncertainty? Relatively speaking, no. By the time much of this money was spent, Gilead was essentially guaranteed to be able either to sell the work to another pharma-giant or to obtain remuneration by selling the drug itself. And that’s just looking at it from the perspective of the corporation. The actual people within Gilead were already being paid throughout the process. Even if Gilead suddenly, mysteriously collapsed, much of their personal gains would already be locked in (even the $11.2 billion paid for Pharmasset was partially financed by banks, not Gilead).

The point is that the purpose of innovation – and the reason we ought to remunerate it – is to offset the risks associated with the uncertainty of working on an innovation, especially when it takes years to develop the innovation. What Gilead did was not innovation. It wasn’t even unique. Any pharma-giant would have been happy to do it, and more than capable of doing it. The work they’ve done adds essentially zero to the cultural wealth of the human race. Yet they want to extort as much as $150 billion from the human race for “taking the risk” of doing whatever it is they think they are doing for us.

But we didn’t ask them to do that for us. They volunteered! Suppose Merck had bid $10 billion for Pharmasset, and Gilead won the bidding by offering $11.2 billion. By doing so, Gilead proactively volunteered to do what it has now done over the intervening 27 months (or so). In effect, Merck would have been saying, “I’ll do it for $10 billion…?” And Gilead said, “Hmmm. No, we’ll do it for the extra interest costs on $1.2 billion more than Merck is willing to volunteer.” If Gilead now finds their work so onerous that they can’t bring themselves to do it for anything less than $150 billion, I offer them this advice: “Go fuck yourselves!”

In the Gilded Age of Robber Barons, we gradually formed the FTC to regulate – and block, if perceived necessary – firms from using public law and physical realities to create what we call a monopoly. But what, exactly, is a monopoly – the bad parts, anyway? At its core, the part that makes a monopoly economically and culturally destructive is using the social constructs of ownership and/or patents to set the price of some good far higher than the markup the rest of society customarily charges for most other goods. This is why we perceive “monopoly rents” as “unfair” – the monopolist is asserting that they deserve much more profit than the rest of society is offering.

This isn’t the way monopolies are typically presented, but just consider: Gilead is saying they somehow need or deserve a markup of 41,900%. This bugs us because in our current economy, typical markups (of all services – not just the markup on the final product cost) run from 10% to, say, 2,000%, with the distribution heavily skewed to the 10%-200% range. Getting everyone to increase their markups to something of the same order as Gilead’s in a free-market economy is virtually impossible due to the phenomenon John Nash described – an equilibrium. Thus, we feel powerless to retaliate against Gilead the way we would really like to – the way a “free market” is supposed to retaliate against exorbitant prices – by using counter-pricing either to “extort” a more reasonable price or to indirectly finance a competitive product. Since we can’t coordinate that level of action, we feel this nebulous unfairness about markups like Gilead’s. We may even refer to it as “anti-social” or just “immoral.” But at its core, the only problem with such a markup is the fact that we’re unable to change all other markups to match Gilead’s (or the futility of doing so – if we actually could, society would likely find itself in an endless upward markup spiral, and a downward moral spiral).
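The 41,900% figure is easy to verify against the production-cost estimate quoted in the Sovaldi pricing post below; using the $200 midpoint of the $150-$250 range (my choice of midpoint):

```python
price = 84_000   # U.S. price of the 12-week Sovaldi course
cost = 200       # midpoint of the estimated $150-$250 production cost
markup_pct = (price - cost) / cost * 100
print(markup_pct)  # 41900.0
```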

This is a paradox of free markets. We’re all “free” to set prices of products we “own” however we want – except that this is an illusion. We aren’t actually “free” to do that at all. Our freedom to set prices is dictated to us by the reality of the number of alternative products – or alternative “pricing paths” – such as piracy or legal retaliation.

That’s where the FTC comes in. One way we deal with monopolists is to use the apparatus of government to implement a socially coordinated legal retaliation – to get past our social Nash Equilibrium that may otherwise leave us all frustrated but powerless to respond. Another path is to fund an alternative drug and then “force” (wait, I thought the market was “free”?) Gilead to lower their price (or make no sales). Along this line, Gilead’s pricing in effect says that they either:

  1. believe it is extremely unlikely anyone will find (or there even exists) any other hepatitis C virus RNA polymerase inhibitors, so they can charge whatever they want
  2. believe other companies are on the verge of bringing other hepatitis C virus RNA polymerase inhibitors to market very soon, so they need to recoup their $11.2 billion very quickly (unlikely)
  3. believe in setting their price so high that they implicitly foster the development of piracy

#1 and #3 are not mutually exclusive, of course, and #3 is inevitable, given the price Gilead is charging. The only question is whether Gilead consciously chose to foster piracy of their product. I think we can assume they aren’t that dumb. Therefore, we should also assume that Gilead plans to use the patent and legal system to suppress pirated copies of their drug, enabling themselves to charge the 41,900% markup. I can imagine they may even complain about their legal expenses. They might even try to take action against Americans who fly to India to buy the treatment there at the “low thousands” price they plan to offer the less wealthy nations.

Well, I would propose our legal system explain to Gilead that it isn’t in the business of policing the market. This would, actually, be the non-judicial-activism proposal. Piracy is simply the free market saying, “Your markup is ridiculous.” Gilead could subvert piracy by simply offering the product at a competitive price – or selling their ownership of that product to another company that is willing to market the product at a competitive price. In effect, when piracy takes the form of selling the equivalent product at a much lower price (as opposed to misrepresentation of a brand name, or misrepresenting equivalence) the only reason piracy exists – and is perceived as a “problem” – is because the firm or person complaining about the piracy overpaid for ownership rights. The “free-market” solution would, actually, be to cut your losses and move on, not use the courts and law to try to make up for your past purchasing error.


Sovaldi: What Price Innovation?

Just a brief addendum to my post on the $84,000 apparent hepatitis C-cure, Sovaldi. (See also addendum #2)

Gilead’s vice president Gregg Alton said, “Those who are bold and go out and innovate like this and take the risk — there needs to be more of a reward on that.”

Let’s just parse that a little bit. Surely Alton wasn’t trying to put forward absolute truth or perfect phraseology, but his casual sentiment runs strong among many (mostly conservatives) in the modern economy.

Define “more.”

At what point does the reward for innovation and risk-taking become sufficient, and any further increase become gratuitous? Proponents of the Laffer Curve need to appreciate that it cuts both ways: do you think a tax cut will boost the economy by simply rewarding productivity more? Fine – that’s a broad, average assessment of the entire economy. But the exact same line of reasoning implies that, across the full spectrum of incomes – that is, prices for services – there is necessarily a significant fraction for whom increased remuneration is gratuitous (i.e., frivolous) – and for this group, a fall in their remuneration would be more than offset by the gains from rewarding others more.

Gratuity is the key word here, since what he’s talking about is how much our culture is willing to pay – to tip – for pro-active, creative productivity. After all, you don’t tip a server first, then wait to see whether the server earns the tip (in your opinion). In our modern way of doing things, we ask our server to pro-actively risk spending their valuable time and charm on us for an uncertain reward. We ask them to implicitly bet that more people will make their preemptive goodwill worth their while than not. Then we tip them, based on how much we think their service was worth, balanced by our means to part with our money. Servers who want larger tips know they will need to work at places that attract a clientele with significantly greater wealth.

The way we acquire innovation in the current economy works the same way. We don’t want to explicitly socialize the costs of all the start-up speculation (beyond the basic social safety-net – and even maintaining that is a political struggle), but we do want to obtain the benefits of the successful innovations, after innovators have (usually) strained and put their time and money (often limited, in this age of income disparity) on the line.

But what Gregg Alton and many like him are turning away from is the question of how much we actually need to reward people to encourage them to work up their innovations. If the price of Gilead’s new drug is any indication, Alton doesn’t show up for work for anything less than about a billion dollars. If I were his employer, I’d fire him. That kind of risk premium is too rich for everyone.


The Price of Sovaldi: A Pound of Flesh

A new hepatitis C treatment was approved in December 2013 via an expedited FDA program because the new drug, used in combination with one or two existing (relatively inexpensive) drugs, has a virtually 100% cure rate. The drug, Sovaldi, marketed by Gilead (Nasdaq:GILD), is not one of the new, complex pharmaceuticals falling under the moniker “biologics.” It’s just a few carbon rings around a phosphorus linkage that happens to stuff itself into the RNA polymerase enzyme that the hepatitis C virus requires to reproduce. Outside estimates of the actual cost to manufacture the drug (excluding development costs) are about $150-$250 per treatment course per person. So how much is Gilead charging?

About $84,000 for the shorter 12-week treatment course, and double that for the longer 24-week treatment (which course a patient needs depends on the particular genotype of their infection, and on whether they can take the helper drugs, interferon and ribavirin).

Okay, full-disclosure, I can be accused of having a biased view of the price for this drug, since I currently work for UnitedHealth, and the only reason I heard about the drug is UnitedHealth announced in its quarterly earnings report that the unexpected price of this drug had helped to subtract about $100 million from Q1 earnings. That said, I don’t actually hold any stock in UnitedHealth at the moment, and while I like having a job, the precise quarterly earnings of my employer are not very high on my mind. I’m just a programmer. I don’t speak on behalf of UnitedHealth, and no one at UnitedHealth has ever mentioned this drug (or even drug costs) to me. Also, as other stories have indicated, sooner or later the cost of this drug will simply be transferred to premiums for everyone (and taxes, for Medicaid), so the cost of the drug isn’t ultimately very important to UnitedHealth or any other insurance company – they will either choose to exclude coverage, or cover it, and either way, they will recoup the price in premiums. Anyone that wants to dismiss my view on grounds of bias will have their work cut out for them. (Fortunately, I also do not have hepatitis C – but anyone that thinks this is a fair way of deciding bias should consider that plenty of people contracted hepatitis C through absolutely no fault of their own, such as through blood transfusions. This doesn’t make their opinion dismissible on grounds of bias.)

That said, Gilead apparently defended its price this way:

…a vice president at Gilead, says the high price is fully justified. “We didn’t really say, ‘We want to charge $1,000 a pill,’ ” Alton says. “We’re just looking at what we think was a fair price for the value that we’re bringing into the health care system and to the patients.”  – Gregg Alton, Gilead vice president

About three million Americans have hepatitis C (the largest contributor to the 17,000-person liver-transplant backlog), while about 170 million are infected worldwide (much worse than AIDS). Since Gilead thinks only the American medical system will pay $84,000 for treatment, they are working with Indian generic pharma manufacturers to produce the drug for the Third World at a price of just

“…from the high hundreds to low thousands for these types of markets.”

Alton says critics should “look at the big picture.”

“Those who are bold and go out and innovate like this and take the risk — there needs to be more of a reward on that,” he says. “Otherwise, it would be very difficult for people to make that investment.”

Okay. Who were the innovators, and how much did they pay to develop the drug? It turns out Gilead purchased the smaller company that developed the drug, Pharmasset, completing the purchase in January 2012 for about $11.2 billion. The deal amounted to this: Gilead, hearing that Pharmasset had completed development and was starting the final clinical trials for the drug, offered Pharmasset shareholders an 89% premium on their stock, as of the closing price prior to the announcement. (If you look at a long-term chart of Gilead’s stock price, you see that it was relatively flat for years – but rose to 400% of its previous 5-year valuations in the year following this purchase. So they purchase a drug for $11.2 billion, and as a result move from a market cap of ~$45 billion to a market cap of $130 billion – and they call this risk?)

I wonder how much stock the researchers who did the real work possessed at the time of sale, and how much was held by mere speculative shareholders that capitalized the work (and by “capitalized,” we’re excluding any bank loans and bonds, which capitalize the enterprise but do not benefit from success or failure), and then the executives that, um, …facilitated... the other two.

Anyway, let’s suppose that Gilead spent an additional $1-2 billion taking the drug the final leg to market. How many American treatments are needed to recoup Gilead’s “risk” at the current price? About 150,000. If we suppose two million Americans actually get the (short) treatment, Gilead’s profit will be… (drum roll…) $145 billion (I subtracted $1,000 from each treatment to cover Gilead’s “production” and “logistics” costs).
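The breakeven count is easy to check directly. In this sketch, the $1.5 billion additional spend is my assumed midpoint of the “$1-2 billion” range, and $1,000 per course is the production/logistics allowance used above:

```python
purchase = 11.2e9                 # paid for Pharmasset
extra = 1.5e9                     # assumed midpoint of the $1-2 billion range
net_per_course = 84_000 - 1_000   # price less the $1,000 per-course allowance

breakeven_courses = (purchase + extra) / net_per_course
print(round(breakeven_courses))   # ~153,000, i.e. "about 150,000"
```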

Who will be paying that $145 billion? Not the patients! 99% of Americans can’t afford that bill. Besides, what appears to be happening here is that Gilead has decided that, now that most Americans are required to have insurance, they can set the price a little higher, since insurance will most likely pick up most of the cost. Otherwise, the patient will simply not receive the treatment. Since the liver damage caused by hepatitis C is progressive, and since organ donation rates in the US are so low (and even when an organ is available, the price of the surgery exceeds even the cost of the drug), patients who delay the treatment face a bleak choice: mortgage everything they and their family possess – and live – or accept a short and painful life of treating their symptoms, terminated by liver failure – and death.

A pound of flesh. Gilead would make Shylock proud.

Will Gilead lower the price once they’ve recouped their profits?

“That’s very unlikely that we would do that,” responds Alton, Gilead vice president. “I appreciate the thought.”

Just stop and appreciate the “risk” that $145 billion is rewarding. Can you feel it? Ahh…. that’s good Capitalism!


We do need to reward the people who work in the face of uncertain results to produce technological advances. However, far too many people have deluded themselves into thinking that the only reason anyone does any work is to amass financial rewards in the future. This isn’t why people work – and even for those who really do work with this as their primary motivation, the reason it is their primary motivation is an evolutionary pursuit of the social status they believe the money will impart to them. Consider: suppose the US government cuts Gregg Alton a check for $145 billion. Mr. Alton will be happy. Then the next day the same quantity is given to everyone else. Mr. Alton is no longer happy. Even if you could run this “experiment” without causing inflation – i.e., despite everyone holding $145 billion, the prices of all goods stayed the same – Mr. Alton will not be happy with his money for long. He will soon learn, as almost everyone who has ever won the lottery has learned, that a pile of money doesn’t make you feel complete. Sooner or later, human beings get bored once we’ve acquired everything we wanted prior to receiving the pile of money – unless we transform our desires into some new pursuit. But you don’t need a pile of money as a prerequisite to realizing that personal fulfillment comes from the pursuit of some goal.

Gilead will never actually get their $145 billion. It’s just too much money for too few people for doing so little. (And I do realize that in the current legal environment they need to plan for $5-15 billion in legal charges.) The price will have to come down to something median (American) patients can afford – or, equivalently, a price that insurance groups and Medicaid can tolerate. But in the meantime, Gilead really will extort $84,000 from quite a few patients. As a society, we ought to recognize this for what it is – criminal. The rewards to Gilead are so high that they actually create an incentive for them to pay people to go out and spread hepatitis C.

It’s about time we revamp the way we fund “innovation.” There are far too many stupid, almost worthless “patents” built into expensive (due to mass production) systems – such as the idiotic iPhone–Android patent battles. Likewise, far too many valuable innovations are being locked up and made relatively useless by current patent and copyright laws, such as this drug (assuming it really is as effective and side-effect-free as it appears). Rewarding innovation is crucial – not because people wouldn’t take risks to produce anything if they weren’t exponentially rewarded, but because rewarding innovation is a fairly reasonable way to fund the people with the best ideas. We should see it as a way of paying them to go back to the drawing board and produce another hit. Giving people billions doesn’t make them more (or even as) productive. Giving them billions is just as likely to make them (and their progeny) social liabilities.

With that in mind, we ought to make a crucial change to patent law. Patents should not have a fixed time period. Rather, patents should expire as soon as the developers recoup their development costs (if any), or 10 years. In the case of a good drug, this might occur in just one or two years. In the case of a failed product (or one that has no real cultural/economic value, such as the iPhone’s rounded corners – a trademark, not an innovation), this might be never. Suddenly, stupid patents become stupid again. Right now, stupid patents are smart because even stupid patents confer some capacity to extort monopoly rents.
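One reading of the proposed rule is simple enough to state as a predicate. This is only a sketch of the policy as described above – the function name, the revenue-based trigger, and the example figures are my own illustrative choices:

```python
def patent_expired(cumulative_revenue, development_cost, years_elapsed,
                   max_years=10):
    """Hypothetical expiry rule: a patent lapses once its developers have
    recouped their development costs, or after max_years, whichever
    comes first."""
    return (cumulative_revenue >= development_cost
            or years_elapsed >= max_years)

# A blockbuster drug recoups its costs quickly and expires early...
print(patent_expired(13e9, 12.7e9, 2))   # True
# ...while a patent that earns nothing back still lapses at 10 years.
print(patent_expired(0, 1e6, 10))        # True
print(patent_expired(0, 1e6, 3))         # False
```

Under this rule a “stupid patent” with negligible development cost expires almost immediately, which is exactly the incentive the paragraph above is after.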

But all too often the developers of a patent may struggle to invent anything else after creating their blockbuster product. In addition to recouping their costs, a successful patent ought to reward the people who developed the product with a Nobel-like payment (or regular payments) – as a continuing reward, as an expression of appreciation, and as a way of subsidizing their work and facilitating future innovations. However, future innovations should still have to go through the same “funding process” that any new innovation does. The Nobel prize confers only $1.3 million or so (if not shared). This isn’t enough to encourage indolence, and it likely isn’t even enough to fund another scientific project. But a one-time, inflation-adjusted lump sum of $2 million or so – or 10-20 years of $100k-$200k annual payments – is enough to confer the kind of economic “security” that has a good chance of facilitating and encouraging future development. And it does so without purchasing the “reward for innovation” at such a high price that it actually suppresses progress for the rest of society, preventing too many people from using the innovation and/or encouraging the development of alternatives that do the same thing in a slightly different way without bringing any new value to the market (other than avoidance of royalties). Example: smartphone single-press links to place a call. Apple holds a patent on this “innovation,” so Android manufacturers, not wanting to pay the royalty Apple wanted for this “technology,” made their phones require that users press a second button before a call can be placed.
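The lump-sum vs. annuity comparison above is easy to sanity-check with a quick present-value calculation. This is just an illustrative sketch – the 3% real discount rate is my assumption, not a figure from the text:

```python
# Compare a $2M lump sum to 20 years of $100k annual payments,
# discounted at an assumed 3% real rate.
def present_value(annual_payment, years, rate):
    """Present value of an ordinary annuity (payment at end of each year)."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

lump_sum = 2_000_000
pv_annuity = present_value(100_000, 20, 0.03)
print(f"Lump sum:        ${lump_sum:,.0f}")
print(f"PV of 20x $100k: ${pv_annuity:,.0f}")
```

At a 3% discount rate the 20-year stream is worth somewhat less than the $2 million lump sum in present-value terms – i.e., the two options are in the same ballpark, which is the point of the proposal.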

As for legal risks, once a patent has “expired,” its legal liabilities should be transferred to society at large, in the same way that liability for the wheel does not fall on any existing person or company. If a product is later found to have hidden costs or side-effects, victims should be compensated for damages, but punitive damages would be unnecessary and pointless. If the developer of the product can be shown to have known about the hidden costs or side-effects, they should be charged with a crime. Otherwise, their award payments would simply be terminated, and possibly clawed back, depending on the degree of “ultimate worthlessness” of their product. That said, for good products (the ones that people continue to use safely for years afterward), the removal of this legal liability alone would tend to foster a significant cultural change: discouraging a whole class of frivolous lawsuits and removing one of the largest uncertainties potential inventors face. And while necessity may be the mother of invention, uncertainty is its enemy.


Posted in Uncategorized | 2 Comments

Ezra Klein’s Asymmetric Stupidity on the Futility of More Information

I’m borrowing Paul Krugman’s blog post title on this item by Ezra Klein. Krugman focused on the asymmetry of stupidity (actually, stubbornness) in liberals and conservatives, which results in the conservative party just being “stupid” all too often, lately. However, I want to address the actual research Klein is citing since it is being used and abused rampantly. It is deeply flawed in its conclusions, and it is frustrating to watch this cultural train wreck of a meme play out.

Jonathan Haidt’s book The Righteous Mind spends a good portion of its length presenting some of this research and making a similar conclusion to Klein.

Haidt poses with another person who thinks it’s normal to not change your mind in the face of evidence…

Their assertion is that more information tends to polarize people because people tend toward selective perception and cognitive bias.

This much is true.

But you cannot then assert that more information (hopefully, we’re talking about “evidence”) is counter-productive.

The flaw here is that this research zeroes in on the group of people that DIDN’T CHANGE THEIR MINDS. It is not surprising that focusing on this group makes it seem like NO ONE EVER CHANGES THEIR MIND!!! (lol)

If these researchers (or these interpreters) would focus on the people that do change their minds, they would find that one way or another “more information” does, in fact, serve as the catalyst.

I know this is true because (besides everyone else’s experiences) it happened to me, in the most important ways it can happen to a person. Until I was about 25 I was a hard-core conservative. That changed radically as I learned some new things. I’m not sure that it matters here what those things were – what will change a person’s mind this way or that way varies by person and over time. The point, however, is one we are all copiously aware of: People do change their minds from time to time – and when they do, it is thanks to some new information that was meaningful to that person, or their acceptance of something they’d already heard. Suddenly, that new frame of reference changes the world for that person, sometimes in a small way, other times in a big way.

All this research Klein cites really shows is that people tend not to change their minds, and when additional info comes in, those who don’t change their minds will tend to skim the information for what they think the “signal” is, while ignoring the “noise.” We would be stupid creatures if we did anything different. The problem of “more information” is getting the “right” information to the “right” people (and maybe putting it in the right kind of attention-grabbing context.)

Here’s a paragraph from Klein that exhibits how stupid this argument is – and, if he’s relaying their line of reasoning correctly, how stupid these researchers are being:

Kahan and his team had an alternative hypothesis. Perhaps people aren’t held back by a lack of knowledge. After all, they don’t typically doubt the findings of oceanographers or the existence of other galaxies. Perhaps there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. Perhaps humans reason for purposes other than finding the truth — purposes like increasing their standing in their community, or ensuring they don’t piss off the leaders of their tribe. If this hypothesis proved true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the participants are better equipped to argue for their own side.

If this is true, then we shouldn’t make any cultural progress on “facts” of reality. We should still be debating the virtues of slavery, whether those women are witches – and whether witches really float or not. Obviously, we’ve made some cultural traction on these issues. How did we do it? Shocking news: some people did some research and issued some publications expressing a contrary opinion on these matters. Then, slowly, people’s opinions changed.

My God. How is that possible!?!?

<end sarcasm>

So can we finally stop telling ourselves that people are locked-in to their views? Good grief – the whole Divergent series is dedicated to this absurdly facile concept. I realize that Divergent kinda-sorta tries to go the other way on this conclusion by the end of the series, but only in the weakest of ways, and only after a trilogy that supposes political differences have a predictable genetic basis (in the future). Ahem.


Addendum – further into Klein’s article, he describes Kahan’s study and results. According to Klein, higher math skill produced even worse partisan results.

It’s an interesting study, but what I see goes back to the Memory-Prediction model of intelligence presented in On Intelligence, by Jeff Hawkins. Hawkins is not a neuroscientist – he’s the inventor of the Palm Pilot’s handwriting recognition scheme. But he was frustrated by the failure of artificial intelligence and set out to help break down what, exactly, makes something intelligent. As I think he rightly points out, neural networks as they have so far been constructed get it all wrong, and his book explains why.

He explains (rightly, I think) that what makes something intelligent is the ability to predict what is going on based on a memory system. What I think the above study found was people making an implicit prediction of what they believed to be true – and because that prediction was so salient, those with the math skill to diagram the “sample space” chose not to bother.


This isn’t politics making us stupid, per se. In my mind, I immediately judge there to be a relevance discrepancy between a question about gun laws and one about the effectiveness of a skin cream. When it’s a question asking me to judge the results of an actual experiment, I actually did open Excel (it’s just easier that way) and compute the sample space (I got the “right” answer on the cream question).

But when it’s looking at the results of a gun ban (with a sample size of just 300+ people? Deaths? Assaults?) I intuitively know that there are all sorts of complications to a study trying to build that case, and a sample space of “300” on gun violence could never be statistically significant in modern America.
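To put a rough number on why n ≈ 300 is thin, here’s a back-of-the-envelope margin-of-error calculation – a plain binomial approximation that ignores all of the confounds mentioned above, which only make matters worse:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a binomial proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# With n = 300 and an underlying rate near 50%, the 95% confidence
# interval is roughly +/- 5.7 percentage points -- far too wide to
# resolve the modest effect sizes plausible for a city-level gun ban.
print(f"+/- {margin_of_error(0.5, 300):.1%}")
```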

I don’t know what variations Kahan conducted, but this may be the better explanation for why people who are good at math don’t bother – they know better than the “math-tards” right off the bat that there is a significant “prior” here – and the results reflect perfectly reasonable Bayesian computations that Kahan did not think to consider.


Posted in Uncategorized | Leave a comment

Force = Probability Wave Diffraction = Tunneling Competition

Technically, I don’t think I’m saying anything new here. However, there are probably people teaching physics who will view what I’m saying here as heretical. I don’t think it is – it’s just a re-framing of statements made by other physicists such as Leonard Susskind (The Black Hole War) and Brian Greene (The Fabric of the Cosmos). I’m currently reading Michio Kaku’s Einstein’s Cosmos, which was the muse for this post.

Why does the Earth orbit the Sun? Most will say this is due to the pull of the Sun on the Earth (gravity). Einstein’s world-changing idea was to explain that it isn’t the Sun tugging on the Earth, per se, but rather the Sun deforming spacetime, and the Earth, having a velocity and trying to maintain a straight-line path, being diverted by the curved spacetime. I’d like to go a step or two further in breaking down the summary-analogies we use to understand what’s happening.

Quantum Mechanics has shown us that there’s no such thing as matter. Everything that comprises the Earth is a wave. What we sense as mass is simply the level of interaction with the Higgs field. What we like to think of as particles are actually nebulous probability waves that have no specific location in spacetime – and they routinely tunnel – almost magically teleport – through barriers. In fact, as Feynman taught us, even when an electron makes a “quantum leap” from one orbital to another orbital inside an atom, it isn’t possible for that electron to accomplish the feat via a gradual change in position. What actually happens is:

  1. the energy that is allowing the electron to “leap” induces the spontaneous creation of a positron
  2. the positron and electron annihilate each other
  3. the combined energy of the annihilation induces the spontaneous creation of a seemingly new electron in the new orbital.

The positron here is an example of what are now called “virtual particles” – particles that pop in and out of existence all the time, and amazingly account for most of the “mass” of the matter we think we are comprised of. The process of steps 1 through 3 is also an example of “tunneling.”

Even more amazingly, those electrons and positrons are, again, not matter – they aren’t, actually, particles. They are waves. Probability waves, to be more accurate. And there is no limit to the extent of their probability waves – the force of gravity induced by an electron – though impossibly small – is still not zero, even at a distance of a trillion light years. Likewise, there is always a non-zero chance that an electron can spontaneously teleport – tunnel – to the other side of the moon, Jupiter, galaxy, or universe. The probability is very, very small, but it is never zero. And the smaller the teleport distance, the more likely it is. At lengths near the size of atoms, the probability of electron teleportation becomes so reliable that we build electronic circuits that rely on billions and billions of such teleportation events taking place every second.
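The distance-dependence described here can be made concrete with the standard textbook estimate for tunneling through a rectangular barrier, T ≈ e^(−2κL) with κ = √(2m(V−E))/ħ. This is a rough WKB-style sketch with illustrative barrier numbers of my own choosing, not a calculation from the post:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per eV

def tunneling_probability(barrier_ev, energy_ev, width_m):
    """Crude T ~ exp(-2*kappa*L) estimate for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV energy deficit, at an atomic-scale width vs a macroscopic one:
print(tunneling_probability(2.0, 1.0, 0.5e-9))  # appreciable at 0.5 nm
print(tunneling_probability(2.0, 1.0, 1e-3))    # underflows to 0.0 in floats;
                                                # the true value is nonzero but
                                                # absurdly small at 1 mm
```

The exponential is the whole story: halve the width and the probability improves enormously; widen it to everyday scales and it becomes smaller than floating-point can represent, though never exactly zero.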

So given all that, let me ask again, why does the Earth orbit the Sun?

Think of the Earth and Sun as agglomerations of trillions and trillions of “particles” – and by “particle,” I mean fuzzy, spinning, probability wave balls – like little spinning tornadoes that are spherical and have no boundaries – their spin and presence just becomes increasingly nebulous the further away from the particle you look. The “gravity wells” (often depicted as a funnel) of the Sun and Earth are actually the density of the sum of the presence of all these fuzzy, spinning balls that comprise each astronomical body.

Don’t imagine that electron tunneling is something that happens once every now and then, and mainly only on small scales. Imagine how many electrons comprise the “permanent” mass of the Earth, and then imagine how many additional electrons merely have a relatively plausible probability of tunneling to the location where the Earth is – or will be – from moment to moment. Given that vast quantity, the amazing reality falls into shape.

The Earth orbits the Sun not because of some intangible thing called gravity, or a spacetime funnel-shape. The Earth orbits the Sun because the probability (and therefore frequency) of tunneling events (all fundamental particles, not just electrons) on the Sun-side of the Earth’s path is lower than the number on the opposite side of the Earth from the Sun. The reason this is the case is that the Sun is also comprised of these fuzzy, spinning probability-wave balls (let’s call them “fuzzies”), and there is a slightly lower chance that the Sun’s fuzzies will tunnel to a location that is just outside the Earth’s orbit than a location that is just inside the Earth’s orbit. But tunneling events must result in an arrival into an acceptable location – ie, energy must be conserved, on the whole. This means that there are more “available tunneling locations” on the opposite side of the Earth from the Sun, as opposed to the Sun-side of the Earth.

You might think that this means the opposite side of the Earth from the Sun is like a “vacuum” – or at least more of a vacuum than the Sun-side. However, that’s not how it works. What it results in is more (successful) tunneling events on the non-Sun-side, and thus a faster orbital speed on the non-Sun-side than on the Sun-side. The difference in the quantity of events is small – the diameter of the Earth is about 13,000 km, which is about 0.01% of the distance to the Sun – but so also is the deviation from a straight-line path that the Earth experiences. The “g-force” we (and the Earth) experience from the acceleration caused by the Sun is astonishingly small, despite the fact that it induces our planet into a circular path each year.
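That “astonishingly small” acceleration is easy to check with ordinary Newtonian numbers (standard constants, not figures from the post):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # mean Earth-Sun distance, m

a = G * M_SUN / AU**2  # Sun's gravitational acceleration at Earth's orbit
g_fraction = a / 9.81  # as a fraction of Earth's surface gravity

print(f"a = {a:.2e} m/s^2  (~{g_fraction:.1e} g)")
```

The result is on the order of 6 millimeters per second squared – a few ten-thousandths of a g – yet sustained over a year it closes the orbit.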

If you remember the “lawnmower” analogy often used when teaching diffraction in beginning physics courses, you can see why I called this “Probability Wave Diffraction.” The analogy goes like this: as you push a lawnmower from a smooth concrete surface onto grass, the transition puts more resistance on the wheels that are rolling on the grass, causing the lawnmower to turn slightly as it crosses the boundary – unless you approach the grass at a 90-degree angle, or compensate for the torque. The really important part of this analogy – or any description of wave diffraction – is that the cause is unequal (propagation) speeds on various parts of the wave or waves.
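The lawnmower picture is really refraction, and the bend follows directly from the speed ratio across the boundary via Snell’s law, sin θ₂ / sin θ₁ = v₂ / v₁. A quick illustration with made-up speeds:

```python
import math

def refracted_angle(theta1_deg, v1, v2):
    """Angle (from the normal) after crossing a boundary where wave speed changes v1 -> v2."""
    theta2 = math.asin(math.sin(math.radians(theta1_deg)) * v2 / v1)
    return math.degrees(theta2)

# Wheels slow from "concrete speed" 1.0 to "grass speed" 0.7:
# a 30-degree approach bends toward the normal.
print(f"{refracted_angle(30, 1.0, 0.7):.1f} degrees")
```

A slower medium pulls the path toward the normal; approach at exactly 90 degrees (θ₁ = 0) and no bend occurs – just as with the lawnmower.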

A simpler view of the Earth’s orbit could simply say that, yes, the Earth is comprised of wave-particles, and spacetime’s funnel shape means that all these wave-particles are traversing a spacetime gradient, and this is what causes the orbital path to deviate from a straight-line trajectory. Specifically, the passage of time is slower on the Earth’s Sun-side than on its opposite side, and this differential time dilation is the true cause of the circular path.

That’s certainly valid – and perhaps it is a contributing factor that I should include in the “tunneling” explanation. However, the time dilation means fewer tunneling events. Also, while we hold the illusion that “particles” traverse linear paths as they travel, what Quantum Mechanics ultimately taught us is that the EPR paper is not valid – particles do not have an independent existence between points of interaction. That is, particles do not “travel in a straight line” with a definite velocity and known points of interaction – else Heisenberg Uncertainty would be violated. The reality we are living in is that all fundamental particles are, actually, probability waves that tunnel from point to point – even when apparently traveling in a straight line.

So what is “Tunneling Competition”? This is the term I’m using to try to illustrate the effect caused when multiple particles have the opportunity to tunnel to a particular point in space. The greater the probability that some particle will tunnel to a point in space – from anywhere – the less likely some additional particle will be able to tunnel there, as a consequence of the Pauli Exclusion Principle. The amazing thing is that this mere probability of tunneling (or not) exerts a force in the macro world we’re used to.

A final note: there’s one part of this description I know is very uncertain – the fact that the Pauli Exclusion Principle (and so the force arising from tunneling competition) only affects fermions, like electrons, but not bosons, such as He4. Yet He4 is itself comprised of fermions – protons, neutrons, and electrons. The question I still have is how, exactly, does a He4 atom move through space? We know it is a collection of probability waves. My intuition is that despite being a boson, its actual motion through spacetime is the jerky, uncertain tunneling of simpler particles like lone electrons – the kind of motion that leaves particles with merely probabilistic positions and velocities in between interactions. If so, it seems the best, most detailed, and accurate way of describing the motion of bodies – even astronomical bodies – is via an agglomeration of “fuzzies,” and the forces they feel arise from tunneling competition and from the changes in tunneling probabilities caused by diffraction of their own probability waves and their neighbors’.

For doubters, let me just pose this question to kick-start your imaginations: calculate the probability that the Earth will suddenly lose its orbital velocity. Like calculating the probability that a car will pass through a wall, the probability is not zero. It is astonishingly small – but it is not zero.

(See also Casimir Effect and Zero-Point energy)

Posted in Uncategorized | Leave a comment

Self-Aligning Vehicles

While I’m having an innovation Thursday, I was driving home tonight and noticed that my car was pulling to the right. This was probably a combination of the wind and the slant of the road, but I probably need to get my car’s alignment done, too.

Then it hit me. Why can’t car manufacturers put some lasers on the wheels and make your car automatically re-align the wheels? Would it be that hard?

I don’t think so. And I think there’s a hidden benefit. The car could automatically adjust to slanted roads and sustained cross-winds so that the driver doesn’t need to constantly keep pressure on the steering wheel to keep the car going straight – the vehicle could either tilt or turn the wheel alignment slightly in reaction to conditions where the driver is having to constantly apply pressure on the wheel.

Some will surely complain that this is unnecessary, “And besides – you should always keep your hands on the wheel, anyway!” Yeah. There are people that always drive with their hands at 10-and-2.

Dream on. In the real world, people have to adjust their position for a million different reasons. And if vehicles do a better job of self-aligning the wheels there will be fewer accidents, longer-lasting tire-treads, and better fuel-efficiency.

I’m no prophet of Apple, Inc. being the allegedly “innovative” company all too many say it is, but there’s no good reason why car manufacturers have been so incapable of producing real innovation that continually improves the process of driving – for everyone.

Posted in Uncategorized | Leave a comment