The uncertainty around Knightian uncertainty

Definitions are due

Knightian uncertainty is the proposition that an agent can face completely unknowable and incalculable uncertainty about an event. This type of uncertainty goes far beyond the colloquial meaning of “uncertainty”, i.e. an event with subjective probability 0 < p < 1, in that it refuses to ascribe any probability distribution to a given proposition.
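
To make the refusal concrete: one common formalization (my illustration here, not anything from Knight) is to let the agent carry an interval of probabilities rather than a point estimate, and to evaluate bets by their worst case. A minimal sketch, with made-up payoffs:

```python
# Minimal sketch of imprecise probabilities (an illustration, not from the post).
# A Knightian agent keeps a *range* of candidate probabilities for an event
# instead of a single number, and ranks a bet by its worst-case expected value.

def maxmin_value(payoff_if_true, payoff_if_false, p_low, p_high):
    """Worst-case expected payoff over all p in [p_low, p_high]."""
    # Expected value is linear in p, so checking the endpoints suffices.
    endpoints = (p_low, p_high)
    return min(p * payoff_if_true + (1 - p) * payoff_if_false for p in endpoints)

# A bayesian with p = 0.5 is indifferent to a +1/-1 coin-flip bet; an agent
# holding only "p is somewhere in [0.3, 0.7]" values it at -0.4 and declines.
print(f"{maxmin_value(+1, -1, 0.3, 0.7):.2f}")  # -> -0.40
```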

While the little devil of common sense sitting on your shoulder might wisely nod in approval, the bayesian angel on the other shoulder screams: “Impossible!” A proper bayesian agent is infinitely opinionated and can serve you a probability distribution for any proposition. Anything short of that leaves an exploitable flaw in your decision theory.

So are there fundamentally unknowable events, or is this just sloppy thinking? Are non-bayesian decision theories leaving money on the table, or are bayesians setting themselves up for ruin via a black swan?

Knightian uncertainty in humans

Let’s start with something uncontroversial: humans, even at their best, are only very weak approximations of a bayesian reasoner, so it should not surprise us that they could legitimately exhibit fundamental uncertainty. A good summary, as usual, can be found in Yudkowsky’s When (not) To Use Probabilities – humans are inherently bad at reasoning with probabilities and thus open to Dutch book exploits due to inconsistencies. While some see this as a failure, others say a prudent thinker can rightfully be stubborn and refuse to stick his neck out.
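
To see concretely what such a Dutch book exploit looks like, here is a toy example (my own illustration, not from Yudkowsky’s essay): an agent whose probabilities for an event and its complement sum to more than one will happily buy a pair of bets that together guarantee a loss.

```python
# Toy Dutch book (an illustration, not from the linked essay).
# Suppose an agent incoherently believes P(rain) = 0.6 and P(no rain) = 0.6.
# A bookie sells a $1 ticket on each outcome at the agent's own prices.

def dutch_book_loss(p_event, p_complement, stake=1.0):
    """Agent's guaranteed loss when its prices for A and not-A sum past 1."""
    cost = (p_event + p_complement) * stake  # agent buys both tickets at its own prices
    payout = stake                           # exactly one ticket pays off, whatever happens
    return cost - payout

print(f"{dutch_book_loss(0.6, 0.6):.2f}")  # -> 0.20 lost with certainty
```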

As a side note, we don’t have to require a bounded reasoner to literally have a distribution for every event. But shouldn’t he or she be able to compute one when pushed hard enough?

For humans, claiming Knightian uncertainty can be a crude but useful heuristic to avoid playing games where we might be easy to exploit. Does the concept generalize beyond the quirks of human psychology?

The luxury of a posterior

The role of a decision theory is to help an optimizing agent maximize its utility function. The utility at any given time also depends on the environment, so it should not be surprising that under certain conditions it can be beneficial to tailor the agent’s decision theory to the specifics of a given environment.

And some environments might be more hostile to cognition than others. In evolutionary game theory simulations, bayesian reasoners often get beaten by simpler agents that dedicate resources to aggressive expansion instead of careful deliberation (I’m quite sure I have this from Artem Kaznacheev, but for the life of me I can’t find the link). A similar situation occurs in iterated prisoner’s dilemma tournaments.
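
The flavor of those results can be reproduced with a toy replicator model (entirely my own construction, with made-up payoff numbers, not the simulation alluded to above): once deliberation carries a fixed per-round cost, the better decisions of a careful reasoner may not pay for themselves.

```python
# Toy model of costly cognition under selection (my construction).
# "Thinkers" make better choices (higher gross payoff) but pay a fixed
# cognitive overhead; "fast" agents grab resources without deliberating.

THINKER_GROSS, FAST_GROSS = 1.10, 1.00   # assumed per-round payoffs
COGNITION_COST = 0.15                    # assumed fixed cost of deliberation

def evolve(share_thinkers=0.5, generations=50):
    """Replicator dynamics: a strategy's share grows with its relative fitness."""
    s = share_thinkers
    f_think = THINKER_GROSS - COGNITION_COST  # net fitness of deliberators
    f_fast = FAST_GROSS                       # net fitness of fast expanders
    for _ in range(generations):
        mean_fitness = s * f_think + (1 - s) * f_fast
        s = s * f_think / mean_fitness        # standard replicator update
    return s

# With these numbers the deliberators are driven toward extinction:
print(f"thinker share after 50 generations: {evolve():.3f}")
```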

While these simulations are somewhat artificial, we may approach such harsh-for-cognition conditions in, e.g., high-frequency trading, where constructing careful posteriors can be a luxury and a less sophisticated but faster algorithm might win out. As an example, we have an (unsourced) quote from Noah Smith:

Actually, there are deep mathematical (information-theoretical) reasons to suspect that lots of HFT opportunities can only be exploited by those who are willing to remain forever ignorant about the reason those opportunities exist.

Interestingly, a sort of “race to the cognitive bottom” might play out in a multipolar artificial intelligence take-off. While a singleton artificial intelligence might near-optimally allocate part of its resources to improving its decision theory, in a multipolar scenario (fragile as it might be) the winning strategy can be slimming the cognitive modules down to the bare minimum necessary to beat the competition. A biological mirror image of such a scenario is the breakdown of the Spiegelman Monster observed by Eigen and Oehlenschlager.

Apart from these concerns, another motivation for Knightian uncertainty in algorithmic trading can be the split between internal and actionable probabilities in some market-making algorithms, as a protection against adverse selection (more on that here).
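
A sketch of how such a split can work (my reading of the mechanism; the numbers and the shade parameter are invented): the algorithm keeps an internal probability as its fair-value estimate but only ever acts on bid/ask quotes shaded away from it, so informed counterparties must pay the spread to trade against it.

```python
# Sketch of internal vs. actionable probabilities in market making
# (my illustration of adverse-selection protection; "shade" is a made-up
# parameter). For a binary contract, the internal belief p is the fair
# price, but the quoted bid/ask are pulled toward 0 and 1 respectively.

def quotes(p_internal, shade=0.05):
    """Actionable bid/ask around an internal probability estimate."""
    bid = max(0.0, p_internal - shade)  # price at which we are willing to buy
    ask = min(1.0, p_internal + shade)  # price at which we are willing to sell
    return bid, ask

# Internally we believe the event is worth 0.62, but we never act at 0.62:
bid, ask = quotes(0.62)
print(f"bid={bid:.2f}, ask={ask:.2f}")  # -> bid=0.57, ask=0.67
```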

In summary, not constructing a posterior for a proposition can be a reasonable strategy even for a much wider class of reasoners than quirky humans, especially under resource- or computation-time-bounded scenarios. After all, there are no free lunches, including for bayesians.

While this all sounds reasonable, it still leaves me unclear about a general framework for selecting decision theories when a bayesian approach is too expensive.

Substrate-level Knightian uncertainty

There is still one more possible step – moving the uncertainty out of the cranium of agents into the wild world, into physical reality itself. Scott Aaronson’s fascinating paper The Ghost in the Quantum Turing Machine is built on the thesis of “Knightian freedom”, an in-principle physical unpredictability that goes beyond probabilistic unpredictability and is inherent to the quantum nature of physics. As a poor bounded cogitor, I’ll proclaim here my own Knightian uncertainty and refuse to fabricate opinions on this thesis [1].

[1] Ok, I found the paper very interesting, but I don’t agree with most of it. Nonetheless, I don’t feel anywhere near knowledgeable enough to go into a much deeper critique.
