A few notes on crony beliefs

Kevin Simler at Melting Asphalt recently published a very nice summary post – Crony Beliefs. Building on the evo-psych work of Trivers, Haidt and Kurzban he distinguishes between two types of beliefs (“employees”):

  • Meritocratic Beliefs
    • beliefs that are entertained because of their epistemic value, i.e. beliefs that pay their rent in accurate predictions about the world
  • Crony Beliefs
    • beliefs that are not accurate representations of reality, but are kept around because of social value and signaling purposes

The important thing to understand is that Crony Beliefs pay rent too – not in accurate predictions, but in covering the entry and maintenance costs of social bonds.

This observation fits very well with the “bleeding-heart” approach to cognitive biases: while our reasoning is deeply flawed, most heuristics are understandable from the evolutionary point of view, actually work quite often (especially in the simpler ancestral environments) and, as Kevin adds, can even provide a lot of utility, albeit non-epistemic.

There are only a few (minor) things I’d like to add to this picture.

Breakdowns of meritocracy still exist

From Kevin:

I, for one, typically explain my own misbeliefs (as well as those I see in others) as rationality errors, breakdowns of the meritocracy. But what I’m arguing here is that most of these misbeliefs are features, not bugs. What looks like a market failure is actually crony capitalism. What looks like irrationality is actually streamlined epistemic corruption.

My feeling is that, despite this, true meritocracy breakdowns are still more common than true crony beliefs. While there are a LOT of social biases (authority bias, social desirability bias, conformity bias, groupthink etc.), I still think most biases are simply heuristics extrapolating too far & breaking down, i.e. providing neither epistemic nor social value.

Bounded rationality, compartmentalization and reflective equilibrium

And on top of that we have a bunch of beliefs that don’t have any causal links leading outside of our skulls, and beliefs that we hold but kind-of-sort-of don’t really know whether they are useful in any way, because the slow combine harvester of System 2 (or some other, possibly non-conscious, process) hasn’t yet mowed over the belief and “decided” to integrate it, throw it away, or leave it lying around unattached.


I’d really like to learn more about how this bounded belief examination and integration works.

Mindsets vs. beliefs

Finally, a point that Julia Galef raised: beliefs are possibly not quite stable, and the same metaphor could be applied to mindsets instead.

Instead of crony/meritocratic employees (beliefs), you might have a crony/meritocratic HR & hiring process.

Julia’s metaphor of scout vs. soldier mindset roughly maps to meritocratic vs. crony beliefs too.

Kevin’s article is definitely worth reading in its entirety, because it gives a very vivid metaphor to understand the interplay of the two types of beliefs.

The Ornithology of Epistemology

Philosophy of science is as useful to scientists as ornithology is to birds. 

— Richard Feynman

There is an interesting flock of bird concepts in epistemology, philosophy of science and risk theory:

  • Russell’s Turkey
  • Black Swan
  • Pink Flamingo
  • Ostrich Effect
  • Raven Paradox

Despite decades of intensive ornitho-epistemological research, the bird-ness of the Rabbitduck [1] has not yet been firmly established. Every time we think we have nailed it, the rabbit-ness wins over and we are back to square one. More research is needed.

[1] We are sad to report that there is still a small but very vocal part of our research community that insists on the clearly ignorant “Duckrabbit” nomenclature.

We also don’t subscribe to the wishy-washy, cheap consensus of the Copenhagen interpretation, which claims that the creature is both a rabbit and a duck.

Against Beauty II

This blog just recently celebrated its first anniversary!

In my first post, Against Beauty, I argued that beauty is likely not a good criterion for scientific theories.

It is telling that now, a year later, I came across a quote from one of my heroes – Ludwig Boltzmann:

If you are out to describe the truth, leave elegance to the tailor.

— Ludwig Boltzmann

Incidentally, Lisa Randall talks along similar lines in a recent episode of the On Being podcast:

[…] you can frame things so that they seem more beautiful than they are, or less beautiful than they are. For science to be meaningful, you want to have as few ingredients as possible to make as many predictions as possible with which you can test your ideas. So I think that’s more the sense — I think that’s what people are thinking of. And simplicity, by the way, isn’t always beauty.

While I agree with the beauty part, I’m of a different opinion on the ultimate role of simplicity in evaluating scientific theories (simplicity understood here as the Kolmogorov-Chaitin complexity of the given model).

As we learn more about the universe, we will necessarily have to abandon effective theories that are “human readable”. The world is too complex to be describable by human-mind-sized models.

For every complex problem there is an answer that is clear, simple, and wrong.

— H. L. Mencken

Beauty and simplicity are often conflated (I’m also not always clear on which one I mean).

My current thinking is that if we consider the beauty of a theory to be the ratio of the model’s explanatory (predictive) power to its Kolmogorov length, then this will remain a relevant model selection criterion.
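One hedged way to write that criterion down (my own notation – P and K are just labels I introduce here, not anything taken from Randall or Boltzmann):

```latex
% A sketch of the "beauty as a ratio" criterion (my own notation).
% P(T): the explanatory/predictive power of theory T
% K(T): the Kolmogorov length of T's shortest description
\[
  \mathrm{beauty}(T) \;\propto\; \frac{P(T)}{K(T)}
\]
% Among theories of equal predictive power, the shorter one scores higher:
% an MDL-flavoured razor, with nothing forcing K(T) to stay human-mind-sized.
```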

But I think that, ultimately, we will have to say goodbye to the notion that the Kolmogorov complexity of models is not allowed to cause a stack overflow in human brains.

The uncertainty around Knightian uncertainty

Definitions are due

Knightian uncertainty is the proposition that an agent can face completely unknowable and incalculable uncertainty about an event. This type of uncertainty goes far beyond the colloquial meaning of “uncertainty”, i.e. an event with subjective probability 0 < p < 1, by refusing to ascribe any probability distribution to the proposition in question.

While the little devil of common sense sitting on your shoulder might wisely nod in approval, the bayesian angel on the other shoulder screams: “Impossible!”. A proper bayesian agent is infinitely opinionated and can serve you a probability distribution for any proposition. Anything short of that leads to an exploitable flaw in your decision theory.

So are there fundamentally unknowable events, or is this just sloppy thinking? Are non-bayesian decision theories leaving money on the table, or are bayesians setting themselves up for ruin via a black swan?

Knightian uncertainty in humans

Let’s start with something uncontroversial: humans, even at their best, are only very weak approximations to a bayesian reasoner, and therefore it might not surprise us that they could legitimately exhibit fundamental uncertainty. A good summary, as usual, can be found in Yudkowsky’s When (not) To Use Probabilities – humans are inherently bad at reasoning with probabilities and thus open to Dutch book exploits due to inconsistencies. While some see this as a failure, others say a prudent thinker can rightfully be stubborn and refuse to stick out his neck.
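To make the Dutch book point concrete, here is a minimal toy sketch (my own construction, not from the linked post) of how a bookie profits from an agent whose credences for an event and its complement sum to more than one:

```python
# Minimal Dutch book sketch (illustrative toy, numbers are made up).
# The agent states incoherent credences: P(rain) = 0.6 and P(no rain) = 0.6.
# The bookie sells a $1 bet on each event at the agent's own "fair" price.

credence_rain = 0.6
credence_no_rain = 0.6                  # incoherent: 0.6 + 0.6 > 1

price_rain = credence_rain * 1.0        # bet pays $1 if it rains
price_no_rain = credence_no_rain * 1.0  # bet pays $1 if it does not rain

total_paid = price_rain + price_no_rain  # $1.20 paid up front for both bets

for outcome in ("rain", "no rain"):
    payout = 1.0                        # exactly one of the two bets pays out
    profit = payout - total_paid
    print(f"{outcome}: agent's net result = {profit:+.2f}")
# Both branches print -0.20: a guaranteed loss, whatever the weather.
```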

As a side note, we don’t have to require a bounded reasoner to literally have a distribution for every event. But shouldn’t he/she be able to compute one when pushed hard enough?

For humans, claiming Knightian uncertainty can be a crude but useful heuristic that keeps us from playing games where we might be easy to exploit. Does the concept generalize beyond the quirks of human psychology?

The luxury of a posterior

The role of the decision theory of an optimizing agent is to help it maximize its utility function. The utility at any given time also depends on the environment, and therefore it might not be surprising that under certain conditions it can be beneficial to tailor the agent’s decision theory to the specifics of a given environment.

And some environments might be more hostile to cognition than others. Evolutionary game theory simulations often have bayesian reasoners getting beaten by simpler agents that dedicate resources to aggressive expansion instead of careful deliberation (I’m quite sure I have this from Artem Kaznatcheev, but for the life of me I can’t find the link). A similar situation occurs in iterated prisoner’s dilemma tournaments.
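I can’t reconstruct the original simulation, but a toy replicator-dynamics sketch (entirely my own construction, with made-up payoff numbers) shows the basic effect: once careful inference carries a fixed resource cost, the cruder strategy takes over the population.

```python
# Toy replicator dynamics (my own illustrative numbers, not Kaznatcheev's model).
# Two strategies compete:
#   "bayesian" - makes slightly better decisions, but pays a cognition cost
#   "expander" - decides crudely and spends everything on reproduction

decision_quality = {"bayesian": 1.0, "expander": 0.9}
cognition_cost = {"bayesian": 0.2, "expander": 0.0}

share = {"bayesian": 0.5, "expander": 0.5}   # initial population shares

for generation in range(50):
    fitness = {s: decision_quality[s] - cognition_cost[s] for s in share}
    mean_fitness = sum(share[s] * fitness[s] for s in share)
    # Replicator update: each strategy grows in proportion to its relative fitness.
    share = {s: share[s] * fitness[s] / mean_fitness for s in share}

print(share)   # the expander ends up with almost the entire population
```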

While these simulations are somewhat artificial, we might approach these harsh-for-cognition situations in e.g. high-frequency trading, where constructing careful posteriors might be a luxury and a less sophisticated but faster algorithm might win out. As an example, here is an (unsourced) quote from Noah Smith:

Actually, there are deep mathematical (information-theoretical) reasons to suspect that lots of HFT opportunities can only be exploited by those who are willing to remain forever ignorant about the reason those opportunities exist.

Interestingly, a sort of “race to the cognitive bottom” might play out in a multipolar artificial intelligence take-off. While a singleton artificial intelligence might near-optimally allocate part of its resources to improving its decision theory, in a multipolar scenario (fragile as it might be) the winning strategy can be slimming down the cognitive modules to the bare minimum necessary to beat the competition. A biological mirror image of such a scenario is the breakdown of the Spiegelman Monster discovered by Eigen and Oehlenschlager.

Apart from these concerns, another motivation for Knightian uncertainty in algorithmic trading can be the split between internal and actionable probabilities in some market-making algorithms, as a protection against adverse selection (more on that here).
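A hedged sketch of what I mean by that split (my own toy illustration, not an actual trading algorithm): the market maker keeps a best-guess internal probability, but only quotes prices pushed away from it by an adverse-selection margin, effectively refusing to state a tradable probability inside that band.

```python
# Toy "internal vs. actionable" probability split (my own illustration).
# The internal estimate values a binary contract at 0.55, but the quotes the
# algorithm is willing to trade at are padded against better-informed traders.

internal_prob = 0.55   # best internal estimate that the event happens
margin = 0.05          # protection against adverse selection

bid = internal_prob - margin   # willing to buy at 0.50
ask = internal_prob + margin   # willing to sell at 0.60

print(f"internal belief: {internal_prob:.2f}")
print(f"actionable quotes: bid {bid:.2f} / ask {ask:.2f}")
# Inside the (bid, ask) band the algorithm declines to trade at all -
# a small, practical dose of Knightian reticence.
```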

In summary, not constructing a posterior for a proposition could be a reasonable strategy for a much wider class of reasoners than quirky humans, especially under resource- or computation-time-bounded scenarios. After all, there are no free lunches, not even for bayesians.

While this all sounds reasonable, it still leaves me unclear about a general framework for selecting decision theories when a bayesian approach is too expensive.

Substrate level Knightian uncertainty

There is still one more possible step – moving the uncertainty out of the cranium of agents into the wild world, into physical reality itself. Scott Aaronson’s fascinating paper The Ghost in the Quantum Turing Machine is built on the thesis of “Knightian freedom”, an in-principle physical unpredictability that goes beyond probabilistic unpredictability and is inherent to the quantum nature of physics. As a poor bounded cogitor, I’ll proclaim here my own Knightian uncertainty and refuse to fabricate opinions on this thesis [1].

[1] Ok, I found the paper very interesting, but I don’t agree with most of it. Nonetheless, I don’t feel anywhere near knowledgeable enough to go into a much deeper critique.

Probabilitas Realis

There is no way, however, in which the individual can avoid the burden of responsibility for his own evaluations. The key cannot be found that will unlock the enchanted garden wherein, among the fairy-rings and the shrubs of magic wands, beneath the trees laden with monads and noumena, blossom forth the flowers of PROBABILITAS REALIS.

With these fabulous blooms safely in our button-holes we would be spared the necessity of forming opinions, and the heavy loads we bear upon our necks would be rendered superfluous once and for all.

— Bruno de Finetti, Theory of Probability, Vol. 2