A few notes on crony beliefs

Kevin Simler at Melting Asphalt recently published a very nice summary post – Crony Beliefs. Building on the evo-psych work of Trivers, Haidt and Kurzban he distinguishes between two types of beliefs (“employees”):

  • Meritocratic Beliefs
    • beliefs that are entertained because of their epistemic value, i.e. beliefs that pay their rent in accurate predictions about the world
  • Crony Beliefs
    • beliefs that are not accurate representations of reality, but are kept around because of social value and signaling purposes

The important thing to understand is that crony beliefs pay rent too – not in accurate predictions, but as the entry and maintenance costs of social bonds.

This observation fits very well with the “bleeding-heart” approach to cognitive biases: while our reasoning is deeply flawed, most heuristics are understandable from the evolutionary point of view, actually work quite often (especially in the simpler ancestral environments) and, as Kevin adds, can even provide a lot of utility, albeit non-epistemic.

There are only a few (minor) things I’d like to add to this picture.

Breakdowns of meritocracy still exist

From Kevin:

I, for one, typically explain my own misbeliefs (as well as those I see in others) as rationality errors, breakdowns of the meritocracy. But what I’m arguing here is that most of these misbeliefs are features, not bugs. What looks like a market failure is actually crony capitalism. What looks like irrationality is actually streamlined epistemic corruption.

My feeling is that despite this, true meritocracy breakdowns are still more common than true crony beliefs. While there are a LOT of social biases (authority bias, social desirability bias, conformity bias, groupthink etc.), I think most biases are simply heuristics extrapolated too far and breaking down, i.e. providing neither epistemic nor social value.

Bounded rationality, compartmentalization and reflective equilibrium

AND ON TOP of that we have a bunch of beliefs that don’t have any causal links leading outside of our skulls, and beliefs that we hold but kind-of-sort-of don’t really know whether they are useful in any way, because the slow combine-harvester of System 2 (or some other, possibly non-conscious process) hasn’t yet mowed over the belief and “decided” to integrate it, throw it away, or leave it around unattached.


I’d really like to learn more about how this bounded belief examination and integration works.

Mindsets vs. beliefs

Finally, a point that Julia Galef raised: beliefs are possibly not that stable, and the same metaphor could be applied to mindsets instead.

Instead of crony/meritocratic employees (beliefs), you might have a crony/meritocratic HR & hiring process.

Julia’s metaphor of scout vs. soldier mindset roughly maps to meritocratic vs. crony beliefs too.

Kevin’s article is definitely worth reading in its entirety, because it gives a very vivid metaphor to understand the interplay of the two types of beliefs.

When Laws of Nature must concede to the Laws of Metre

Here is a wonderful anecdote about Charles Babbage: he sent the following letter to Alfred, Lord Tennyson about a factual imprecision in his poem “The Vision of Sin”, namely in this couplet:

Every minute dies a man,
Every minute one is born

I need hardly point out to you that this calculation would tend to keep the sum total of the world’s population in a state of perpetual equipoise, whereas it is a well-known fact that the said sum total is constantly on the increase.

I would therefore take the liberty of suggesting that in the next edition of your excellent poem the erroneous calculation to which I refer should be corrected as follows:

Every minute dies a man,
And one and a sixteenth is born

I may add that the exact figures are 1.167, but something must, of course, be conceded to the laws of metre.

Babbage reveals himself not only as the superior poet by being faithful to truth as the highest inspiration, but also as a superior scientist by respecting the limits of his craft.

Sorry, got it wrong:

Babbage reveals himself not only as the superior scientist by being faithful to truth as the highest inspiration, but also as a superior poet by respecting the limits of his craft.


The Ornithology of Epistemology

Philosophy of science is as useful to scientists as ornithology is to birds. 

— Richard Feynman

There is an interesting flock of bird concepts in epistemology, philosophy of science and risk theory:

Russell’s Turkey

Black Swan

Pink Flamingo

Ostrich Effect

Raven Paradox

Despite decades of intensive ornitho-epistemological research, the bird-ness of the Rabbitduck [1] could not be firmly established yet. Every time we think we have nailed it, the rabbit-ness wins over and we are back to square one. More research is needed.

[1] We are sad to report that there is still a small but very vocal part of our research community that insists on the clearly ignorant “Duckrabbit” nomenclature.

We also don’t subscribe to the wishy-washy, cheap consensus of the Copenhagen interpretation, which claims that the creature is both a rabbit and a duck.

Sararīman in the high agency economy and in the roboconomy

1. Rise of the freelancer

Taleb just reprinted his excellent How To Legally Own Another Person essay at Medium.com. It originally appeared at Evonomics, which is very much worth following.

Developed nations are increasingly relying on freelancer / contractor work (40% of America’s workforce will be freelancers by 2020, says Intuit’s report). This puts new demands on the workforce – high agency, independence, risk tolerance. Reputation is already one of the main currencies (think Uber, Airbnb et al.).

2. Internal and external coordination costs

Is this the death of the poor Sararīman (salaried employee)? How far can this trend go?

Coase’s Theory of the Firm taps into an important insight from complex systems: firms grow to reach a dynamic equilibrium between internal and external coordination costs, between economies and diseconomies of scale.

The technology-enabled, highly networked world with its remote communication and work tools and reputation-rings shifts the set point of most companies towards a more distributed, freelancer-reliant structure.

Nonetheless, there are limits to this shift, as coordination costs will never quite reach zero. More importantly, there are limits on the size of the workforce pool that has the required levels of agency and risk tolerance.
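Coase’s set-point argument can be sketched numerically. This is a toy model, not anything from Coase himself – the cost functions, exponents and fee values below are all made up for illustration: internal coordination costs grow superlinearly with headcount, contracting out costs a flat fee per task, and the firm stops growing where the marginal internal cost exceeds the external one.

```python
# Toy Coase model (illustrative numbers only): firm size settles where
# the marginal internal coordination cost meets the external
# per-contract cost of outsourcing the same task.

def marginal_internal_cost(n, a=1.0, k=1.5):
    """Cost of coordinating the n-th employee (superlinear in n)."""
    return a * (n ** k - (n - 1) ** k)

def optimal_firm_size(external_cost, max_n=10_000):
    """Largest n whose marginal internal cost stays below the external cost."""
    n = 1
    while n < max_n and marginal_internal_cost(n + 1) < external_cost:
        n += 1
    return n

# Cheaper remote-work tools and reputation systems lower the external
# cost, shifting the equilibrium toward smaller, freelancer-reliant firms.
print(optimal_firm_size(external_cost=30.0))  # larger firm
print(optimal_firm_size(external_cost=10.0))  # smaller firm
```

The only moving part is the external cost: drop it and the equilibrium firm shrinks, which is the whole “shifted set point” claim in one line.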

3. Rise of the robo-worker

There is one more, overwhelming trend coming into play. The rise of the freelancer (and the death of the sararīman) will co-evolve with automation. Uber freelancers can already start competing with their robotic colleagues.

The salaried man will not live long enough to die a glorious karōshi (death by overwork), but will rather die the double death of being outsourced to a freelancer or a robot.

4. The Tale of the Slave

While I don’t want to get into the neo-luddite debate now, I do want to link Taleb’s article with Nozick’s fantastically funny Tale of the Slave. It is a very short read, but very much worth it. The point being that maybe salaryman-hood is not such a great thing to cling to anyway.

5. The Diogenesian or Epictetian future of the salaried worker

Ultimately, the boundary conditions for the “freed” (= kicked-out, redundant) Sararīman are:

  1. living in a barrel like the slave Diogenes (the neo-luddite scenario)
  2. or flourishing like the ex-slave Epictetus, a wealthy and self-actualized freeman

Here, I’m more on the cautious optimist’s side.

While not everybody will become an influential philosopher (artist, writer…), my hope is that we will be wise enough to use the surplus generated by automation to make the barrel really comfy (good WiFi and VR goggles included, of course).

Wait, did I say optimistic?




Niches between two absurd positions


In many discussions we are drawn to extreme boundary values. Here is a possible dilemma for a potential parent:

“If all comes down to genetics, there is nothing I can do to really affect my child. It is a total gamble. I have no control.”

But imagine it was 100% nurture: every interaction with your child, every word you say shapes its personality – permanently and possibly irreversibly.

Who could bear this kind of responsibility?


This can be extended to everything in your life. (Learned) helplessness is a serious condition in which one believes that everything that happens to them is completely out of their control.

The other extreme is Total Responsibility – you are responsible for everything that happens to you, to your current conditions and future prospects.

Again an immense burden.


As is often the case, the truth is in between: genetics does determine most of the variance in traits (even complex ones), and its effect depends on age and overall socio-economic status (heritability increases with both). It seems that nurture doesn’t do much in the long term – but at least something in the short term. Beyond partner selection, one is largely (but not completely) absolved of metaphysical responsibility (definitely not the case for physical responsibility – shelter, food, protection).

For life outcomes it is similar – any outcome is a mixture of elements you control (your skills, the resources you invest, etc.) and random noise. The mix depends on the task/decision at hand, ranging from pretty much all noise (playing roulette) to much more controlled environments (though never completely without noise). You have some responsibility for the outcome, not total responsibility.
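The skill-vs-noise mixture can be made concrete with a toy simulation. All the numbers below (skill levels, the control weight, the noise distribution) are illustrative, not empirical:

```python
# Toy model: an outcome is a weighted blend of a controlled component
# (skill) and random noise. control = 0 is roulette; control = 1 is a
# fully deterministic environment. Real tasks sit somewhere in between.

import random

def outcome(skill, control, rng):
    """Blend of what you control and what you don't."""
    noise = rng.uniform(-1.0, 1.0)
    return control * skill + (1.0 - control) * noise

def win_rate(control, trials=10_000, seed=0):
    """How often a more skilled player (0.8) beats a less skilled one (0.2)."""
    rng = random.Random(seed)
    wins = sum(
        outcome(0.8, control, rng) > outcome(0.2, control, rng)
        for _ in range(trials)
    )
    return wins / trials

print(win_rate(0.0))  # ~0.5: pure noise, skill is irrelevant
print(win_rate(0.9))  # close to 1.0: skill dominates
```

The responsibility question maps directly onto the `control` parameter: the higher it is for a given task, the more of the outcome is legitimately yours.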

We are caught somewhere between two absurd positions, reminding me of:

Doubt is not a pleasant condition, but certainty is absurd.

— Voltaire


Possibility of an injection attack on self-driving deep neural networks

Image processing deep neural networks can be catastrophically confounded by imperceptibly small perturbations in the input image, as demonstrated by Szegedy et al. 2013.

Nguyen et al. 2014 used genetic algorithms to purposely evolve abstract images that well-trained neural networks confound with real objects.


Nguyen et al. 2014: an image evolved so that a neural network misclassifies it as a guitar. A “swerve left” command could in principle be evolved in a similar way.


Using these techniques, it could in principle be possible to construct artificial images (or video sequences) which, when injected into the visual field of a self-driving car, could cause unwanted, possibly dangerous behavior (such as a sudden swerve into opposing traffic).

It is theoretically possible (but likely very hard in practice) to create adversarial images that would have the same catastrophic effect even if they cover only part of the car’s visual field (e.g. a printout of such an image held up at the roadside).
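The mechanism behind such adversarial images can be sketched on a toy example, in the spirit of the gradient-sign attack (Goodfellow et al.). Everything here is a stand-in: a linear scoring function instead of a deep net, made-up weights and pixels – but the principle is the same: nudge every input dimension a tiny step in the direction that most increases the wrong output.

```python
# Toy gradient-sign attack. For a linear score w·x, the gradient w.r.t.
# the input is just w, so the strongest bounded perturbation adds
# eps * sign(w) to every pixel, raising the score by eps * sum(|w|).

def score(weights, image):
    """Linear stand-in for a network's 'swerve left' logit."""
    return sum(w * x for w, x in zip(weights, image))

def adversarial(weights, image, eps=0.05):
    """Perturb each pixel by at most eps in the gradient-sign direction."""
    return [x + eps * (1 if w > 0 else -1) for w, x in zip(weights, image)]

weights = [0.3, -0.7, 0.5, -0.2]         # toy model parameters
image   = [0.1,  0.2, 0.0,  0.4]         # benign input: negative score
perturbed = adversarial(weights, image)  # each pixel shifted by only 0.05

print(score(weights, image))      # below 0: no swerve
print(score(weights, perturbed))  # pushed upward by eps * sum(|w|)
```

In a deep network the gradient has to be computed by backpropagation rather than read off the weights, but the per-pixel nudges stay just as imperceptible.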

Speaking of injections – an older, “fun” idea is an SQL injection on a licence plate as a way to mess with automated traffic surveillance systems (the plate gets OCR-ed and written into a database – which possibly triggers a DROP TABLE if unguarded). This is a special case of injection attacks in which the adversarial data payload is a code snippet (a so-called “code injection”).


A “Licence plate” with an SQL injection attack as a way to fight back traffic cameras.
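For the curious, the injection itself is easy to demonstrate. A minimal sketch using Python’s sqlite3 – the table name, the plate string and the use of executescript() (standing in for a less guarded driver that allows multiple statements) are all illustrative:

```python
# Unguarded path: the OCR-ed plate string is pasted straight into the
# SQL text, so a crafted plate can close the quote and append its own
# DROP TABLE statement. Guarded path: a parameterized query stores the
# same payload as inert data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT)")

evil_plate = "ZU0666'); DROP TABLE plates; --"

# Vulnerable insert: string formatting builds the statement.
conn.executescript(f"INSERT INTO plates VALUES ('{evil_plate}")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # the plates table is gone

# Guarded insert: the '?' placeholder keeps the payload as plain text.
conn.execute("CREATE TABLE plates (plate TEXT)")
conn.execute("INSERT INTO plates VALUES (?)", (evil_plate,))
print(conn.execute("SELECT plate FROM plates").fetchall())
```

Whether a real ANPR pipeline is vulnerable depends entirely on whether it builds queries by string concatenation; the fix has been the same parameterized-query one-liner for decades.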

(I also discuss the “psychology” of deep learning networks here.)