Authentic and Inauthentic Loyalty

Loyalty is a desirable core value, but I think it is sometimes misunderstood. I’d like to distinguish between two types of loyalty – authentic and inauthentic.

Authentic loyalty is the stance one thinks of when we talk about loyalty – the steadfast quality of sticking with a person, group or organisation that comes from a deep feeling of caring and belonging.

Inauthentic loyalty looks the same on the surface – grin and bear it, stick through good times and bad. Under the surface, however, it can harbor resentment, contempt and even anger.

If you consider loyalty one of your core values, it is worth investigating what its ultimate source is.

The near enemy of loyalty is complacency

Buddhism has an interesting concept of “near enemies”. These are undesirable emotions that can be confused with noble ones. For example, compassion is a desirable emotion and its near enemy is pity (because pity is divisive at its core).

I’d suggest that the near enemy of loyalty is complacency.

What looks like steadfast loyalty can, under the surface, be complacency born of fear, laziness or some other source of resistance to change. This “poisons” the relationship and causes slow, silent damage to both sides (individuals, groups, organisations).

Can inauthentic loyalty turn authentic?

Sometimes “fake it till you make it” does work. My suspicion is that inauthentic loyalty can indeed sometimes turn into the deep, authentic stance. However, I expect this is rare in personal relationships and pretty much non-existent towards organisations and businesses.

Resentment and contempt are particularly hard emotions to deal with, and time tends to deepen them rather than heal them.

So if the way “through” is unlikely, what other possibilities do we have?

Escaping the trap of inauthentic loyalty

As with any “near enemy”, inauthentic loyalty can only be recognized with patient, self-compassionate, introspective mindfulness. The recognition is often not a singular moment of breakthrough, but a slow dawning, possibly painful and full of backtracks.

Ultimately, moving beyond inauthentic loyalty brings liberation and eventually growth, but it is not an easy process.

 


We have stumbled into the era of machine psychology

Emergent science in an emergent world

When describing complex emergent systems, science has to switch from lower- to higher-level descriptions. Here is a typical sequence of such transitions:

  1. We go from physics to chemistry when we encounter complex arrangements of large numbers of atoms/molecules.
  2. Complex chemistry in living systems is then described in terms of biology.
  3. Complex (neuro-)biology in human brains finally gives rise to the field of psychology.

Of course, the world is not discrete and the transitions between the fields are fuzzy (think of the chemistry–biology shoreline of bio-macromolecules and cytology). And yes, the (mostly infertile) philosophical wars on the ontology of emergence are still being waged. Yet nobody would deny the epistemic usefulness of higher-level descriptions. Every transition to a higher-order description brings its own ‘language’ for describing the object, as well as a suite of research methods.

In this game, however, it is very easy to miss the forest (high-level) for the trees (low-level). One interesting example I’ve noticed recently is in the field of machine learning. When studying deep neural networks (DNNs), we have already unknowingly stumbled into such a transition. Historically, most of the research has been done on the “biology” of the DNNs – the architecture of the networks, activation functions, training algorithms, etc. (And yes, saying “biology” is bio-chauvinistic on my part. We should find a better word!)

Recently, however, we are tapping more and more into the “psychology” of neural networks.

Machine psychology

The deep architectures now in use aren’t reaching anywhere near the complexity of human brains, yet. However, with connections in the billions (here is an early example), they are too complex, too opaque, for a low-level description to be sufficient for their understanding. This has led to a steady influx of research strategies that shift the approach from the bottom-up understanding of “machine biology” to the more top-down, “input-output” strategy typical of psychology.

Of course, neural networks are commonly, though not quite deservedly, described as “black boxes”. And historically, parts of psychology had their flirtations with cybernetics. But it is only recently that we see a curious methodological convergence between these two fields, as machine learning starts to adopt the methods of psychology.

The interesting distinction between machine and human psychology is that we have direct access to the “brain” states of the network (the inputs and activations of each neuron). With machine psychology, we are now shifting attention to their “mental” states, something that is accessible only with higher-order, indirect methods.

Psychology of machine perception

A first example of the convergence comes from the psychology of perception.

Deep neural networks have revolutionized the field of computer vision by crushing competing approaches in all benchmarks (see e.g. last year’s ImageNet competition). Yet a deeper intuition for how the DNNs actually solve image classification requires techniques similar to those used in the psychology of perception.

As an example: an “input-output” strategy recently yielded an attack on neural network image classification, developed by Szegedy et al. 2013. In this work, they took correctly classified images and modified them imperceptibly, so that the trained network got completely confused (see Fig 1a below). While on the surface such confusion seems alarming, one should just remind oneself of the many quirks of the human visual cortex (Fig 1b).


Fig 1a: Example from Szegedy et al. 2013: the image on the left is correctly classified by a neural net as a school bus. The imperceptibly modified image on the right is, however, classified as an ostrich. The middle panel shows the pixel difference between the two images, magnified 10x.


Fig 1b: Your visual cortex classifies the colors of fields A and B as distinct. They are the same.
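
To make the “input-output” flavor of this attack concrete, here is a minimal sketch in Python/PyTorch. Szegedy et al. found their perturbations with a box-constrained L-BFGS optimization; the sketch below instead uses the simpler, one-step fast gradient sign method later proposed by Goodfellow et al., and the `model`, `image` and `epsilon` are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """One-step adversarial perturbation: nudge every pixel slightly
    in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # For a small epsilon the change is imperceptible to us, yet it
    # can flip the network's prediction (school bus -> ostrich).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Applied to a correctly classified image, a single gradient step like this is often enough to produce the kind of flip shown in Fig 1a.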

Nguyen et al. 2014 then turned this game around and used genetic algorithms to purposely evolve abstract images that well-trained neural networks confound with real objects. Again, examples for a DNN and the human visual cortex are below (Fig. 2a and 2b).


Fig 2a: Image evolved so that a neural network misclassifies it as a guitar.


Fig 2b: An image of a lamp misclassified by your dirty, dirty mind.
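
For intuition, here is a toy version of that evolutionary loop. Nguyen et al. actually used indirect CPPN image encodings and more elaborate evolutionary machinery; this bare-bones (1+λ) hill-climber and the hypothetical `confidence_fn` (the classifier’s confidence that an image belongs to the target class) are only stand-ins for illustration.

```python
import numpy as np

def evolve_fooling_image(confidence_fn, target_class, shape=(64, 64),
                         offspring=50, generations=200, noise=0.1):
    """Keep mutating the current best image, retaining the mutant the
    classifier rates most confidently as the target class -- even
    though the image itself looks like abstract noise."""
    best = np.random.rand(*shape)
    best_score = confidence_fn(best, target_class)
    for _ in range(generations):
        for _ in range(offspring):
            mutant = np.clip(best + noise * np.random.randn(*shape), 0, 1)
            score = confidence_fn(mutant, target_class)
            if score > best_score:
                best, best_score = mutant, score
    return best, best_score
```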

Gestalt psychology for hierarchical feature extraction?

These confounding attacks on classifiers are very important, since deep neural nets are increasingly being employed in the real world. A better understanding of machine perception is required to make the algorithms more robust and to avoid fraud (some examples here).

The reason image classification works so well with deep architectures is their ability to automatically extract hierarchies of features from images. Making them more robust to attacks requires better integration of these hierarchies into “global wholes”, well summarized by Kurt Koffka’s gestalt-psychology mantra, “The whole is other than the sum of the parts” (not “The whole is greater than the sum of its parts”).

Psychometrics of neural networks

The cross-fertilization of machine learning by psychology doesn’t end with perception theory.

The measurement of psychological traits is the bread-and-butter of psychometrics, and the crown jewel is of course intelligence testing. This is even more salient for the field of artificial intelligence. In an early example, Wang et al. 2015 recently made headlines (e.g. here) by claiming to beat the average Amazon Mechanical Turk performance on a verbal IQ test.

Oddly enough, I haven’t yet found a reference using deep nets on Raven’s progressive matrices. This seems like a very obvious application for deep networks, as Raven’s matrices are small, high-contrast images and a successful solution requires extracting multi-level hierarchies of features. I expect that DNNs will very soon blow humans out of the water on this test.

Raven’s matrices are the go-to test for human intelligence, with a g-loading around 0.8 and virtually no cultural bias. Such an experiment would likely show the nets achieving IQ 200+, in a very vivid illustration of the relationship between proxies for g and actual “general intelligence” – the holy grail of artificial general intelligence (AGI) research.

Here, then, is a nice summer project: put together a DNN for solving Raven’s matrices. I even recall a paper on machine generation of test items, so enough training data will not be a problem!

Deep nets and Raven’s progressive matrices are made for each other.
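
To make the project concrete, here is one way the task could be framed: a small convolutional scorer that rates how well each candidate tile completes the puzzle. Everything below – the stack-the-tiles-as-channels encoding, the layer sizes – is my own guess at a workable setup, not an established architecture.

```python
import torch
import torch.nn as nn

class RavenScorer(nn.Module):
    """Scores one candidate answer against the eight context tiles of
    a Raven's matrix; the nine tiles are stacked as input channels."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.score = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, tiles):                  # tiles: (batch, 9, H, W)
        return self.score(self.features(tiles))

# At test time: score all eight answer candidates, answer with the argmax.
```

One would train it with a cross-entropy (or ranking) loss to place the correct completion above the distractors.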

Machine psychotherapy, creativity and aesthetics

On a joking note – if there is machine psychology, could there also be machine psychotherapy? How would a venerable Freudian help his DNN clients?

There are some very playful examples done with generative models (based on recurrent deep networks), e.g. text generation à la Shakespeare/Graham/Wikipedia. A machine therapist will definitely be able to use the good old tools of word-association games and automatic writing to diagnose whatever turns out to be the digital equivalent of the Oedipus complex in his machine patients.
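
For a flavor of those generative models, here is a minimal character-level sketch in Python/PyTorch. The architecture and the `stoi`/`itos` vocabulary mappings are illustrative assumptions, and an untrained net will of course produce pure gibberish – fitting enough for free association.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Character-level language model in the spirit of the
    Shakespeare/Graham generators mentioned above."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.gru(self.embed(x), state)
        return self.out(h), state

def free_associate(model, stoi, itos, prompt="Dr. Freud: ", length=200):
    """Sample one character at a time from the model's distribution."""
    idx = torch.tensor([[stoi[c] for c in prompt]])
    state, text = None, prompt
    for _ in range(length):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        idx = torch.multinomial(probs, 1).view(1, 1)
        text += itos[idx.item()]
    return text
```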


Did you dream about electric sheep again, Mr. Android?

Even the good old cliché of dream interpretation can be brought out of retirement.
Geoffrey Hinton spoke about machine dreams a long time ago. And the psychologists are already picking up on this:

One of the areas that I’ve been looking at recently is machine dreaming, the question whether AI systems are already dreaming. There’s little question that they meet our criteria for what a dream is, they meet all our definitional criteria. There’s better evidence really that machines, AI systems, are dreaming, than there is that animals are dreaming that are not human.

— Associate Professor of Psychology, James Pagel on the “All in the Mind” podcast.

The excellent paper by Google researchers, Inceptionism: Going Deeper into Neural Networks, shows beautiful demonstrations of DNN fantasies, dreams and pareidolia. The psychology of digital psychedelic experience is close behind, too.

What deep neural nets actually dream about.
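
Mechanically, the “dreaming” in that paper is gradient ascent on the input image itself. Here is a heavily simplified sketch, assuming a pretrained VGG16 from torchvision and an arbitrary layer choice; the Inceptionism work used a GoogLeNet/Inception network plus extra tricks (multi-scale “octaves”, jitter) omitted here.

```python
import torch
import torchvision.models as models

def deep_dream(image, steps=20, lr=0.01, layer_idx=10):
    """Gradient ascent on the input image: amplify whatever features a
    chosen layer already weakly 'sees' in it (pareidolia by design)."""
    net = models.vgg16(pretrained=True).features[:layer_idx].eval()
    image = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # Maximizing the activation norm strengthens the patterns the
        # layer responds to, producing the dream-like imagery.
        net(image).norm().backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()
```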

This section is of course tongue-in-cheek, but its aim is to illustrate that, already now, state-of-the-art DNNs can achieve very rich “mental” states.

Sidenote: speaking of machine therapy, the other way around – machines being therapists to humans – is a promising field of research. Indeed, they seem to have come a long way since the command-line therapist and `M-x doctor` (for the Emacs fans out there).

Machine ethology. Machine sociology. Machine etiquette. Machine politics.

Machines are already talking to each other a great deal: think of the internet, communication networks, or the budding world of the internet of things. For now, the conversation is only between agents of low sophistication using simple, rigid protocols. We could perhaps already talk about machine ethology, maybe even nascent sociology. TCP/IP is an example of simple machine etiquette.
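
For a taste of that rigid etiquette, here is a minimal Python sketch; the host and request are arbitrary examples. The operating system’s TCP stack performs the formal three-way greeting (SYN, SYN-ACK, ACK) on our behalf, and the application-level conversation that follows is just as formulaic.

```python
import socket

def polite_exchange(host="example.com", port=80):
    # connect() triggers TCP's fixed greeting ritual under the hood.
    with socket.create_connection((host, port), timeout=5) as s:
        # A stiffly formal request: protocol as etiquette.
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        return s.recv(1024)  # the peer's equally formulaic reply

print(polite_exchange()[:80])
```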

But the real deal will come when the artificial agents get more sophisticated (i.e. DNNs) and their communication bandwidth increases.

The final step will be reached when the agents start to create mental self-models, and also models of the other agents they communicate with. The gates of social psychology, sociology and politics will then be pried wide open for our machine comrades.

Future of hard science is soft science?

Will your AI team soon have to hire a machine psychologist? Maybe so.
It is fascinating that the hardest of hard fields – mathematics, statistics, AI research, software engineering – converges, in the area of AI, on the methods of soft science.

Soft sciences, mind you, not humanities.

Turning consumer materialism into mindfulness using its own weapons

Summary

Using implementation intentions, we can create a distributed meditation practice. With a bit of semiotic Aikido, we’ll then turn corporate logos into micro-meditation triggers. It will make more sense when you read it, I promise.

Benefits of meditation

Currently, there is a heaping mountain of evidence for the benefits of meditation. Instead of a careful bibliography, I’ll just lazily point in the general direction of Wikipedia. While the recommended daily dose is about 20 minutes of sitting mindfulness meditation, this remains an elusive goal for many.

Several authors have come up with different, more portable versions of mindfulness practice (e.g. the 2-minute meditation or the “Mindful minute”), either as replacements for sitting meditation or as additions to the main daily practice.

Micro-practice and distributed meditation

The limiting case of this minimization trend is what I call the “Just one breath” practice[1]. On various occasions during your day, take one deep breath – in and out – paying full, undivided, non-judgmental attention to it and to your bodily sensations.

Just one single breath – that is not asking too much, right?

The aim of this micro-practice is to refocus the mind, decrease stress and bring the attention back to momentary, mindful experiencing. We try to insert as many mindful breaths into our day as possible, in the hope of spreading the benefits of meditation throughout the whole day. Paying attention to many individual breaths during your day creates a distributed meditation!

The programmable self

The only tricky thing is to actually remember to do these mindful breaths in the maelstrom of your daily life. That is where “Implementation Intention” comes into play.

An implementation intention is a psychological if-then rule with which you condition yourself to execute a certain desired behavior when a simple trigger occurs in your environment. An example: “if I come home, then I will do 5 push-ups”.

The idea behind it is that willpower is a very scarce resource (to a first approximation, linked to glucose levels in your neocortex). You increase the chance of creating a desired habit if you replace the need for willpower with an automated reaction to certain triggers.
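
Since this section is about programming the self, a literal rendition in Python may help: the whole point is that the trigger-to-action mapping is pre-committed, so no deliberation (willpower) is spent at decision time. The trigger names are, of course, just illustrative.

```python
# Pre-committed if-then rules: trigger -> desired behavior.
IMPLEMENTATION_INTENTIONS = {
    "I come home": "do 5 push-ups",
    "I finish brushing my teeth": "meditate for 20 minutes",
}

def on_trigger(event: str) -> str:
    """No decision-making at runtime: just look up the pre-committed
    response, defaulting to business as usual."""
    return IMPLEMENTATION_INTENTIONS.get(event, "carry on as before")

print(on_trigger("I come home"))  # -> do 5 push-ups
```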

For more details on implementation intention triggers see for example this review paper.

The next catch is where to find good triggers. The standard trigger classes are time, place, emotional state and your own reactions. But can we come up with something more fun?

The marks of the devil

Is there something in our daily environment that is ever-present and glaringly obvious? Something that could function as a good visual cue for our micro-mindfulness practice?

Fortunately, the gentle, kindly giants – our multinational corporations – have spared no expense or focus group to develop excellent visual triggers and to plaster our environment (both off- and online) with them. We’re so lucky!

The marks of the devil. Or Pavlovian cues to a more spiritual life? I always confuse the two.

We can now build triggers such as “If I see the Apple logo, I take a mindful breath”, “If I see a McDonald’s M, I take a mindful breath”, etc.


You can even use the breath to mindfully examine your undesired cravings for a burger or a shiny gadget!

With this semiotic Aikido, we repurpose the very symbols of consumer materialism to more mindful, compassionate ends! High-tech corporate memes used as execution hooks for spiritual memes over 2000 years old.

This is definitely a flower-child/hippy idea, but I sort of like the irreverence of it. Or take it as an anti-consumerist performance-art project! 🙂

Additional notes

1. What about creating triggers for the logo of your cigarette brand? When you reach for another cigarette and notice the logo on the pack, you’ll be compelled to take a mindful breath before lighting up. This creates an intention gap that increases the chance of letting go of, or transforming, your impulse. Worst case, use it to smoke mindfully.


2. Other obvious iconography that can be used as visual triggers are traffic signs – though I obviously do not recommend this, especially not for drivers. 🙂

3. Of course, you can use implementation intentions to create a trigger for your standard practice, e.g. “If I finish brushing my teeth in the morning, then I go meditate for 20 minutes”.

4. We might return to this topic in the future, to discuss how to use another trick of cognitive psychology – spaced repetition – to efficiently implement mental triggers in our wetware (brains).

[1] The term comes up in different connotations. I believe I heard somebody use it in the same context as here, but I can’t seem to find the reference. Back to the text.