Startup culture as an ideology

Disclaimer: 

This article only discusses the “mechanics” of startup culture. There is no value judgement or any implied relation to the political ideologies mentioned or alluded to below. I think this should be obvious, but you never know…

I.

Most participants, twenty-somethings from all corners of Europe, never experienced the old era themselves. Political history wasn’t a big conversation topic. Yet the shadow of the region’s complicated ideological heritage could be felt here and there in the background of the startup scrum in Poland. The very first presentation of the meeting even mentioned Lech Wałęsa, though I assume the name was unknown to most.


For my generation and the younger ones, who didn’t experience the previous regimes first hand, these remain a strange enigma. They are present in the old buildings of our cities, in jokes, stories, the occasional TV documentary, or old movies.

There is also a sense of bewilderment. How could reasonable people believe in mere ideas and, even more importantly, act on them? Can we nowadays, in the safe and developed West, understand the allure of ideologies on the gut (alief) level?

II.

It would be naive to think that we are no longer submerged in ideologies on a daily basis. However, most of them feel more subdued, more passive. The most pervasive (and least noticeable) one shapes our consumer behavior and everything that enables it. This makes the broad “consumerist capitalism” memeplex particularly dangerous, but also not visceral enough for a demonstration on the alief level.

Political ideology in the Western world is similarly in stealth mode. Most political action nowadays consists of reposting image macros on Facebook or engaging in the occasional flamewar. Not really the blood and sweat of bygone eras.

Religion? Not for most of the young generation of developed countries.


III.

That is what is striking about “startup culture”. It captures the hearts and minds of the young, gives them dreams and hopes, gives them a struggle and an enemy. There are revered thinkers, books that everybody should read and most can quote. Visionary pitches, lectures and essays. Memes and symbolism.

Details can be discussed and there are variations here and there, but the core message is usually intact.

And it doesn’t end with ideas.

It motivates vigorous action, endless hours of hard work, enduring months and years of hardship. All in the name of a greater good: creating value for the customer.

The end is noble and the means are obviously correct.


IV.

You are twenty-something and wonder how reasonable people could have been in the throes of ideologies like communism and national socialism.

Now feel the rush of adrenaline, the excitement, the call for action, when you think about the world of startups. Feel the future you can build, the good you can create, the reward you can reap.

This is what an ideology feels like.



We have stumbled into the era of machine psychology

Emergent science in an emergent world

When describing complex emergent systems, science has to switch from lower-level to higher-level descriptions. Here is a typical chain of such transitions:

  1. We go from physics to chemistry when we encounter complex arrangements of large numbers of atoms/molecules.
  2. Complex chemistry in living systems is then described in terms of biology.
  3. Complex (neuro-)biology in human brains finally gives rise to the field of psychology.

Of course, the world is not discrete and the transitions between the fields are fuzzy (think of the chemistry-biology shoreline of bio-macromolecules and cytology). And yes, the (mostly infertile) philosophical wars over the ontology of emergence are still being waged. Yet nobody would deny the epistemic usefulness of higher-level descriptions. Every transition to a higher-order description brings its own ‘language’ for describing the object, as well as a suite of research methods.

In this game, it is however very easy to miss the forest (high-level) for the trees (low-level). One interesting example I’ve noticed recently is in the field of machine learning. When studying deep neural networks (DNNs), we have already unknowingly stumbled into such a transition. Historically, most of the research has been done on the “biology” of the DNNs: the architecture of the networks, activation functions, training algorithms etc. (And yes, saying “biology” is bio-chauvinistic on my part. We should find a better word!)

Recently, however, we have been tapping more and more into the “psychology” of neural networks.

Machine psychology

The deep architectures now in use aren’t anywhere near the complexity of human brains, yet. However, with connections numbering in the billions (here is an early example), they are too complex, too opaque, for a low-level description to suffice for their understanding. This has led to a steady influx of research strategies that shift the approach from the bottom-up understanding of “machine biology” to a more top-down, “input-output” strategy typical of psychology.

Of course, neural networks are commonly, though not quite deservedly, described as “black boxes”. And historically, parts of psychology had their flirtations with cybernetics. But it is only recently that we see a curious methodological convergence between the two fields, as machine learning starts to adopt the methods of psychology.

An interesting distinction between machine and human psychology is that we have direct access to the “brain” states of the network (the inputs and activation of each neuron). With machine psychology, we are now shifting attention to their “mental” states, something accessible only with higher-order, indirect methods.

Psychology of machine perception

A first example of the convergence comes from the psychology of perception.

Deep neural networks have revolutionized the field of computer vision by crushing competing approaches in all benchmarks (see e.g. last year’s ImageNet competition). Yet a deeper intuition for how DNNs actually solve image classification requires techniques similar to those used in the psychology of perception.

As an example: recently, an “input-output” strategy yielded an attack on neural network image classification, developed by Szegedy et al. 2013. In this work, they took correctly classified images and modified them imperceptibly, so that the trained network got completely confused (see Fig 1a below). While on the surface such a confusion seems alarming, one should just remind oneself of the many quirks of the human visual cortex (Fig 1b).


Fig 1a: Example from Szegedy et al. 2013: the image on the left is correctly classified by a neural net as a school bus. The imperceptibly modified image on the right is, however, classified as an ostrich. The middle panel shows the pixel difference between the two images, magnified 10x.


Fig 1b: Your visual cortex classifies the colors of fields A and B as distinct. They are the same.
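To make the attack in Fig 1a concrete, here is a minimal sketch of a gradient-based perturbation in the same spirit. Szegedy et al. actually used a box-constrained L-BFGS search; the simpler one-step “fast gradient sign” variant below comes from follow-up work and is shown purely for illustration, assuming a PyTorch classifier that outputs logits.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.01):
    """One-step 'fast gradient sign' perturbation (a sketch).

    The idea: nudge every pixel slightly in the direction that
    increases the classification loss, staying imperceptible.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # model assumed to output logits
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```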

Nguyen et al. 2014 then turned this game around and used genetic algorithms to purposely evolve abstract images that well-trained neural networks mistake for real objects. Again, examples for a DNN and the human visual cortex are below (Fig. 2a and 2b).


Fig 2a: Image evolved so that a neural network misclassifies it as a guitar.


Fig 2b: An image of a lamp misclassified by your dirty, dirty mind.
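A toy version of the evolution step might look like the sketch below: a simple random-mutation hill climber standing in for the full genetic algorithm with a compositional image encoding that Nguyen et al. actually used. The `confidence_fn` is a hypothetical callback mapping an image array to the classifier’s confidence for the target class.

```python
import numpy as np

def evolve_fooling_image(confidence_fn, shape=(64, 64), steps=5000, seed=0):
    """Evolve an abstract image that a classifier labels with high confidence."""
    rng = np.random.default_rng(seed)
    image = rng.random(shape)                        # start from pure noise
    best = confidence_fn(image)
    for _ in range(steps):
        mutant = np.clip(image + rng.normal(0.0, 0.1, shape), 0.0, 1.0)
        score = confidence_fn(mutant)
        if score > best:                             # keep mutations that fool more
            image, best = mutant, score
    return image, best
```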

Gestalt psychology for hierarchical feature extraction?

These confounding attacks on classifiers are very important, since deep neural nets are increasingly being employed in the real world. A better understanding of machine perception is required to make the algorithms more robust and to avoid fraud (some examples here).

The reason image classification works so well with deep architectures is their ability to automatically extract hierarchies of features from images. Making them more robust to attacks requires better integration of these hierarchies into “global wholes”, well summarized by Kurt Koffka’s gestalt psychology mantra, “The whole is other than the sum of the parts” (not “The whole is greater than the sum of its parts”).
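For readers unfamiliar with these architectures, here is a minimal sketch of such a feature hierarchy: a stack of convolutional layers, each aggregating the previous layer’s features over a wider receptive field. The layer sizes are arbitrary choices for illustration, not any particular published network.

```python
import torch.nn as nn

# Each convolutional stage pools features from the stage below over a wider
# receptive field, building the parts-to-wholes hierarchy described above.
hierarchical_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edges, blobs
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # textures, motifs
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                                       # whole-object classes
)
```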

Psychometrics of neural networks

The cross-fertilization of machine learning by psychology doesn’t end with perception theory.

The measurement of psychological traits is the bread and butter of psychometrics, and its crown jewel is of course intelligence testing. This is even more salient for the field of artificial intelligence. In an early example, Wang et al. 2015 recently made headlines (e.g. here) by claiming to beat the average Amazon Mechanical Turk performance on a verbal IQ test.

Oddly enough, I haven’t yet found a reference using deep nets on Raven’s progressive matrices. This seems like a very obvious application for deep networks, as Raven’s matrices are small, high-contrast images whose successful solution requires extracting multi-level hierarchies of features. I expect that DNNs should very soon blow humans out of the water on this test.

Raven’s matrices are the go-to test for human intelligence, with a g-loading around 0.8 and virtually no cultural bias. Such an experiment would likely show the nets achieving IQ 200+, a very vivid illustration of the relationship between proxies for g and the actual “general intelligence” that is the holy grail of artificial general intelligence (AGI) research.

Here, then, is a nice summer project: put together a DNN for solving Raven’s matrices. I even recall a paper on machine generation of test examples, so enough training data will not be a problem!

Deep nets and Raven’s progressive matrices are made for each other.
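As a rough sketch of how one might frame the project, the hypothetical model below stacks the eight context panels of a 3x3 matrix with one candidate answer as input channels, and learns to score how well each candidate completes the pattern. This is my own illustrative framing, not an established architecture.

```python
import torch
import torch.nn as nn

class RavenSolver(nn.Module):
    """Hypothetical sketch: score each candidate answer for a Raven-style
    matrix with the last cell missing. The 8 context panels plus 1 candidate
    are stacked as the 9 input channels of a small convolutional scorer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(64, 1)  # higher = candidate fits the pattern

    def forward(self, context, candidates):
        # context: (B, 8, H, W); candidates: (B, K, H, W)
        scores = [
            self.score(self.encoder(torch.cat([context, candidates[:, k:k + 1]], dim=1)))
            for k in range(candidates.shape[1])
        ]
        return torch.cat(scores, dim=1)  # (B, K); argmax picks the answer
```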

Machine psychotherapy, creativity and aesthetics

On a joking note: if there is machine psychology, could there also be machine psychotherapy? How could a venerable Freudian help his DNN clients?

There are some very playful examples done with generative models (based on recurrent deep networks), e.g. text generation à la Shakespeare/Graham/Wikipedia. A machine therapist will definitely be able to use the good old tools of word-association games and automatic writing to diagnose whatever turns out to be the digital equivalent of the Oedipus complex in his machine patients.
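For the curious, a character-level recurrent language model of the kind behind those generators can be sketched in a few lines; the sizes and the sampling temperature below are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Minimal character-level language model (sketch): embed, LSTM, logits."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.head(h), state

def automatic_writing(model, start_token, length=200, temperature=0.8):
    """Sample one character at a time: the machine's 'automatic writing'."""
    token = torch.tensor([[start_token]])
    state, out = None, [start_token]
    for _ in range(length):
        logits, state = model(token, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=0)
        token = torch.multinomial(probs, 1).view(1, 1)
        out.append(int(token))
    return out  # token ids; map back to characters with your vocabulary
```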


Did you dream about electric sheep again, Mr. Android?

Even the good old cliché of dream interpretation can be brought out of retirement. Geoffrey Hinton spoke about machine dreams a long time ago. And psychologists are already picking up on this:

One of the areas that I’ve been looking at recently is machine dreaming, the question whether AI systems are already dreaming. There’s little question that they meet our criteria for what a dream is, they meet all our definitional criteria. There’s better evidence really that machines, AI systems, are dreaming, than there is that animals are dreaming that are not human.

— Associate Professor of Psychology, James Pagel on the “All in the Mind” podcast.

The excellent paper by Google researchers, Inceptionism: Going Deeper into Neural Networks, shows beautiful demonstrations of DNN fantasies, dreams and pareidolia. The psychology of the digital psychedelic experience is close too.


What deep neural nets actually dream about.
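The core trick behind those images can be sketched as gradient ascent on the input: instead of adjusting weights to reduce a loss, we adjust pixels to amplify whatever a chosen layer already responds to. The `get_activation` helper below is hypothetical (in practice one registers a forward hook), and the real inceptionism recipe adds multi-scale “octaves” and image smoothing.

```python
import torch

def dream_step(model, image, get_activation, lr=0.05):
    """One step of 'inceptionism'-style gradient ascent on the input image.

    `get_activation` is a hypothetical helper returning the activation
    tensor of a chosen layer for `image` (e.g. via a forward hook).
    """
    image = image.clone().detach().requires_grad_(True)
    activation = get_activation(model, image)
    activation.norm().backward()          # amplify what the layer already sees
    with torch.no_grad():
        # Normalized gradient step on the pixels, not the weights.
        image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()
```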

This section is of course tongue in cheek, but its aim is to illustrate that state-of-the-art DNNs can already achieve very rich “mental” states.

Sidenote: speaking of machine therapy, the other way around, i.e. machines being therapists to humans, is a promising field of research. Indeed, they seem to have come a long way since the command-line therapist and `M-x doctor` (for the Emacs fans out there).

Machine ethology. Machine sociology. Machine etiquette. Machine politics.

Machines are already talking to each other a great deal: think of the internet, communication networks, or the budding Internet of Things. For now, the conversation is only between agents of low sophistication using simple, rigid protocols. We could perhaps already talk about machine ethology, maybe even a nascent sociology. TCP/IP is an example of simple machine etiquette.
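That etiquette in miniature: the sketch below shows a client observing the rigid conversational protocol of HTTP over TCP: connect, speak in turn, say goodbye.

```python
import socket

# A rigid machine 'etiquette': greet (connect/handshake), speak in turn
# (send/recv), and close the conversation politely.
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := conn.recv(4096):
        reply += chunk

print(reply.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```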

But the real deal will come when the artificial agents get more sophisticated (e.g. DNN-based) and their communication bandwidth increases.

The final step will be reached when the agents start to create mental models of themselves and of the other agents they are communicating with. The gates of social psychology, sociology and politics will then be pried wide open for our machine comrades.

Future of hard science is soft science?

Will your AI team soon have to hire a machine psychologist? Maybe so.
It is fascinating that the hardest of hard fields (mathematics, statistics, AI research, software engineering) converges, in the area of AI, on methods from the soft sciences.

Soft sciences, mind you, not humanities.