Possibility of an injection attack on self-driving deep neural networks

Image-processing deep neural networks can be catastrophically confounded by imperceptibly small perturbations of the input image, as demonstrated by Szegedy et al. (2013).
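Szegedy et al. found these perturbations via box-constrained L-BFGS; the later fast gradient sign method (Goodfellow et al. 2014) makes the same idea concrete in a few lines. A minimal PyTorch sketch, assuming `model` is a differentiable classifier over images scaled to [0, 1] (the function name and parameters here are illustrative, not from either paper):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    """One signed-gradient step that increases the classification loss
    (fast gradient sign method). eps bounds the per-pixel change, so
    the perturbation stays imperceptibly small."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move every pixel by +/- eps in the direction that hurts the model most.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```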

Nguyen et al. (2014) used genetic algorithms to purposely evolve abstract images that well-trained neural networks confidently classify as real objects (a sketch of the idea follows the figure below).

[Figure: guitar_classification]

Nguyen et al. (2014): an image evolved so that a neural network misclassifies it as a guitar. A “swerve left” command could in principle be evolved in a similar way.
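For flavor, here is a drastically simplified version of that evolutionary search. Nguyen et al. used a full genetic algorithm with indirect image encodings; this sketch is just a greedy mutation loop that keeps whichever random perturbation raises the network's confidence in the target class (say, "guitar"), with all parameters picked arbitrarily:

```python
import torch

@torch.no_grad()
def evolve_image(model, target_class, steps=1000, pop=32, sigma=0.1):
    """Greedy evolutionary search: mutate a population of candidate
    images and keep the one the classifier rates most 'guitar-like'."""
    best = torch.rand(1, 3, 224, 224)  # start from random noise
    best_score = model(best).softmax(-1)[0, target_class]
    for _ in range(steps):
        noise = sigma * torch.randn(pop, 3, 224, 224)
        candidates = (best + noise).clamp(0.0, 1.0)
        scores = model(candidates).softmax(-1)[:, target_class]
        i = scores.argmax()
        if scores[i] > best_score:  # survival of the most confounding
            best, best_score = candidates[i : i + 1], scores[i]
    return best, best_score
```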


Using these techniques, it could in principle be possible to construct artificial images (or video sequences) which, when injected into the visual field of a self-driving car, could cause unwanted, possibly dangerous behavior (such as a sudden swerve into opposing traffic).

It is theoretically possible (but likely very hard in practice) to create adversarial images that would have the same catastrophic effect even when covering only part of the visual field of the car (e.g. by holding up a printout of such an image at the roadside).
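One way such a partial-field attack could be set up is plain gradient ascent on a small patch that is pasted into a corner of training images. To be clear, this is a speculative sketch, not a method from the papers above; the pasting location, sizes and rates are arbitrary assumptions:

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, images, target_class, size=50, steps=200, lr=0.05):
    """Optimize a small square patch so that, pasted into a corner of
    the input images, it pushes the classifier toward target_class."""
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    labels = torch.full((len(images),), target_class, dtype=torch.long)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        x[:, :, :size, :size] = patch.clamp(0.0, 1.0)  # paste patch top-left
        loss = F.cross_entropy(model(x), labels)       # push toward target
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0.0, 1.0)
```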

Speaking of injections: an older, “fun” idea is the SQL injection on licence plates as a way to mess with automated traffic surveillance systems (the plate gets OCR-ed and written into a database, which can trigger a DROP TABLE if unguarded). This is a special case of an injection attack: the adversarial data payload is a code snippet (a so-called “code injection”).
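To make the failure mode concrete, here is a minimal sqlite3 sketch of the unguarded and the guarded pattern (the plate string is of course a made-up, “Bobby Tables”-style payload):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT)")

# Hypothetical OCR output from a malicious plate.
plate = "ZU 0666'); DROP TABLE plates;--"

# Unguarded: the OCR text is spliced straight into the SQL string, and
# executescript() runs the smuggled DROP TABLE as a second statement.
conn.executescript(f"INSERT INTO plates VALUES ('{plate}');")
print(conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall())  # []

# Guarded: a parameterized query treats the payload as data, not code.
conn.execute("CREATE TABLE plates (plate TEXT)")
conn.execute("INSERT INTO plates VALUES (?)", (plate,))
print(conn.execute("SELECT plate FROM plates").fetchone())
```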

[Figure: licenceplatecamerasqlinjection]

A “licence plate” carrying an SQL injection payload as a way to fight back against traffic cameras.

(I also discuss the “psychology” of deep learning networks here.)