My work in electronic music and instrument building often involves aspects of writing – or hacking apart – computer code, and programming small embedded microprocessors. This kinetic sculpture is the result of experiments in robotic motion control and deep-learning-era computer vision algorithms.
The piece uses a camera coupled to a computer vision algorithm that identifies the most conspicuous, attention-grabbing element in the visual field (Itti et al., IEEE PAMI, 1998). Camera input is converted into five parallel feature streams – color, intensity, motion, orientation, and flicker – which are visible at the bottom left of the screen. These streams are then weighted and combined into a saliency map, visible on the right side of the screen. The robot arm is programmed to move towards the most salient object, identified by the green ring on the screen on the left.
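To give a sense of how such a saliency pipeline works, here is a minimal sketch in Python with NumPy. It is not the code running in the sculpture – it is a simplified, static version of the Itti-style approach: a few feature maps (intensity plus red-green and blue-yellow color opponency, omitting motion, orientation, and flicker), center-surround contrast computed as the difference between a fine and a coarse blur, and a weighted sum giving the saliency map whose peak would be the arm's target. All function names here are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of a 2-D array with an odd kernel size k."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, img)
    img = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return img

def saliency_map(rgb):
    """Toy Itti-style saliency: per-feature center-surround contrast, then average.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    features = [
        (r + g + b) / 3.0,   # intensity
        r - g,               # red-green opponency
        b - (r + g) / 2.0,   # blue-yellow opponency
    ]
    maps = []
    for f in features:
        center = box_blur(f, 3)      # fine scale
        surround = box_blur(f, 15)   # coarse scale
        contrast = np.abs(center - surround)
        rng = contrast.max() - contrast.min()
        # normalize each feature map to [0, 1] before combining
        maps.append((contrast - contrast.min()) / rng if rng > 0 else contrast * 0.0)
    return sum(maps) / len(maps)

def most_salient_point(sal):
    """Return the (row, col) of the saliency peak – the robot's target."""
    return np.unravel_index(np.argmax(sal), sal.shape)

# A gray frame with one red patch: the patch should win.
frame = np.full((64, 64, 3), 0.5)
frame[20:28, 40:48] = [1.0, 0.0, 0.0]
y, x = most_salient_point(saliency_map(frame))
```

In the real model each feature is computed across a multi-scale pyramid and the maps compete through normalization before a winner-take-all stage; the two-blur contrast above stands in for that whole machinery.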
This simple coupling – computer vision and movement – generates eerily lifelike behaviors that are often delightfully unpredictable.
Below is some documentation of early prototypes.