BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME)
 


BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME) is an exploration in collaboration with the Bell Labs Experiments in Art and Technology program, with creative technologists Ethan Edwards and Danielle Mcphatter.

The experiment involves using a prototype device called an eyebud (a Wi-Fi-connected wearable camera/headphone/speaker) as a prosthetic memory: anchoring data (speech-to-text transcripts) to objects that the eyebud can 'recognize' via a machine learning model fine-tuned on a small set of data. Both the training data and the speech-to-text are generated spontaneously, in real time.
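As a rough illustration of what fine-tuning a model on 'a small set of data' can look like, here is a minimal transfer-learning sketch in PyTorch: a network pretrained on ImageNet is frozen and only its final layer is retrained on frames captured by the camera. The folder layout, model choice, and hyperparameters are illustrative assumptions, not the actual prototype's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: frames captured by the wearable, sorted into one
# folder per object, e.g. frames/teapot/*.jpg, frames/window/*.jpg.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

# Start from a network pretrained on ImageNet and retrain only the final
# layer, so a handful of examples per object can be enough.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, len(dataset.classes))

optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "eyebud_model.pt")
```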

During this experiment, I first wore the eyebud to train the model to 'see': to classify the objects in my apartment in an idiosyncratic way. Once the model has been trained, whenever the eyebud recognizes a significant object, I am prompted to speak. The next time I wear the eyebud and an object is recognized, the last text is read out, and I add to it.
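The interaction loop can be sketched as a small anchoring routine: look up the transcripts stored under a recognized object's label, read the last one back, and append whatever is spoken next. In this hypothetical stand-in, print() and input() take the place of the eyebud's text-to-speech and speech-to-text, and the label would in practice come from the fine-tuned classifier above.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("eyebud_memory.json")  # hypothetical storage location

def load_memory() -> dict:
    """Transcripts anchored to object labels, persisted between sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def on_object_recognized(label: str) -> None:
    """Called when the classifier recognizes a significant object.

    Stand-ins: print() plays the role of the earbud's text-to-speech,
    input() plays the role of speech-to-text transcription.
    """
    memory = load_memory()
    entries = memory.setdefault(label, [])
    if entries:
        print(f"[{label}] last entry: {entries[-1]}")  # read the last text out
    new_text = input(f"[{label}] speak now: ")         # prompt for new speech
    if new_text:
        entries.append(new_text)                       # add to the anchored text
        MEMORY_FILE.write_text(json.dumps(memory, indent=2))

if __name__ == "__main__":
    # In the real device loop this label would come from the classifier;
    # here it is supplied by hand for illustration.
    on_object_recognized("teapot")
```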

A talk/performance about the process, hosted by Rhizome: https://vimeo.com/524396754/3057e52ec6