BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME)
 
https://vimeo.com/524396754/3057e52ec6

BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME) is a work-in-progress by artist Sarah Rothberg and Bell Labs creative technologists Ethan Edwards and Danielle McPhatter. They are using a prototype device called an eyebud (a Wi-Fi-connected wearable camera/headphone/speaker) to act as a prosthetic memory: anchoring data (speech-to-text transcripts) to objects the eyebud can 'recognize' via a machine learning model fine-tuned on a small dataset. Both the training data and the speech-to-text are generated spontaneously, in real time.
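
The project's own code is not public, but the pipeline described above maps onto a familiar few-shot pattern: start from a pretrained image classifier and retrain only its final layer on a handful of frames per object, so a small, spontaneously captured dataset is enough. Here is a minimal sketch of that idea, assuming a PyTorch/torchvision stack; the MobileNet backbone and every name here (NUM_OBJECTS, train_on_frames) are illustrative assumptions, not the project's actual code:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_OBJECTS = 5  # the small set of 'significant objects' chosen in performance

# Pretrained backbone; only the final layer is replaced and trained,
# so a few frames per object can adapt it in (near) real time.
model = models.mobilenet_v3_small(weights="DEFAULT")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, NUM_OBJECTS)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_on_frames(frames, label):
    """Fine-tune on a few PIL frames captured while looking at one object."""
    model.train()
    batch = torch.stack([preprocess(f) for f in frames])
    targets = torch.full((len(frames),), label, dtype=torch.long)
    optimizer.zero_grad()
    loss_fn(model(batch), targets).backward()
    optimizer.step()
```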

The first planned use of this pipeline is a durational performance and a resulting interactive experience. Rothberg first wears the eyebud to train the model to 'see.' Once the model is trained, whenever the eyebud recognizes a significant object, Rothberg is prompted to speak. The next time Rothberg wears the eyebud and that object is recognized, the last text is read out, and Rothberg adds to it. At the end of the performance, anyone can wear the eyebud and hear the accumulated texts played back when the eyebud 'sees' those objects.
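
The loop this implies is simple: on recognition, play back the last transcript anchored to that object, then record and transcribe a new one. A sketch under the same assumptions, with print and a lambda standing in for the eyebud's speaker and speech-to-text:

```python
from collections import defaultdict

memories = defaultdict(list)  # object label -> transcripts, oldest first

def on_recognition(label, speak, listen):
    """One step of the loop: read back the last text anchored to the
    recognized object, then transcribe a new utterance and append it."""
    if memories[label]:
        speak(memories[label][-1])
    memories[label].append(listen())

on_recognition("teapot", speak=print, listen=lambda: "it was my mother's")
on_recognition("teapot", speak=print, listen=lambda: "she never used it")
# The second call speaks "it was my mother's" before storing the new line.
```

The visitor mode at the end of the performance would be the same loop with the append step dropped: recognition triggers playback only.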