BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME)
 


BEING IN REAL-TIME (or: SEEING LIKE A MACHINE / A MACHINE SEEING LIKE ME) is an experimental research project about space, memory, and machine learning. It was made in collaboration with the Bell Labs Experiments in Art and Technology program and creative technologists Ethan Edwards and Danielle McPhatter, with support from rhizome.org.

The experiment involves working with a prototype device developed by Bell Labs called an eyebud (a wifi-connected wearable camera/headphone/speaker) as a prosthetic memory: anchoring data (speech-to-text) to objects that can be 'recognized' by the eyebud via a machine learning model fine-tuned on a small set of data.

Both the training data and the speech-to-text are generated spontaneously in real time. During this experiment, I first wore the eyebud to train the model to 'see', classifying the objects in my apartment in an idiosyncratic way. After the model has been trained, when the eyebud recognizes a significant object, I am prompted to speak. The next time I wear the eyebud and an object is recognized, the last text is read aloud, and I add to it.
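The anchoring loop described above can be sketched in a few lines. This is a minimal illustration only; the actual eyebud device and its software are not public, so all names here (the functions, the "kitchen table" label) are hypothetical stand-ins for the recognize-then-speak cycle.

```python
# Hypothetical sketch of the prosthetic-memory loop: speech-to-text
# fragments are anchored to object labels produced by the classifier.

memory = {}  # object label -> list of spoken notes (speech-to-text)

def on_object_recognized(label):
    """When the model recognizes a significant object, return the last
    note anchored to it (to be read aloud), or None on first encounter."""
    notes = memory.get(label, [])
    return notes[-1] if notes else None

def add_note(label, text):
    """Anchor a new speech-to-text fragment to the recognized object."""
    memory.setdefault(label, []).append(text)

# First session: nothing to read back, so the wearer speaks a first note.
assert on_object_recognized("kitchen table") is None
add_note("kitchen table", "this is where I wrote the first draft")

# Next session: the last note is read out, and the wearer adds to it.
assert on_object_recognized("kitchen table") == "this is where I wrote the first draft"
add_note("kitchen table", "and where I read it back a week later")
```

Each object thus accumulates a layered, spoken annotation over repeated encounters rather than a single fixed caption.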

This video documents a live performance that experimentally presents and demonstrates the research and its concept.



The work was also presented as part of a 2023 group show:

The Interface Between Humanity and the Universe v3.0, at the 2023 Intermedia Art Festival of SIMA, CAA (School of Intermedia Art, China Academy of Art).