NeuroVision is a web-based sandbox for shader-based video processing. You can use it to transform video streams from mobile devices, webcams, and video platforms into generative works of art. We will take you on a field trip and show you how to capture urban motion, colors, flows, and rhythms on video. You will learn how to process videos with neural networks and use the NeuroVision Sandbox as an artistic tool for your own video recordings.
You will use the OpenGL Shading Language inside the NeuroVision Sandbox. No special skills are required, but some coding literacy is recommended. For more information: http://www.perceptify.com/neurovision/
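To give a flavor of what shader-based video processing looks like, here is a minimal GLSL fragment shader that tints each video frame according to its brightness. This is only an illustrative sketch: the uniform and varying names (u_texture, u_time, v_texCoord) are assumptions, and the NeuroVision Sandbox's actual interface may differ.

```glsl
// Illustrative fragment shader -- names are assumptions,
// not the NeuroVision Sandbox's actual interface.
precision mediump float;

uniform sampler2D u_texture;  // current video frame
uniform float u_time;         // elapsed time in seconds
varying vec2 v_texCoord;      // texture coordinate from the vertex shader

void main() {
    vec4 color = texture2D(u_texture, v_texCoord);
    // Reduce the pixel to its luminance, then recolor it
    // with a palette that slowly cycles over time.
    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    vec3 tint = 0.5 + 0.5 * cos(u_time + luma * 6.2832 + vec3(0.0, 2.0, 4.0));
    gl_FragColor = vec4(luma * tint, color.a);
}
```

A shader like this runs once per pixel on the GPU, which is what makes real-time processing of full video streams feasible in the browser.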
Ursula Damm is a media artist and professor of Media Environments at the Bauhaus University in Weimar, where she is also involved in establishing the Digital Bauhaus Lab. She envisioned the NeuroVision Sandbox and uses it as part of her artistic process.
Martin Schneider is a freelancer with a background in media technology and cognitive science.
He works at Bitcraft Lab at the intersection of science, craft, and computation. In collaboration with Ursula Damm, he created the NeuroVision Sandbox as a tool for generative video processing.