Past FEAST Projects
This project creates sound and color analogs of people whose identities span multiple axes of physical, emotional, cultural, and social divergence. It considers how combinations of sound, color, and visuals can be used to present all of this data, including combinations that may be highly identifiable.
1001++ (Magical Technologies) is a series of artistic inquiries inspired by Arthur C. Clarke’s third law: “[a]ny sufficiently advanced technology is indistinguishable from magic.” This lens allows students on this UARTS FEAST project to re-examine folk narratives not as superstition, but to read their ‘magic’ as culturally aspirational desires for applied technologies (e.g., VR, machine learning, robotics, storytelling, choreography).
This UARTS FEAST project, "Picturing the Structure of Musical Spaces," will study and construct visual representations of music using mathematics. Drawing on scholarship that represents musical chords as points in geometric spaces, we will explore new ways of “picturing” these musical spaces by constructing visualizations of their structure, patterns, and symmetries.
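The idea of representing chords as points in a geometric space can be sketched minimally. The `chord_point` function below is an illustrative assumption, not the project's actual code: it canonicalizes a chord under octave and transposition equivalence by reducing pitches to pitch classes and taking the lexicographically smallest transposed rotation, so that any two transpositions of the same chord type land on the same point.

```python
def chord_point(pitches, mod=12):
    """Map a chord (MIDI pitch numbers) to a canonical point under
    octave and transposition equivalence: reduce to pitch classes,
    then pick the lexicographically smallest transposed rotation.
    This is one simple convention; published geometric models of
    musical chords use richer quotient spaces."""
    classes = sorted(set(p % mod for p in pitches))
    rotations = []
    for i in range(len(classes)):
        rot = classes[i:] + classes[:i]
        # Transpose this rotation so it starts at 0.
        rotations.append(tuple((c - rot[0]) % mod for c in rot))
    return min(rotations)

# A C-major triad and a G-major triad are transpositions of one
# another, so they map to the same point:
print(chord_point([60, 64, 67]))  # C E G -> (0, 3, 8)
print(chord_point([67, 71, 74]))  # G B D -> (0, 3, 8)
```

Points computed this way can then be plotted or compared to visualize the structure and symmetries the project describes.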
To create a healthier and more sustainable future, we need to make organic food less expensive to produce and thus more accessible to everyone. This UARTS FEAST project will explore ways to use robots to assist with organic farming and gardening tasks. The team will collaborate with the U-M Campus Farm and work with a real robot to perform agricultural tasks.
Sonic Scenographies is a research program catalyzing experimental collaboration at the intersection of performance, music, theater, dance, architecture, information science, engineering, and digital space. Participating students will experiment with XR tools and gaming engines in Taubman College's new XR lab, and work to develop a virtual platform which interrogates the digital sphere's impact on live performance and audience participation.
The team will explore how pervasive technologies are mediating the way people interact with their cities. The project seeks to make visible and transparent the complex yet critical issues around the use of computer vision and artificial intelligence (as in controversial programs like Detroit’s Project Greenlight and New York’s LinkNYC systems) in public and urban spaces as we build citizen-engaged, physical installations and interventions.
This team is developing an interactive sound installation that helps users learn the basics of coding. Utilizing research on embodied engagement with sound and critical improvisation studies, this installation will facilitate real-time audio feedback for users’ physical interactions with it. The code that facilitates these interactions will then be displayed, helping users understand the interactive potentials of coding.
Starting from an existing anatomical model, this team will print, patent, and market a trio of 3-D polymer objects: the already designed lung/diaphragm simulator, a polymer tongue, and a voice box/vocal folds simulator. The objects will be affordably reprintable and packaged in a "toolbox" that houses the anatomically correct parts, available for purchase by artists, academics, and physicians.
Following the inspiration of the meteorology community and Weather Underground, which connected backyard weather stations into the global weather system, this student team will deploy magnetometers and other sensors everywhere to create a dense distributed array, enabling new science and understanding of the Earth’s space environment.
This student team will work on a new edition of Telemann’s chorale book of 430 chorales. This will involve developing score-recognition technologies to automatically transcribe the 1730 edition into a machine-readable encoding, and a computational model that can generate the four parts of these chorales from that encoding.
Working with doctors at the Mayo Clinic Center for Sleep Medicine, the student team will explore creating techno tracks from at least four channels of raw polysomnogram data (EEG, pulse, oxygenation). The goal is to convert sleep data into engaging music, enabling sleep diagnostics that are both accurate and fun.
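A data-to-music conversion like the one described can be sketched as a simple parameter mapping. The channel names, value ranges, and musical scale below are illustrative assumptions, not the team's actual design; real polysomnogram channels would need filtering and artifact rejection before any mapping.

```python
def sonify_window(eeg, pulse, spo2, scale=(0, 2, 4, 7, 9)):
    """Hypothetical sketch: map one window of polysomnogram readings
    to musical parameters. EEG amplitude (µV) selects a pitch from a
    pentatonic scale, pulse (bpm) sets the tempo (clamped to a
    danceable range), and blood oxygenation (%) sets velocity
    (loudness)."""
    degree = int(abs(eeg)) % len(scale)
    pitch = 60 + scale[degree]               # MIDI note near middle C
    tempo = max(60, min(160, int(pulse)))    # clamp bpm to 60..160
    velocity = int(max(0.0, min(1.0, (spo2 - 80) / 20)) * 127)
    return {"pitch": pitch, "tempo": tempo, "velocity": velocity}

# One window of (made-up) readings:
print(sonify_window(13, 72, 97))
# {'pitch': 67, 'tempo': 72, 'velocity': 107}
```

Applying such a mapping window by window yields a note stream whose pitch, tempo, and loudness track the sleeper's physiology, which is the basic move behind turning the data into a techno track.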