Faculty Engineering/Arts Student Teams

Applications due October 18.

The project team will work collaboratively on a new multi-media artwork produced through printmaking, animation, and storytelling. The project seeks to visually stretch the boundaries of the analog and digital realms of art-making into a multi-media experience.
Inspired by the meteorology community and Weather Underground, which connected backyard weather stations into a global weather network, this student team will deploy magnetometers and other sensors to build a dense distributed array, enabling new science and a deeper understanding of the Earth’s space environment.
The student team will explore current participatory design theory and practice across ideation, fabrication, and production, and will test the pieces they develop to advance our understanding and application of participatory design.
This team will enable architecture students to translate and test spatial ideas in the design process through immersive technologies, using point clouds generated from photogrammetry and LiDAR. In addition to scanning and photogrammetry, the team will test design methodologies (experimenting with VFX and VR), create templates for workflow documentation, and establish a database for site scans and student projects.
The Designing Generative Justice project develops technologies that empower grassroots production by artisans in Detroit. The student team will actively engage with Detroit artisans: helping to develop prototypes that meet their needs, testing the technologies, and modifying them based on user feedback.
The project is called LuCelegans (Luce: light; Light-up C. elegans), or the Interactive Worm Project. A student research team will build the first interactive, physical, three-dimensional prototype of the C. elegans nervous system.
This student team will work on a new edition of Telemann’s 1730 chorale book of 430 chorales. This will involve developing score recognition technologies to automatically transcribe the 1730 edition into a machine-readable encoding, and a computational model that can generate the four parts of these chorales from that encoding.
The research project team will create physically and socially intelligent structures that facilitate cooperation and emotional release, while transcending the fixed expectations of architecture and infrastructure, thereby emboldening viewers to become participants.
The student team will develop better construction, testing, and shipping methods; create survey instruments and data collection strategies; and develop and test marketing materials for the ceramic filters.
The goal of this project is to explore methods of incorporating visual communication of effort, gesture, and movement into telematic performance without video transmission. Practical experiments with different sensing techniques, including infrared motion capture, inertial measurement, electromyography, and force sensing will be coupled with novel digitally fabricated mechatronic displays.
This team will make Korean art song (Gagok) more accessible to English-speaking students by locating scores of Korean-composed songs; creating English translations, phoneticizations, and spoken recordings of song texts; and organizing these materials into an accessible database.
This highly experiential, collaborative, and transformational working group will agitate for new thinking about how communities might thrive, survive, and re-imagine creativity in precarious times. The group will examine how, and whether, storytelling, design, and movement-based practices can bridge technological and artistic approaches in the alternative world defined by a coronavirus future.
The student team will be tasked with developing and evaluating task-specific programming language prototypes for use in integrating computing into high school and undergraduate classes.
Working with doctors at the Mayo Clinic Center for Sleep Medicine, the student team will explore creating techno tracks from at least four channels of raw polysomnogram data (EEG, pulse, oxygenation). The goal is to convert sleep data into engaging music, enabling sleep diagnostics that are both accurate and fun.