Automatic for the People: Crowd-driven Generative Scores Using Machine Vision and Manhattan

by Chris Nash

Published by Zenodo.

2021  

Abstract

This paper details a workshop and optional public installation based on the development of situational scores that combine music notation, AI, and code to create dynamic interactive art driven by the real-time movements of objects and people in a live scene, such as crowds on a public concourse. The approach presented here uses machine vision to process a video feed of the scene; detected objects and people are then fed into the Manhattan digital music notation, which integrates music editing and programming practices to support the creation of sophisticated musical scores combining static, algorithmic, and reactive musical parts.
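As a rough illustration of the pipeline the abstract describes (vision-derived detections driving musical parameters), the sketch below maps bounding boxes of detected people to note pitch and loudness. The detection step, frame size, pitch range, and mapping choices are all assumptions for illustration; the paper's actual mapping through the Manhattan notation is not reproduced here.

```python
# Hypothetical sketch: turn machine-vision detections into musical parameters.
# A detection is a bounding box (x, y, w, h) in pixels; in a real system such
# boxes would come from an object/person detector running on the live feed.

FRAME_W, FRAME_H = 640, 480       # assumed video frame size
PITCH_LO, PITCH_HI = 48, 84       # assumed playable MIDI range (C3..C6)

def detection_to_note(box):
    """Map one bounding box to a (pitch, velocity) pair:
    vertical position in frame -> pitch, box area -> loudness."""
    x, y, w, h = box
    cy = y + h / 2
    # Higher in the frame (smaller y) maps to a higher pitch.
    pitch = round(PITCH_LO + (1 - cy / FRAME_H) * (PITCH_HI - PITCH_LO))
    # Larger (closer) figures play louder, clamped to the MIDI maximum.
    area = (w * h) / (FRAME_W * FRAME_H)
    velocity = min(127, round(40 + area * 400))
    return pitch, velocity

if __name__ == "__main__":
    crowd = [(100, 300, 40, 90), (400, 60, 30, 60)]  # two sample detections
    for box in crowd:
        print(detection_to_note(box))
```

Each video frame would yield a fresh list of boxes, so the resulting note stream changes continuously as the crowd moves through the scene.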

Archived Files and Locations

application/pdf  4.8 MB
zenodo.org (repository)
web.archive.org (webarchive)
Type  article-journal
Stage   published
Date   2021-05-13
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: d0498366-3d85-41a0-b5d3-f2538f5f1fee