Automatic for the People: Crowd-driven Generative Scores Using Machine Vision and Manhattan
by Chris Nash
Published by Zenodo, 2021
Abstract
"This paper details a workshop and optional public installation based on the development of situational scores that combine music notation, AI, and code to create dynamic interactive art driven by the realtime movements of objects and people in a live scene, such as crowds on a public concourse. The approach presented here uses machine vision to process a video feed from a scene, from which detected objects and people are input to the Manhattan digital music notation, which integrates music editing and programming practices to support the creation of sophisticated musical scores that combine static, algorithmic, or reactive musical parts.
Archived Files and Locations
application/pdf 4.8 MB
zenodo.org (repository)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2021-05-13
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Cite This
Citation formats: BibTeX, CSL-JSON, MLA, Harvard
Lookup Links
oaDOI/unpaywall (OA fulltext)
Datacite Metadata (via API)
Worldcat
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar