Abstract: This paper outlines how to leverage the Web MIDI API and web technologies to convert numerical data in JavaScript into Most Significant Byte and Least Significant Byte pairs, stage the data as dual concurrent CC messages, use WebSockets to send it to multiple endpoints, and wire the browser to other music software. This method allows users to control their own native applications via 14-bit MIDI messaging, and even applications hosted on remote machines. Because the technology utilizes WebSockets, it is not reliant on local networks for connectivity, opening up the possibility of remote software control and collaboration anywhere in the world. While no shortage of options exists for controlling music software from the web, the Web MIDI API allows for a more streamlined end-user experience, as it links seamlessly to core OS MIDI functionality. The paper will share a use case of transmitting high-resolution MIDI through the browser and translating it to control voltage data for use with a modular synthesizer.
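The MSB/LSB split described above can be sketched as follows. The splitting arithmetic and the CC n / CC n+32 pairing follow the standard 14-bit MIDI CC convention; the channel, CC number, and value used in the usage comment are illustrative assumptions, not taken from the paper.

```javascript
// Split a 14-bit value (0-16383) into a 7-bit MSB/LSB pair:
// MSB = upper 7 bits, LSB = lower 7 bits.
function toMsbLsb(value) {
  const v = Math.max(0, Math.min(16383, Math.round(value)));
  return { msb: (v >> 7) & 0x7f, lsb: v & 0x7f };
}

// Stage the value as two concurrent CC messages: CC n carries the
// MSB and CC n+32 carries the LSB (the standard 14-bit CC pairing).
function stageCcPair(channel, cc, value) {
  const { msb, lsb } = toMsbLsb(value);
  const status = 0xb0 | (channel & 0x0f); // Control Change status byte
  return [
    [status, cc, msb],
    [status, cc + 32, lsb],
  ];
}

// Hypothetical usage with the Web MIDI API (browser only):
// navigator.requestMIDIAccess().then((access) => {
//   const out = access.outputs.values().next().value;
//   for (const msg of stageCcPair(0, 1, 12000)) out.send(msg);
// });
```

The same staged message arrays could equally be serialized and pushed over a WebSocket to remote endpoints, as the abstract describes.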
Abstract: Analog-digital hybrid electronic music systems once existed out of necessity in order to facilitate a flexible work environment for the creation of live computer music. As computational power increased with the development of faster microprocessors, the need for digital functionality with analog sound production decreased, with the computer becoming capable of handling both tasks. Given the exclusivity of these systems and the relatively short time they were in use, their possibilities were hardly explored. The work of José Vicente Asuar best demonstrated a push for the accessibility of such systems, but he never received institutional support to bring his machine widespread attention. Modeled after his approach, using a Commodore 64 (or a freely available emulator) and analog modular hardware, this paper aims to fashion a system that is accessible, affordable, easy to use, educational, and musically rich in nature.
Abstract: pyAMPACT (Python-based Automatic Music Performance Analysis and Comparison Toolkit) links symbolic and audio music representations to facilitate score-informed estimation of performance data in audio, as well as the general linking of symbolic representations and a variety of annotations to audio. pyAMPACT can read a range of symbolic formats and can output note-linked audio descriptors/performance data into MEI-formatted files. The audio analysis uses score alignment to calculate time-frequency regions of importance for each note in the symbolic representation, from which a range of parameters is estimated. These include tuning-, dynamics-, and timbre-related performance descriptors, with timing-related information available from the score alignment. Beyond performance data estimation, pyAMPACT also facilitates multi-modal investigations through its infrastructure for linking symbolic representations and annotations to audio.