Marina Ananias

BA in Computer Science and Music, Concentration in Neuroscience

My work investigates how computational musical systems can support expressive agency when human input is tentative, ambiguous, or still forming. I approach this through the design of constrained interfaces, adaptive feedback, and perceptually grounded interaction.
Project I

MIDI Songwriter

A constrained hardware musical interface for exploratory harmonic creation

How can a constrained, non-symbolic musical interface preserve authorship for users whose musical intent is still forming?

Goal: Investigate whether embedding harmonic structure directly into an instrument's control logic can support exploratory music-making without requiring symbolic music-theoretic knowledge.

How it was built: MIDI-Songwriter is a microcontroller-based hardware instrument that outputs MIDI-over-USB to external sound engines. Interaction is driven by discrete physical buttons, with visual feedback provided by RGB LEDs. Internally, the system is organized into distinct harmonic “pages,” each corresponding to a stylistic harmonic vocabulary (e.g., pop, blues, rock). Each page defines a bounded harmonic state space, restricting chord choices to a stylistically coherent grammar.
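
A minimal sketch of how such a page might be represented is shown below; the struct layout, chord voicings, and the sample “pop” page are illustrative assumptions, not the instrument's actual firmware data.

// Illustrative sketch: one way to encode a harmonic "page" as a bounded
// chord vocabulary plus an allowed-transition table. All names, voicings,
// and values here are hypothetical.
struct Chord {
  uint8_t notes[4];   // MIDI note numbers for the chord voicing
  uint8_t numNotes;
};

struct HarmonicPage {
  const Chord* chords;       // stylistic vocabulary for this page
  uint8_t numChords;
  const bool* transitionOk;  // numChords x numChords adjacency matrix
};

// Example "pop" page: I-V-vi-IV in C major, with every transition allowed.
const Chord POP_CHORDS[] = {
  {{60, 64, 67, 0}, 3},  // C  (I)
  {{67, 71, 74, 0}, 3},  // G  (V)
  {{69, 72, 76, 0}, 3},  // Am (vi)
  {{65, 69, 72, 0}, 3},  // F  (IV)
};
const bool POP_TRANSITIONS[4 * 4] = {
  true, true, true, true,
  true, true, true, true,
  true, true, true, true,
  true, true, true, true,
};
const HarmonicPage POP_PAGE = {POP_CHORDS, 4, POP_TRANSITIONS};

Whatever the exact representation, it is the page's transition table, rather than the user's theoretical knowledge, that keeps the output harmonically coherent no matter which button is pressed next.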

Implementation: Designed and fabricated a custom enclosure and tactile interface, and implemented embedded firmware in C++ on Arduino. The firmware manages real-time input processing, harmonic state transitions, MIDI event generation, looping functionality, and LED feedback. Because harmonic constraints are encoded directly into the system's state logic, the instrument ensures immediate musical coherence without exposing chord labels or theoretical abstractions to the user.
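
Continuing the page sketch above, a simplified version of that firmware loop might look like the following, assuming the Arduino MIDIUSB library for MIDI-over-USB; pin assignments are hypothetical, and debouncing, note-offs, looping, and LED control are omitted.

// Simplified firmware loop sketch, building on the HarmonicPage example
// above. Assumes the Arduino MIDIUSB library; pins and details are
// hypothetical.
#include <MIDIUSB.h>

const uint8_t CHORD_BUTTON_PINS[4] = {2, 3, 4, 5};  // one button per chord
uint8_t currentChord = 0;                           // index into the page
bool prevPressed[4] = {false, false, false, false};

void sendNoteOn(uint8_t pitch, uint8_t velocity) {
  midiEventPacket_t packet = {0x09, 0x90, pitch, velocity};  // channel 1
  MidiUSB.sendMIDI(packet);
}

void setup() {
  for (uint8_t i = 0; i < 4; i++) pinMode(CHORD_BUTTON_PINS[i], INPUT_PULLUP);
}

void loop() {
  for (uint8_t i = 0; i < 4; i++) {
    bool pressed = (digitalRead(CHORD_BUTTON_PINS[i]) == LOW);
    // Act only on a new press, and only if the page grammar allows moving
    // from the current chord to chord i.
    if (pressed && !prevPressed[i] &&
        POP_PAGE.transitionOk[currentChord * POP_PAGE.numChords + i]) {
      currentChord = i;
      const Chord& c = POP_PAGE.chords[currentChord];
      for (uint8_t n = 0; n < c.numNotes; n++) sendNoteOn(c.notes[n], 100);
      MidiUSB.flush();
      // RGB LED feedback for the newly active chord would be updated here.
    }
    prevPressed[i] = pressed;
  }
}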

C++ · Arduino · Analog Sensors · MIDI-over-USB · Hardware & Firmware Design
Final assembled hardware prototype
Laser-cut enclosure design
Wiring diagram (created in Fritzing) connecting sensors, LEDs, and the microcontroller

Challenge: The central challenge was balancing expressive freedom with harmonic constraint. Over-restricting the system risked reducing the user's sense of authorship, while under-restricting it led to harmonically incoherent output that increased cognitive load for novice users. Designing interaction that preserved agency while preventing musically invalid states required careful structuring of harmonic vocabularies and state transitions.

Improvement: Future iterations could investigate how users form internal mental models of constrained harmonic systems over time. Varying the visibility or flexibility of harmonic structure, or introducing adaptive constraint boundaries, could further clarify how different levels of constraint affect perceived authorship, learning, and long-term creative engagement.

Project II

Brainwaves to Sound

Rule-based sonification interface for exploratory interpretation of EEG data

How do listeners attribute meaning and agency when sound is generated from ambiguous biosignals without an explicit mapping?

Exploratory demo mapping EEG features to sound, used to study how listeners attribute meaning under ambiguous input

Goal: Investigate how listeners engage with biosignal-driven sound when interaction is limited to parameter selection and listening, rather than framing the system as neurofeedback or physiological decoding.

Python · EEG Processing · Timbre/Harmony Mapping

How it was built: Brainwaves to Sound is a browser-based system built in React and TypeScript that converts uploaded EEG data into short generative music outputs. Users upload EEG data files (CSV or EDF) or select a preset (e.g., “calm,” “focus”), and the system generates a fixed-duration audio segment rendered as WAV, with optional MIDI export. Internally, EEG data is processed by a rule-based audio generation engine that extracts frequency-band information and maps these values to musical parameters such as pitch range, rhythmic density, and velocity. Different presets configure distinct mapping behaviors, shaping the musical output without exposing the underlying signal-to-sound relationships to the user. The interface includes real-time waveform visualization, transport controls for playback, and a piano-roll style visualization of the generated sequence. Audio generation is performed asynchronously, and the resulting sound is presented as a complete artifact rather than a continuously interactive process.
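
As a rough illustration of the kind of rule-based mapping described, the sketch below turns relative band powers into a pitch range, rhythmic density, and velocity. It is written in C++ for consistency with the firmware sketches above, although the actual engine is implemented in TypeScript, and the specific bands, weights, and ranges are hypothetical.

// Conceptual sketch of a rule-based band-power-to-music mapping. The real
// engine is TypeScript; the formulas and constants below are hypothetical.
#include <algorithm>

struct BandPowers {        // relative power per EEG band, each in [0, 1]
  double delta, theta, alpha, beta;
};

struct MusicalParams {
  int lowestPitch;         // MIDI note bounding the bottom of the range
  int highestPitch;
  double notesPerSecond;   // rhythmic density
  int velocity;            // MIDI velocity, 1-127
};

MusicalParams mapBandsToMusic(const BandPowers& b) {
  MusicalParams p;
  // More beta shifts the range upward from C3; more alpha narrows it.
  p.lowestPitch  = 48 + static_cast<int>(12.0 * b.beta);
  p.highestPitch = p.lowestPitch + 12 + static_cast<int>(12.0 * (1.0 - b.alpha));
  // Beta-dominant input yields denser rhythms; delta-dominant input, sparser.
  p.notesPerSecond = std::clamp(0.5 + 4.0 * b.beta - 2.0 * b.delta, 0.5, 6.0);
  // Theta scales overall dynamics.
  p.velocity = std::clamp(static_cast<int>(40 + 80.0 * b.theta), 1, 127);
  return p;
}

Under this framing, the presets described above would amount to different constant sets plugged into the same mapping, which is one way the system can shape output without exposing the signal-to-sound relationships to the user.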

Challenge: A central challenge was designing a system that remained perceptually coherent and musically engaging despite limited user agency and non-interactive signal mapping. Because users cannot directly influence the sound in real time, the system risks being interpreted either as arbitrary or as falsely deterministic. Balancing rule-based musical structure with opaque mappings required careful constraint design to avoid producing output that felt either random or misleadingly authoritative. Additionally, aligning the interface language with the system's actual capabilities was important to avoid over-claiming real-time processing or physiological interpretation.

Improvement: Future iterations could introduce controlled interactivity, allowing users to influence specific musical parameters or selectively expose aspects of the mapping logic. Comparing fixed, non-interactive generation with partially controllable feedback would enable clearer investigation of how agency, authorship, and interpretability emerge in biosignal-driven musical systems.