🌏 Project URL ↗
https://amorphous-body-docs.e-kezia.com
🐈 GitHub ↗
https://github.com/ekkezia/midi-party-visualizer/tree/main/midi-graph
👀 Overview

A documentation site for the continuous captures taken at the Amorphous Body party, commissioned for the Asian Avant-Garde Film Festival (2024) at M+ Museum, Hong Kong.

📚 Tech Stack
Processing (Java)
Processing Video Library
OscP5
Next.js
Tailwind CSS
Supabase
πŸ–οΈ Description

MIDI-Graph visualizes live sound input as a continuously evolving field of images, where each camera-captured frame becomes a "note" on a pitch-amplitude grid. The system is inspired by the interface logic of MIDI editors in digital audio workstations; yet instead of rendering simple rectangular notes, it substitutes them with real-time video captures, transforming sound into a visual memory of the moment it was heard.
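
As a rough sketch of that substitution (illustrative only, not the project's actual code), a captured frame can be drawn in Processing exactly where a piano-roll editor would otherwise draw a note rectangle:

```java
// Illustrative snippet: the grid cell a MIDI editor would fill with a
// rectangular note is filled with a camera frame instead.
void drawNote(float x, float y, float w, float h, PImage frame) {
  if (frame == null) {
    rect(x, y, w, h);          // conventional piano-roll note
  } else {
    image(frame, x, y, w, h);  // MIDI-Graph: a time-bound capture of the scene
  }
}
```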

[Image: midigraph]

Horizontal scrolling on a super long capture of a 3-hour event



βš™οΈ Technical Overview

Developed in Processing, the program integrates OSC (Open Sound Control) input and live video capture:

- Pitch Detection → y-axis: Incoming MIDI pitch values (ranging from 36 to 83) are mapped vertically, determining the y-position of each captured video frame.

- Amplitude → x-axis: The loudness of the signal (amplitude) drives the x-position, simulating the duration and placement of a MIDI note on a timeline.

- Image Generation: The Processing Video library continuously captures frames from a webcam. Each image is resized and composited within a dynamic MIDI grid, replacing conventional note rectangles with real, time-bound textures.

- OSC Integration: Pitch and amplitude data are streamed via OscP5 and NetP5, allowing external audio software or devices to control the system.

- Temporal Logic: The canvas is refreshed once the x-axis is filled, similar to how a sequencer loops over time, producing rhythmic, recursive layers of visualized sound.
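
A minimal Processing sketch of this pipeline might look like the following. The OSC address patterns (/pitch, /amp), the listening port, the tile size, and the advance-and-wrap playhead are assumptions made for illustration, not the project's actual values:

```java
import oscP5.*;
import netP5.*;
import processing.video.*;

OscP5 osc;
Capture cam;

int   pitch   = 60;   // last received MIDI pitch (36–83)
float amp     = 0;    // last received amplitude (assumed 0–1)
float cursorX = 0;    // playhead position along the x-axis
int   tileW = 80, tileH = 60;

void setup() {
  size(1280, 480);
  background(0);
  osc = new OscP5(this, 12000);        // listen for incoming OSC messages
  cam = new Capture(this, 320, 240);   // default webcam
  cam.start();
}

// oscP5 delivers incoming messages here; pitch and amplitude update the state.
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/pitch")) pitch = msg.get(0).intValue();
  if (msg.checkAddrPattern("/amp"))   amp   = msg.get(0).floatValue();
}

void draw() {
  if (!cam.available()) return;
  cam.read();  // grab the latest webcam frame

  float y = map(pitch, 36, 83, height - tileH, 0);  // pitch -> vertical grid position
  cursorX += map(amp, 0, 1, 2, tileW);              // louder input pushes the "note" further along

  // Temporal logic: once the x-axis is filled, clear and loop like a sequencer.
  if (cursorX > width - tileW) {
    cursorX = 0;
    background(0);
  }

  image(cam, cursorX, y, tileW, tileH);  // the capture stands in for the note rectangle
}
```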

The project subverts the sterile abstraction of digital sound interfaces by reintroducing the body and environment into the signal. Instead of MIDI notes as synthetic representations, the system records the real scene of their occurrence, embedding human gestures, lighting, and atmosphere into the score.

Each sound becomes both audible and visible, situating musical data in lived space. The output is a generative collage where sound, movement, and vision loop together: an indexical sequencer that plays time itself.

[Image: panic-library-midigraph]

Captures of "Panic Library"

πŸ“ Notes

Special thanks to Aisha Causing, Michael, and Eunice Tsang

Elizabeth Kezia Widjaja © 2025 🙂