
CANTUS RAVE: an 800-year-old canon walks into a rave

A 3:48 Napkin Films demoscene cracktro. Pärt's mensuration canon as the rhythmic skeleton, Beethoven's Für Elise as the lead, recast at 140 BPM as festival DnB. Eight sections, frame-accurate sync between every kick / snare / bell / lead / chromatic / brass-fanfare event in the score and the visuals on screen. Plasma, copper bars, vector tunnel, kaleidoscope, mode-7 grid floor, mandala explosion. Key change up a perfect fourth at the climax with a 360° camera rotation. Plan 9 bunny crew dances through. Eight production passes.

I had been trying to write a four-minute rave that didn't wander out of key by minute two. Free composition kept producing the same failure: the second half lost the plot. The fix turned out to be older than the genre. Borrow.

CANTUS RAVE is a 3:48 demoscene cracktro built on Arvo Pärt's mensuration canon as the rhythmic skeleton — D–A–Bm–F♯m–G–D–G–A, four cycles, the same piece of public-domain craft that has held music together for forty years — with Beethoven's Für Elise transposed over it as the lead melody. Eight sections at 140 BPM. Quiet, build, drop, breakdown, drop, climax with a key change up a perfect fourth and a brass fanfare, bell tail.

Every kick, snare, bell toll, lead note, chromatic accidental, and brass blast in the score has a precomputed visual response in the animation. Sync is frame-accurate, within ±1 frame at 12 fps.

Watch CANTUS RAVE on YouTube. The engine is open source (GPL-3.0-or-later). The film is Creative Commons (CC BY-NC 4.0).

The shape

Through-composed at BPM 140. Eight sections of about 27 seconds each. Total runtime 3:48 (2,660 frames at 12 fps), plus optional NAPKIN FILMS bookend cards (intro: ascending A-minor bell sting; outro: descending A-minor resolve).

Time   Section             What it is
0:04   S1 — Stillness      Plasma + starfield. The canon arrives with no drums.
0:31   S2 — Bunny enters   Solo Plan 9 bunny, kick pattern hints.
0:59   S3 — Build          Copper bars, spectrum analyzer, distant wireframe.
1:26   S4 — DROP ONE       Tunnel, flow field, bunny crew, kick rings.
1:54   S5 — Breakdown      Wireframe solo. The canon walks alone.
2:21   S6 — DROP TWO       Kaleidoscope, fire, mode-7 grid floor.
2:49   S7 — CLIMAX         Key change up a perfect fourth, mandala explosion, brass fanfare, 360° camera rotation.
3:16   S8 — Bell tail      Desaturating fade. The bunny walks away.

The film does not pretend the canon is anything but what it is — eight bars of borrowed motion. It says: that motion is enough to support a rave, if you let the variation, the asymmetric phrasing, and a key change at the right moment carry the rest.

The thesis

I have written a few music videos at this point. The ones that worked carried a structural gift from somewhere older than electronic music. Ten Thousand Days leaned on Pachelbel's Canon. Carrier Wave used a 10-channel composition with deliberate panning, sidechain pump, and stacked drops to carry its cosmic anthem. Cantus Rave is the first one where I let the borrowed structure be the literal foundation — Pärt's canon underneath, Beethoven on top, and the EDM machinery (drums, supersaw bass, brass fanfare, sound waterfalls, mandala explosions) layered over both.

What I want to say with this is small: when free composition wanders, lean on the canon. Public-domain masterpieces beat custom melodies almost every time. The first chair lesson — backgrounds and foundations aren't a song; a song needs a star you can hum. The custom hook felt generic. Für Elise locked it in immediately.

The score — multi-channel ChipForge with HF-only character processing

The score is a multi-channel ChipForge composition. The thing I learned this round is about where character lives on the master bus. The honest answer turned out to be: nowhere.

Distortion, amp_sim, phaser, flanger, and tape saturation all sound great per-channel. On the master bus, even at vanishingly small mix levels (drive=0.03, mix=0.02), they leak audible grit when stacked, and once the loudness comp pushes +3 dB the floor becomes audible. The user (me, listening on monitors) heard it instantly. The fix:

Split the bus at 250 Hz. Apply character to the high band only. Sum.

hp_bands = [EQBand(freq_hz=250.0, gain_db=0.0, q=0.707, band_type="highpass")]
lp_bands = [EQBand(freq_hz=250.0, gain_db=0.0, q=0.707, band_type="lowpass")]
low_band = apply_parametric_eq(audio, lp_bands)
high_band = apply_parametric_eq(audio, hp_bands)
# character effects on high_band only
audio = low_band + high_band

The sub stays clean. The grit lives where it belongs — above the kick fundamental, below the cymbal sparkle, in the band that actually wants character.
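The same split can be sketched outside the engine. A minimal numpy stand-in, using a brick-wall FFT split instead of ChipForge's biquad EQ bands, with `tanh` as a stand-in character effect (everything here is illustrative, not the production code):

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
# test signal: a 60 Hz sub plus a 1 kHz tone
audio = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.25 * np.sin(2 * np.pi * 1000 * t)

def split_at(x, sample_rate, crossover_hz=250.0):
    # brick-wall FFT split for illustration; zero each band's bins
    # on the other side of the crossover, then invert back to time domain
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sample_rate)
    low = np.fft.irfft(np.where(freqs < crossover_hz, spec, 0), len(x))
    high = np.fft.irfft(np.where(freqs >= crossover_hz, spec, 0), len(x))
    return low, high

low, high = split_at(audio, sr)
gritted = np.tanh(3.0 * high)   # "character" applied to the high band only
master = low + gritted          # sum; the sub passes through untouched
```

The property that matters survives the sketch: the low band carries only the sub, so however hard the character drives the high band, the kick fundamental stays clean.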

A few other lessons that stayed:

Loudness without auto_master. The auto_master(genre="edm") helper is a black box that pushes the noise floor up by 8-10 dB to chase a target LUFS. The fix that worked: explicit apply_compressor(threshold_db=-14, ratio=3.0, makeup_db=3.0) before a HOT MasterBusConfig(limiter_ceiling_db=-0.1). Loudness is a craft choice, not an AI button.
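The explicit chain can be sketched in numpy. This is not the ChipForge implementation; it uses only the parameter values from the text, applies a static gain curve (no attack/release smoothing), and exists to show what "explicit compressor into a hot limiter" means:

```python
import numpy as np

def compress(x, threshold_db=-14.0, ratio=3.0, makeup_db=3.0):
    # static gain computer: level above threshold is reduced by the ratio,
    # then fixed makeup gain is applied
    level_db = 20.0 * np.log10(np.maximum(np.abs(x), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = makeup_db - over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def limit(x, ceiling_db=-0.1):
    # hard ceiling; a real limiter would use look-ahead gain reduction
    ceiling = 10.0 ** (ceiling_db / 20.0)
    return np.clip(x, -ceiling, ceiling)

mastered = limit(compress(np.array([1.0, -1.0, 0.1])))
```

Every number in the chain is a choice you made, which is the point: nothing silently chases a target LUFS.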

Sub floor at A2, not A1. A1 (27.5 Hz) is below most consumer headphones and generates subharmonic distortion that sounds like static. Default to A2 (55 Hz) or higher for sustained sub drones; reserve A1 for transient hits only.

Pan motion as a flavor, not a constant. Spatial automation that sweeps L↔R faster than ~0.3 Hz over long durations (especially with depth >0.5) made me physically dizzy when reviewing. Cap motion rates at 0.3 Hz. Halve depths in the outro.
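One way to make the cap structural rather than a reviewing-pass fix is a guard in the automation generator. The function name and shape below are illustrative, not ChipForge's pan API:

```python
import numpy as np

MAX_PAN_RATE_HZ = 0.3   # sweeps faster than this read as dizzying

def pan_lfo(duration_s, sr, rate_hz, depth):
    # sinusoidal L<->R automation curve in -depth..+depth,
    # with the sweep rate clamped at the comfort ceiling
    rate = min(rate_hz, MAX_PAN_RATE_HZ)
    t = np.arange(int(duration_s * sr)) / sr
    return depth * np.sin(2 * np.pi * rate * t)
```

A request for a 1 Hz sweep silently becomes a 0.3 Hz sweep, so no individual score can reintroduce the problem.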

Note-perfect visual sync

Every audio event has a precomputed visual response. The render pipeline builds five sets — KICK_FRAMES, SNARE_FRAMES, LEAD_EVENTS, CHROMATIC_FRAMES, BRASS_FANFARE_FRAMES — keyed off the audio score's exact patterns. Each frame's render does O(1) lookups to know what's firing.

BPM = 140
SCENE_FRAME_RATE = 12
STEP_TO_FRAME = 60.0 / BPM / 4 * SCENE_FRAME_RATE    # 1.286 frames per 16th-note step

def step_to_frame(s: int) -> int:
    return int(round(s * STEP_TO_FRAME))

KICK_FRAMES = set()
for sec_idx in SECTIONS_WITH_KICK:
    for bar in range(BARS_PER_SECTION):
        for step in KICK_PATTERN:        # [0, 10] — DnB skip-kick
            KICK_FRAMES.add(step_to_frame(...))  # absolute step index elided

Then in the render loop, kicks shake the camera, snares do a 1.04× zoom punch, chromatic notes (E♭5, G♯4, G♯5) trigger chromatic aberration, the climax key change at the start of section 7 fires a single full-frame inversion, and the brass fanfare at the end of section 7 rotates the camera through a full 360° while a geometric mandala explodes from the center of the screen.

This is not "music-reactive animation" in the usual sense — there is no analysis pass on the rendered audio. The visuals know what the score knows because both files share the same step → frame math.
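A self-contained sketch of that shared math. The constants beyond BPM and frame rate (16 steps per bar, 8 bars per section, two kick-carrying sections) are illustrative assumptions, not the production values:

```python
BPM, FPS = 140, 12
STEPS_PER_BAR, BARS_PER_SECTION = 16, 8          # assumed grid
STEP_TO_FRAME = 60.0 / BPM / 4 * FPS             # ~1.286 frames per 16th step

def step_to_frame(s: int) -> int:
    return int(round(s * STEP_TO_FRAME))

KICK_PATTERN = [0, 10]                           # DnB skip-kick, from the score
KICK_FRAMES = set()
for section in range(2):                         # kick-carrying sections (assumed)
    for bar in range(BARS_PER_SECTION):
        base = (section * BARS_PER_SECTION + bar) * STEPS_PER_BAR
        for step in KICK_PATTERN:
            KICK_FRAMES.add(step_to_frame(base + step))
```

Because the score module and the render module both derive frames from the same `STEP_TO_FRAME` constant, any event the audio places on step s lands on exactly the frame the visuals expect, with no analysis pass in between.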

Section recipe pattern

Per-section "active effects" sets — get_recipe(section) — make the visual arc legible in eight lines of code:

def get_recipe(section: int) -> set[str]:
    if section == 0: return {"plasma_subtle", "starfield_low"}
    if section == 1: return {"plasma_subtle", "starfield_low", "bunny_solo"}
    if section == 2: return {"plasma", "starfield", "copper_bars", "spectrum",
                              "wireframe_distant", "bunny_solo"}
    if section == 3: return {"plasma", "starfield", "tunnel", "wireframe",
                              "flow_field", "spectrum", "bunny_crew", "kick_rings"}
    # ...

Each frame's render checks the recipe with set membership. Adding or removing an effect from a section is a one-line edit. The same pattern would have saved time on Carrier Wave's 10-act per-frame logic. Adopting it for all multi-section films from here on.

The STAGE flag for fast iteration

STAGE = os.environ.get("CANTUS_STAGE", "skeleton") lets me render the skeleton (drums + sub + bell + V7 + V1 only) in ~30 seconds for fast feedback, then STAGE=full for the complete chain. Keeping this for every score from here.
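A minimal sketch of the flag; the channel names past the five skeleton voices are illustrative, not the real channel list:

```python
import os

# stage selection via environment variable, defaulting to the fast path
STAGE = os.environ.get("CANTUS_STAGE", "skeleton")

SKELETON = ["drums", "sub", "bell", "V7", "V1"]          # the five fast voices
FULL = SKELETON + ["supersaw", "brass", "lead", "pads"]  # assumed extras

channels = SKELETON if STAGE == "skeleton" else FULL
```

Usage: `CANTUS_STAGE=full python make_score.py` renders the complete chain; anything else (or nothing) renders the ~30-second skeleton.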

The dance module

characters/dance.py exists now — a small library of named clips (MOONWALK, BREAKDANCE_FREEZE, BALLET_PIROUETTE, MJ_LEAN, DISCO, CHEER) with eased keyframe interpolation per body part. The Plan 9 bunny crew uses it: each bunny gets a different clip, staggered start frames so they don't move in lockstep, and a beat bounce on every kick layered on top of the dance. The eased interpolation reads as "dance" instead of "stiff lerps between two poses." Linear lerp_pose is fine for beat-driven jumps; for a crowd dancing at 140 BPM through a four-minute build-and-drop arc, easing is the unlock.
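A sketch of eased per-part interpolation in the spirit of the module. The smoothstep curve and the pose-as-dict shape are assumptions for illustration, not the actual dance.py code:

```python
def ease_in_out(t: float) -> float:
    # smoothstep cubic: slow in, slow out, linear-ish through the middle
    return t * t * (3.0 - 2.0 * t)

def lerp_pose(a: dict, b: dict, t: float) -> dict:
    # interpolate each body-part angle with the eased parameter
    e = ease_in_out(t)
    return {part: a[part] + (b[part] - a[part]) * e for part in a}

# two keyframes of a hypothetical clip: arm and knee angles in degrees
rest = {"arm": 0.0, "knee": 10.0}
raise_up = {"arm": 90.0, "knee": 40.0}
mid = lerp_pose(rest, raise_up, 0.5)
```

Near the endpoints the eased curve moves much slower than a linear lerp, which is exactly the "settle into the pose" quality that reads as dance.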

Eight production passes

The film took eight passes to ship — within budget for a song-forward music video with a multi-channel ChipForge score (per ADR-006 the realistic budget for this category is 8-12).

  • v1 — establishing the structure. Pärt canon at 140 BPM, custom melody hook on lead. Felt monotonous.
  • v2-v3 — the first chair fix. Replaced custom hook with Für Elise transposed over the canon's chord changes. Locked it in.
  • v4-v5 — the static fix. Diagnosed the grit on the master bus. Implemented the 250 Hz band split for HF-only character processing.
  • v6 — variation in the fastest voice. V1 cycled four distinct patterns per section instead of one. Asymmetric phrasing — 7+1 bar phrases break the 4-bar grid.
  • v7 — climax key change up a perfect fourth, brass fanfare, mandala explosion, 360° rotation, full visual + audio buildup.
  • v8 — bell tail desaturation, pan-motion cap, NAPKIN FILMS bookend cards, final mix.

make_film.py --stages mix keeps each balance pass at ~6 seconds, so the count was tractable.

The stack

  • Animation: Python + PIL + numpy stick figures + dimensional overlay layers, 854×480 at 12 fps, 2,660 frames + bookend cards
  • Score: ChipForge numpy synthesis, no samples — multi-channel composition with sidechain pump, Skrillex-style wobble, supersaw bass, brass fanfare, bell harmonics, three sound-waterfall cascades, BPM 140
  • Mastering: 250 Hz HF/LF band split with character FX on the high band only, explicit -14 dB / 3:1 compressor + -0.1 dB limiter ceiling
  • Bookend: PIL-rendered cards (demoscene cracktro palette) + ffmpeg concat, A-minor bell stings ascending into the film and descending out
  • Direction: Claude Code, Opus 4.7, agent mode

No GPU. No stock footage. No licensed instrument samples.

The note about borrowing

CANTUS RAVE is the cleanest expression I've made of a small thesis: borrow from the canon when you don't yet know how to compose what you need. It is not a humility move. It is a craft move. Pärt knew how to write a canon that holds for forty minutes. Beethoven knew how to write a melody you remember after one listen. They did the work. Borrowing means letting their work hold while you do yours — the recasting, the genre transform, the visual sync, the character processing, the climax-key-change choreography that makes the borrowed pieces yours by the end.

The audit linter that gates every score now (chipforge-ai/scripts/audit_score.py) is the same idea applied to engine usage. Every existing score before this one was using ~15-20% of the engine's capability — not because of musical choices, but because nobody had been told the rest existed. CANTUS RAVE hit 100% completeness because the linter was the gate. Borrow first; let the gate enforce; recast freely.

Credits

Composed, animated, and produced by Joshua Ayson in collaboration with AI. Made by Organic Arts LLC, Nevada.

Foundation: Arvo Pärt's mensuration canon (chord progression). Lead: Beethoven's Für Elise (transposed). Both compositions are in the public domain.

Related work

CANTUS RAVE sits in the napkinfilms music-video line — closer to Ten Thousand Days (Pachelbel-foundation classical EDM) and Arp Cathedral (cathedral arpeggio meditation) than to the rap line.

For the architecture, see Four Films From Code on why constraint is the feature.

License. CANTUS RAVE is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Share and adapt with attribution to "Organic Arts LLC" and a link to the original, non-commercial use only. Engine code is GPL-3.0-or-later. Pärt and Beethoven source compositions are in the public domain. Contact: j@organicartsllc.com


Produced with Napkin Films and ChipForge, tools built by Joshua Ayson and AI agents at Organic Arts LLC.