Hi, I’m Braxton Clark, a music technology educator and audio programmer who loves making complex sound technology approachable for learners at all levels. I design hands-on, project-based curricula that demystify Max/MSP, TouchDesigner, and Python, helping students build tangible instruments and installations while fostering curiosity and creativity. My journalism background informs a storytelling approach to sound and media. I’ve led programs at the Fulton County Future Public Art Lab and Georgia Tech, founded Atlanta Music Technology to empower makers, and am currently pursuing an M.M. in Music Technology at NYU. I’m passionate about mentoring the next generation of creative technologists and builders.

Braxton Clark

Available to hire

Skills

Experience Level: Expert

Language: English (Fluent)

Work Experience

Founder & Lead Instructor at Atlanta Music Technology
March 1, 2025 - Present
Founded educational organization dedicated to teaching audio programming, music hardware construction, and Max/MSP to students of all levels. Design and deliver group and private curricula covering synthesizer programming, audio-visual integration, and creative coding. Build community around accessible, hands-on music technology education in the Atlanta area.
Contract Instructor at Fulton County Future Public Art Lab
January 1, 2023 - Present
Teach group and private classes on creative technology including Max/MSP and TouchDesigner. Developed curriculum for programming custom synthesizers using Max/MSP. Integrated LiDAR camera systems into TouchDesigner using particle systems for immersive audio-visual installations. Delivered private instruction to Georgia Tech students through Art Lab partnership.
Sound Designer at Symphonic Distribution
January 1, 2021 - December 31, 2022
Created and managed production of sample packs for the Symphonic sample label. Recruited artists including umru, ghostsocial, and Ethereal for distribution partnerships. Developed music visualization software using TouchDesigner for promotional content. Edited video and audio using Final Cut Pro; managed social media accounts.
Production Intern at Symphonic Distribution
August 1, 2022 - August 1, 2023
Supported successful signing and rollout of sample pack products. Gained end-to-end experience in music product development and distribution pipeline.
Editor at Music Mondays
January 1, 2020 - Present
Promoted from staff writer to editor based on consistent content quality. Wrote weekly music reviews and artist interviews; stories garnered hundreds of thousands of page views. Helped grow platform audience to 4,000+ followers across social media channels.
Journalist at Stringer
January 1, 2024 - January 1, 2025
Produced professional AP-style news stories; conducted eyewitness interviews for breaking stories. Assisted in training an AI algorithm to detect journalistic bias. Collaborated with editorial team for online content distribution.

Education

Master of Music (M.M.) at New York University
January 1, 2026 - January 1, 2028

Industry Experience

Media & Entertainment, Education, Professional Services, Software & Internet, Other
Disjunct
This device was inspired by cubist perspective methods. It splits a sound into frequency bands and spatializes each band independently, so the high end of a sound might sit in a different location than the mids or lows. The graphics represent the positions of the bands in space, with the listener at the center.
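The core idea, splitting a signal into frequency bands and giving each band its own position, can be sketched outside of Max. Below is a minimal Python sketch under stated assumptions: `band_spatialize` is a hypothetical helper (not the actual Max patch), it isolates bands with hard FFT masks rather than the filters a real patch would use, and it reduces "position" to an equal-power stereo pan.

```python
import numpy as np

def band_spatialize(mono, sr, bands, azimuths):
    """Split a mono signal into frequency bands and pan each band to its
    own stereo position. bands: list of (low_hz, high_hz) edges;
    azimuths: pan position in [-1 (left), +1 (right)] per band."""
    spectrum = np.fft.rfft(mono)
    freqs = np.fft.rfftfreq(len(mono), 1 / sr)
    out = np.zeros((len(mono), 2))
    for (lo, hi), az in zip(bands, azimuths):
        # Isolate the band with a hard spectral mask (a sketch; a real
        # patch would use proper band-pass filters to avoid edge ringing).
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n=len(mono))
        # Equal-power pan law: map [-1, 1] onto [0, pi/2].
        theta = (az + 1) * np.pi / 4
        out[:, 0] += band * np.cos(theta)  # left channel gain
        out[:, 1] += band * np.sin(theta)  # right channel gain
    return out
```

With, say, the lows panned hard left and the highs hard right, the two components of one sound arrive from different locations, which is the Disjunct effect in miniature.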
Generative Spatial Synthesis
Ambisonics is a technique for simulating moving sound sources in a sound field, whether decoded to real loudspeaker arrays or rendered for psychoacoustic effect. I used the ICST Ambisonics Max externals for ambisonic panning to create a particle system that controls 40 unique FM voices. As visualized above, each sound source's position is updated roughly every 0.1 seconds. The FM voices are stochastically generated from parameters that are instantiated differently for each voice, so each tone moves independently through the field rather than the voices being modulated as a whole. The notes and timbres they represent are then panned and spatialized through the ambisonic system. In this example, I routed the output of the ambisonic FM system into my ambisonic grain delay, and the result is a highly dynamic, spatialized mix.

Both the generative FM synth and the granular spatial delay include Head-Related Transfer Functions (HRTFs): frequency-domain convolutions with impulse responses that emulate the slight delay of sound reaching each ear, the shape of the outer ear, and other psychoacoustic cues. Spatial audio engines use HRTFs to create a realistic soundstage in headphones. In the spatial grain delay, Swarm, I lowered the number of virtual speakers to 25 to reduce computational load and added a mix knob for blending ambisonic and binaural spatialization. This works well as a quick way for club producers to optimize for stereo systems, and for more ambient or home-listening artists to optimize for headphones.
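The voice architecture can be sketched in a few lines of Python, with the caveats that `fm_voice` and `spatial_fm_swarm` are hypothetical names, the voice count and ranges are illustrative rather than the patch's actual values, and equal-power stereo panning stands in for the ICST ambisonic encoder/decoder: each voice gets its own randomized FM parameters, and its pan position takes an independent random-walk step every 0.1 seconds.

```python
import numpy as np

def fm_voice(sr, dur, fc, fm, index):
    """Basic two-operator FM: carrier fc phase-modulated by fm."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

def spatial_fm_swarm(sr=16000, dur=1.0, n_voices=8, hop=0.1, seed=0):
    """Sum of stochastic FM voices. Each voice's position takes a
    random-walk step every `hop` seconds, so the voices drift through
    the field independently rather than being modulated as a whole.
    (Equal-power stereo panning stands in for ambisonic encoding.)"""
    rng = np.random.default_rng(seed)
    n = int(sr * dur)
    hop_n = int(sr * hop)
    out = np.zeros((n, 2))
    for _ in range(n_voices):
        # Per-voice stochastic FM parameters (illustrative ranges).
        fc = rng.uniform(100, 1000)    # carrier frequency, Hz
        fmod = rng.uniform(50, 400)    # modulator frequency, Hz
        idx = rng.uniform(0.5, 4.0)    # modulation index
        voice = fm_voice(sr, dur, fc, fmod, idx) / n_voices
        az = rng.uniform(-1, 1)        # initial pan position
        for start in range(0, n, hop_n):
            # Random-walk the position once per hop (~0.1 s).
            az = np.clip(az + rng.normal(0, 0.2), -1, 1)
            theta = (az + 1) * np.pi / 4   # equal-power pan law
            seg = voice[start:start + hop_n]
            out[start:start + hop_n, 0] += seg * np.cos(theta)
            out[start:start + hop_n, 1] += seg * np.sin(theta)
    return out
```

A full version would replace the pan law with ambisonic encoding per voice, decode to virtual speakers, and convolve each speaker feed with its HRTF pair for binaural rendering.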