Jeremy Hyrkas is a music researcher focused on generative models for composition. He is particularly interested in models that directly create audio and in methods that help musicians more easily use statistical models in their creative process. His research has been presented at ICMC and ISMIR, and he has performed with the Computer Music Ensemble at NYU as part of the Interactive Performance Art Series. Jeremy holds computer science degrees from the University of Washington and Colorado State University and previously worked at Microsoft and Google. Combining his technical background with a lifelong passion for music composition and performance, he earned a Master of Music degree from NYU Steinhardt’s Music Technology program and now joins the University of California San Diego’s Music department as a PhD student in Computer Music.
Network modulation synthesis is a recent framework aimed at improving the usability and creative potential of autoencoders that produce musical audio. The framework offers algorithms for tweaking model parameters, for creating time-variant audio from non-autoregressive models, and for synthesizing multiple channels of monophonic audio linked by a shared audio target. This composition, bell / boom, heavily emphasizes the third use case: a complex synthesis tree of five channels, each using variations of the network modulation synthesis algorithm, creates groups of audio samples that differ in timbre but are linked by their connection to the root audio. These audio groups, along with other standalone samples created using network modulation, are played back using a custom Max/MSP patch and a MIDI controller. The resulting composition features rich audio textures and allows the performer to improvise sequencing and timing, while all audio in the piece was generated algorithmically using the CANNe autoencoder for musical synthesis. Technical details of the network modulation synthesis algorithm can be found in the author’s upcoming ICMC paper.