Aural Intelligence: An artist's approach to Neural Audio Synthesis for music and sound art

03 – 05 Jun 2025
Bergen (NO)
Workshop

Join BEK for a workshop to dive into an artistic approach to A.I. and neural audio synthesis applied to sound and music.

The three-day workshop focuses on the workflow behind training generative AI models and on developing an artistic practice with tools such as RAVE (Realtime Audio Variational autoEncoder) and open-source multi-modal generative audio systems. It offers participants the opportunity to understand and implement the full pipeline of neural audio synthesis, from data collection and model training to artistic implementations for sound design and musical expression.
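
As a taste of the first step in that pipeline, the sketch below shows one common way to normalise a corpus before training: resampling everything to mono WAV at a single sample rate. The folder names, target sample rate, and choice of librosa and soundfile are illustrative assumptions, not the workshop's prescribed tooling.

```python
# Minimal dataset-preparation sketch (assumes librosa and soundfile are installed;
# folder names and the 44.1 kHz mono target are placeholders).
from pathlib import Path
import librosa
import soundfile as sf

SRC = Path("raw_audio")   # hypothetical folder with your source recordings
DST = Path("dataset")     # hypothetical output folder for training material
TARGET_SR = 44100         # assumed target sample rate

DST.mkdir(exist_ok=True)
for f in sorted(SRC.rglob("*.wav")):
    y, _ = librosa.load(f, sr=TARGET_SR, mono=True)  # resample and mix down to mono
    sf.write(DST / f.name, y, TARGET_SR)             # write a uniform-format WAV
```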

3 – 5 June 2025
10:00–15:00 each day, including a light lunch.
BEK in Bergen, Norway
Instructor: Hexorcismos aka Moisés Horta Valenzuela
Participation is free of charge.
Sign up via BEK. 

Throughout this three-day intensive workshop, participants will develop practical skills in gathering and preparing audio datasets specifically for training neural networks. You’ll learn to train custom RAVE models that capture unique sonic characteristics and explore the creative possibilities of latent space navigation, timbre transfer, and decoder-only generation. We’ll also investigate complementary architectures such as Stable Audio Open for text-to-audio generation, expanding the toolkit available for your sonic explorations.
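
To make "timbre transfer" and "decoder-only generation" concrete, here is a minimal offline sketch. It assumes an already exported RAVE model in TorchScript form exposing encode and decode methods; the file name and tensor shapes are placeholders rather than a reference implementation.

```python
import torch

model = torch.jit.load("my_rave_model.ts")   # hypothetical exported RAVE model
x = torch.randn(1, 1, 44100 * 4)             # placeholder: 4 s of mono audio at 44.1 kHz

with torch.no_grad():
    z = model.encode(x)                # audio -> latent trajectory
    y = model.decode(z)                # latent -> audio; timbre transfer when x is
                                       # material the model was never trained on
    z_random = torch.randn_like(z)     # latents that never came from real audio
    y_free = model.decode(z_random)    # decoder-only generation
```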

A significant focus will be on SEMILLA.AI, a tool created by Hexorcismos that enables real-time manipulation of sound characteristics and deep interaction with the latent space of sounds. This musical instrument lets musicians and sound artists navigate and transform sonic material with unprecedented fluidity. We will explore how SEMILLA.AI integrates with other neural audio tools to create dynamic sound environments that respond to performer input in real time.
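
SEMILLA.AI itself is a full instrument and is not reproduced here; the fragment below only hints at what "manipulating the latent space" means at the code level, again assuming an exported RAVE model with encode/decode methods and treating the dimension offsets as hypothetical macro controls.

```python
import torch

model = torch.jit.load("my_rave_model.ts")   # hypothetical exported RAVE model
x = torch.randn(1, 1, 44100 * 2)             # placeholder input audio (2 s, mono)

with torch.no_grad():
    z = model.encode(x)
    # Offsetting or scaling individual latent dimensions is the kind of gesture
    # a latent-space instrument typically maps its controls to.
    z[:, 0, :] += 1.5        # push the first latent dimension
    z[:, 1, :] *= 0.5        # narrow the range of the second
    y = model.decode(z)
```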

This workshop emphasizes hands-on experimentation, providing a blend of theoretical understanding and practical implementation. Each day builds toward greater creative autonomy, allowing participants to develop personalized approaches to neural audio synthesis that can be integrated into their artistic practice. Discussions will include both technical aspects of implementation and the aesthetic implications of these new tools for sound creation.

The workshop will culminate in a collaborative improvisation and autopoietic listening session in which multiple neural networks are connected through feedback logics, demonstrating their interactive and evolving capabilities in a live setting. Participants will also engage in discussions about the philosophical and ethical aspects of using AI in sound creation, ensuring a grounded understanding of the current discourse in the field.
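
As a rough offline caricature of that feedback idea (the actual session happens live rather than in a script), two hypothetical exported models can be made to "listen" to each other by chaining their encoders and decoders:

```python
import torch

model_a = torch.jit.load("model_a.ts")   # hypothetical exported models trained
model_b = torch.jit.load("model_b.ts")   # on two different corpora

x = torch.randn(1, 1, 44100)             # seed: one second of noise

with torch.no_grad():
    for _ in range(8):
        x = model_a.decode(model_a.encode(x))   # A re-interprets what it "hears"
        x = model_b.decode(model_b.encode(x))   # B responds through its own timbre
        # in a live setting each pass would also be played back into the room
```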

Requirements

Participants are expected to bring their own laptop and a custom dataset: a corpus of one hour or more of audio in .wav format. Pure Data should be installed in advance (free download via puredata.info). No programming experience is required, although knowledge of Python and Pure Data is certainly welcome.
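
If you want to double-check that your corpus meets the one-hour mark, a small script along these lines can help (the soundfile library and the folder name are assumptions):

```python
from pathlib import Path
import soundfile as sf

# Sum the duration of every .wav file under a (hypothetical) dataset folder.
total = sum(sf.info(p).duration for p in Path("my_dataset").rglob("*.wav"))
print(f"{total / 3600:.2f} hours of audio")
```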

Sign up and learn more on BEK's website: