Expressive Music Data Processing and Generation with LSTM-Attention Neural Network

In this workshop we will learn how to process MIDI-format music data with expressive micro-timing, and how to generate expressive piano performances from symbolic data using an LSTM-Attention recurrent neural network.

Enrollment closed

Plan for the First Day

-> date: Saturday August 3rd

-> format: online in real time via Zoom (4 hours with a break)

-> in short: Music data processing: we will prepare a MIDI corpus for training using listening-based data processing.

-> start time: 8.00 in San Diego/Tijuana = 17.00 in Berlin
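To give a flavor of what "expressive micro-timing" means in practice, here is a minimal sketch (not the workshop's actual pipeline, and using hypothetical onset data rather than a real MIDI file): each note onset is compared against the nearest point on a quantized metrical grid, and the signed deviation is the micro-timing feature.

```python
def microtiming_deviations(onsets_sec, bpm=120, grid_division=4):
    """For each onset time (in seconds), return its signed deviation
    from the nearest point on a metrical grid with `grid_division`
    subdivisions per beat. Positive means the note is played late."""
    beat_sec = 60.0 / bpm                # duration of one beat in seconds
    step = beat_sec / grid_division      # duration of one grid step
    deviations = []
    for t in onsets_sec:
        nearest = round(t / step) * step # snap to the closest grid point
        deviations.append(t - nearest)   # signed deviation from the grid
    return deviations

# Hypothetical onsets: sixteenth-note grid at 120 BPM, played slightly off
onsets = [0.01, 0.26, 0.49, 0.76]
print(microtiming_deviations(onsets))
```

In a real corpus the onsets would come from parsed MIDI note events; the grid resolution and tempo handling here are simplifying assumptions.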

Plan for the Second Day

-> date: Sunday August 4th

-> format: online in real time via Zoom (4 hours with a break)

-> in short: Generation with the neural network: we will have a detailed introduction to the neural network structure of the LSTM-Attention model. We will use the extracted music information to train it to generate new music.

-> start time: 10.00 in San Diego/Tijuana = 19.00 in Berlin
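The attention step of an LSTM-Attention model can be illustrated in isolation: given the sequence of hidden states an LSTM produces, attention computes a softmax-weighted average of them. This is a minimal pure-Python sketch; the scoring function (dot product), dimensions, and toy vectors are assumptions, not the workshop model's actual architecture.

```python
import math

def attention(hidden_states, query):
    """Dot-product attention over a list of hidden-state vectors.
    Returns (context_vector, weights); the weights are a softmax,
    so they are non-negative and sum to 1."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]      # weighted average of hidden states
    return context, weights

# Three toy "LSTM hidden states" and a query vector
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context, weights = attention(states, query=[1.0, 0.0])
```

States that align with the query receive higher weight; in a trained model the query would itself be a learned function of the decoder state.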

For the next two days, participants will be assigned a task, with supervision in Telegram.

-> Example task 1: For students with programming skills: build your own neural network and train it on the data.

-> Example task 2: For students with an art background: experiment with the model and generate music as you wish!

Final presentations and review

-> date: Wednesday, August 7th, or another day (to be decided with the group)

-> format: asynchronous submission to the internal Telegram group, with subsequent comments from the tutor and other students.

Preferred requirements for students

Comfortable with Jupyter Notebook; understands basic Python code; interested in symbolic music, especially MIDI.

Please check these resources before the workshop: Jingwei's paper about the subject

Jingwei is a Ph.D. candidate in computer music at UC San Diego. She holds a bachelor's degree in mathematics from Shanghai Jiao Tong University, China, and a master's in mathematics from the University of Wisconsin-Madison, United States. Her work studies humanities subjects (primarily music) using methods of quantitative analysis and numerical simulation developed in the STEM fields. With the rise of neural networks, her research methods have shifted in that direction. On the practical side, she also builds high-performance music generation and audio signal processing apps.

Jingwei Liu - theoretical researcher on music and minds; practitioner of neural network programming and music generation.

Neural Networks for Sound Synthesis and Music Generation

In this workshop we will focus on generating audio spectra, including multi-track music generation (stems). Additionally, we will practice with a user interface from the Hugging Face website, where amateur AI enthusiasts can generate their own pieces without delving too deeply into Python programming.

Enrollment closed

Plan for the First Day

-> date: Saturday August 10

-> format: online in real time via Zoom (4 hours with a break)

-> in short: In the first half, we will be introduced to the field of music generation and sound synthesis, covering its history and modern-day tools, including neural networks. In the second half, Tornike will present his latest work on multi-track sound synthesis and discuss how it can be used in the process of music composition.

-> start time: 10.00 in San Diego/Tijuana = 19.00 in Berlin

Plan for the Second Day

-> date: Sunday August 11

-> format: online in real time via Zoom (4 hours with a break)

-> in short: Day two will start with a talk about Tornike's current work at Bose, sharing insights into cutting-edge advancements in the field. Then we will dive into practical exercises: we'll go through the process of using Python for music in Google Colab, a platform for machine learning. Additionally, more examples with easy-to-use Hugging Face interfaces will be shown. We will explore the tools and gain hands-on experience by building simple audio/music generation code. We will also discuss whether AI art can break away from merely emulating human art and music, and whether using AI creatively is even possible.

-> start time: 10.00 in San Diego/Tijuana = 19.00 in Berlin

For the next two days, participants will be assigned a task, with supervision in Telegram.

-> Example task 1: Building Mozart's "dice game" composition in Python.

-> Example task 2: Building small machine learning models and training them on synthetic data.

-> Example task 3: Composing a piece (hopefully something non-standard) using neural synthesis with the Hugging Face UI or Python.
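Example task 1 can be prototyped in a few lines: Mozart's "Musikalisches Würfelspiel" assembles a minuet by rolling two dice for each of 16 measure slots and looking the result up in a table of precomposed measures. A hedged sketch follows; the lookup table here is a made-up placeholder, not Mozart's actual table.

```python
import random

# Placeholder lookup table: for each of the 16 measure slots, dice totals
# 2-12 map to a precomposed-measure ID. (Mozart's real table has specific
# measure numbers; these IDs just encode slot and total for illustration.)
TABLE = [{total: slot * 100 + total for total in range(2, 13)}
         for slot in range(16)]

def roll_minuet(rng):
    """Roll two six-sided dice per measure slot and look up measure IDs."""
    piece = []
    for slot in range(16):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # sum of two dice
        piece.append(TABLE[slot][total])
    return piece

piece = roll_minuet(random.Random(42))  # seeded for reproducibility
print(piece)  # 16 measure IDs, one per slot
```

A full solution would map the IDs to actual notated measures and render them as MIDI, but the combinatorial core is just this table lookup.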

Final presentations and review

-> date: Wednesday, August 14th, or another day (to be decided with the group)

-> format: real-time in ZOOM

Preferred requirements for students: Patience :)

Please check these resources before the workshop: 

Tornike is an audio/music researcher and data scientist specializing in deep neural sound synthesis techniques and computational creativity research for music/audio generation and composition. He is a computer music Ph.D. candidate at UC San Diego, part of a research group working on reinforcement learning in musical improvisation, and a collaborator on a project about spatial-audio HRTF personalization with deep learning.

Tornike Karchkhadze - musician and Machine Learning specialist.

Previously, he earned a Master's degree in music from the Institute of Sonology at the Royal Conservatoire in The Hague, Netherlands. He studied at the Sibelius Academy, University of the Arts Helsinki, for two years and received a bachelor's degree in music technology from the Tbilisi State Conservatoire. He also holds a Bachelor of Science in physics from Tbilisi State University.

Currently, Tornike works at Bose as an Audio ML researcher. He has years of experience in creative musicianship (composition, production) in parallel with years of experience working in statistics, data science, and machine learning.

Worldmaking Jazz 1960s till Now

This workshop explores the sounds and ideas of vanguard improvising musicians from the 1960s until today. These musicians have expanded, and continue to expand, notions of geography, philosophy, spirituality, history, and futurity through music. We situate "worldmaking" as a site of social justice work, both political and speculative. The musicians explored serve as guides for reimagining past, present, and future.

Plan for the First Day

-> date: Saturday August 12

-> format: online in real time via Zoom (4 hours with a break)

-> in short: Introduction, short lectures, listening sessions, and group discussion. We'll brainstorm ideas for our own worldmaking projects and investigate where these show up in musics we're already interested in.

-> start time: 8 am in SD/Tijuana = 9 pm in Almaty = 3 pm UTC/Europe

Plan for the Second Day

-> date: Sunday August 13

-> format: online in real time via Zoom (4 hours with a break)

-> in short: We will have more lecture material, listening sessions, and extended group discussion. Finally we will workshop ideas for our own speculative worldmaking projects.

-> start time: 8 am in SD/Tijuana = 9 pm in Almaty = 3 pm UTC/Europe

For the next four days, participants will be assigned a task, with supervision in Telegram.

-> Example task 1: Get comfortable with deep listening practice and share your experience.

-> Example task 2: Consider potential worldmaking gestures in your own musical interests and describe them.

-> Example task 3: Imagine and develop your own speculative worldmaking project.

Final presentations and review

-> date: Friday, August 18th, or Saturday, August 19th (to be decided with the group)

-> format: real-time in ZOOM

Preferred requirements for students: "Decent headphones, please!"

Please check these resources before the workshop:

Paul Nicholas Roth plays saxophone across popular and experimental music genres, composes, publishes scholarship, and does community arts advocacy. Recent sounds are released through earwash records (which he co-directs; earwash.bandcamp.com); academic projects are published by both Routledge and Indiana University Press. He is currently finishing Ph.D. studies in UCSD's Department of Music in the interdisciplinary musicological "Integrative Studies" program. He also curates for the nettnett collective in downtown Tijuana. From 2013 to 2017 he served on the curatorial team of the leading experimental music venue "ausland" in Berlin. He holds master's degrees in both performance and musicology from the University of Nevada, Reno, and a bachelor's degree in jazz studies from the University of Miami, Florida.

Paul Nicholas Roth - Musician/scholar/organizer/advocate involved at various registers of music and art.

A Science and Technology Studies Approach to Research in Electronic Music

This class follows the history of electronic music technology, from early innovations to modern musical instruments. The aim is to expose students to methodologies that help situate electronic music instruments within their historical and sociopolitical contexts. Students can expect to learn about the history, musical genres, and key musical pieces surrounding electronic music instruments.

Plan for the First Day

-> date: Saturday August 19

-> format: online in real time via Zoom (4 hours with a break)

-> in short: We will discuss historical electronic music instruments ranging from the 1920s to the present day. We will identify key instruments, inventors, and musicians, and connect them to the musical genres and localities they are associated with. After the break we will look at different texts that discuss music technology from a theoretical perspective. Important names include Trevor Pinch, James Mooney, Don Ihde, and Walter Benjamin.

-> start time: 8 am in SD/Tijuana = 9 pm in Almaty = 3 pm UTC/Europe = 9 am in Mexico City

Plan for the Second Day

-> date: Sunday August 20

-> format: online in real time via Zoom (4 hours with a break)

-> in short: Each student will choose one instrument, innovator, or genre they wish to research and compile information about its historical context, locality, and the person or persons credited with its invention. Students will write a mock abstract for a paper discussing this instrument. We will discuss methodologies and theoretical approaches.

-> start time: 8 am in SD/Tijuana = 9 pm in Almaty = 3 pm UTC/Europe = 9 am in Mexico City

For the next four days, participants will be assigned a task, with supervision in Telegram.

-> Example task: Students can expect to be assigned readings, writing prompts, and research exercises to identify primary and secondary sources. There will also be listening sessions and discussions surrounding the assigned prompts.

Final presentations and review

-> date: Friday, August 25th

-> format: asynchronous submission to the class Telegram group, with comments

Preferred requirements for students: "Interest in theoretical approaches to sound and music"

Pablo Dodero is a musician, writer, and arts promoter from Tijuana, Mexico, currently pursuing his Ph.D. in Integrative Studies at the University of California San Diego. His professional background in the cross-border region, working in the music retail industry as an instrument buyer and repair person and performing experimental electronic music, led him to pursue graduate studies in musicology and sound studies. His research focuses on electronic music instruments, specifically their interface and design along transnational circuits, as well as institutional histories of experimental and electronic musics within Mexico.

Pablo Dodero - Electronic Musician and Scholar from México.

Dodero is a DIY touring musician and has performed electronic music under the names Adiós Mundo Cruel and Les Temps Barbares in the underground rave scenes of the U.S. and Mexico.