Monday, November 13
 

08:30 GMT

Workshop Breakfast
Monday November 13, 2023 08:30 - 09:00 GMT

09:00 GMT

Workshop Welcome & Introduction

Monday November 13, 2023 09:00 - 09:30 GMT
Online

09:30 GMT

Workshop: Dynamic Cast: Practical DSP & Audio Programming (ONLINE)
Dynamic Cast: Practical DSP and Audio Programming

We'll explore some concepts from basic string synthesis and look into possible implementations, alongside best programming practices. This will be a self-contained workshop that aims to be accessible to all levels of experience.
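As a flavour of the territory, one classic approach to basic string synthesis is the Karplus–Strong algorithm: a burst of noise recirculated through a short delay line with gentle low-pass damping. The Python sketch below is illustrative only and is not necessarily the implementation the workshop will use.

```python
import random
from collections import deque

def karplus_strong(frequency, sample_rate=44100, num_samples=8000, seed=0):
    """Pluck a string: a noise burst recirculated through an averaging filter."""
    rng = random.Random(seed)
    period = int(sample_rate / frequency)   # delay-line length sets the pitch
    delay = deque(rng.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(num_samples):
        first = delay.popleft()
        out.append(first)
        # average the two oldest samples: a gentle low-pass "damper"
        delay.append(0.5 * (first + delay[0]))
    return out

tone = karplus_strong(440.0)
```

The delay-line length sets the pitch, and the averaging filter makes the tone decay and mellow over time, much like a plucked string.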

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). The Dynamic Cast workshop at ADC is designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
A laptop and paper/pen would be beneficial, but no one will be turned away. More information TBA before the workshop.

Speakers

Alex Korach

Software Developer, Ableton AG

Emma Fitzmaurice

QA Engineer, Focusrite
Emma Fitzmaurice is a QA engineer on the Novation team at Focusrite, sticking her fingers into as many parts of the hardware development pie as possible in an effort to make cool gear. She is charming, beautiful, wise and the proud author of her own bio.

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd


Monday November 13, 2023 09:30 - 12:30 GMT
TBA

09:30 GMT

Workshop: Build Your First Audio Plug-in with JUCE
Writing an audio plug-in can be a daunting task: there are a multitude of plug-in formats and DAWs, all with slightly different requirements. This workshop will guide you through the process of creating your first audio plug-in using the JUCE framework.

This workshop will cover:
- An introduction to JUCE
- Configuring a plug-in project
- Adding parameters to your plug-in and accessing them safely
- Creating a basic GUI
- Debugging and testing your plug-in

During the workshop, attendees will create a simple audio plug-in under the guidance of the JUCE developers.

Workshop Requirements:

Attendees must be able to compile the projects supplied in the most recent JUCE SDK using the corresponding IDE for their computer: Visual Studio 2022 on Windows, Xcode on macOS, or a Makefile on Linux. This may require installing Visual Studio 2022, Xcode, or the Linux dependencies. There will not be time to do this within the workshop itself.

You can clone JUCE with git from https://github.com/juce-framework/JUCE, or download the latest release from https://github.com/juce-framework/JUCE/releases/latest.

Windows: Open JUCE\extras\AudioPluginHost\Builds\VisualStudio2022\AudioPluginHost.sln and build in Visual Studio 2022.

macOS: Open JUCE/extras/AudioPluginHost/Builds/MacOSX/AudioPluginHost.xcodeproj and build in Xcode.

Linux: Run make in JUCE/extras/AudioPluginHost/Builds/LinuxMakefile.

Download the workshop materials: https://data.audio.dev/workshops/2022/build-first-plugin-with-juce/materials.zip

Speakers

Tom Poole

Director, JUCE / ADC
Tom Poole is a director of the open source, cross platform, C++ framework JUCE (https://juce.com). Before focussing on JUCE he completed a PhD on massively parallel quantum Monte-Carlo simulations of materials, and has been a foundational part of successful big-data and audio plug-in startups…


Monday November 13, 2023 09:30 - 12:30 GMT
TBA

09:30 GMT

Workshop: Dynamic Cast: Practical DSP & Audio Programming (IN-PERSON)
Dynamic Cast: Practical DSP and Audio Programming

We'll explore some concepts from basic string synthesis and look into possible implementations, alongside best programming practices. This will be a self-contained workshop that aims to be accessible to all levels of experience.

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). The Dynamic Cast workshop at ADC is designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
A laptop and paper/pen would be beneficial, but no one will be turned away. More information TBA before the workshop.

Speakers

Alex Korach

Software Developer, Ableton AG

Emma Fitzmaurice

QA Engineer, Focusrite
Emma Fitzmaurice is a QA engineer on the Novation team at Focusrite, sticking her fingers into as many parts of the hardware development pie as possible in an effort to make cool gear. She is charming, beautiful, wise and the proud author of her own bio.

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd


Monday November 13, 2023 09:30 - 12:30 GMT
TBA

09:30 GMT

Workshop: GPU Audio
GPU-based audio processing has long been considered something of a unicorn in both the pro audio industry and the GPU industry. The potential for utilizing a GPU's parallel architecture is both exciting and elusive, owing to the computer science issues of mapping sequential DSP algorithm designs onto it and the fundamental differences between MIMD and SIMD devices. Now possible, GPU-processed audio can offer processing power orders of magnitude greater than its CPU counterparts, fulfilling a cross-industry need that has quickly arisen as digital media content adopts AI, ML, cloud-based collaboration, virtual modeling, simulated acoustics and immersive audio, to name a few. Previous research had concluded that, because of heavy latencies and a myriad of computer science issues, DSP on GPUs was neither possible nor preferable. Recognizing the need for a viable, low-level standard and framework for real-time professional GPU audio processing, GPU AUDIO INC set out to solve these fundamental problems.

The purpose of this workshop is to give you hands-on experience of what GPU Audio processing solves, and what it can mean for your software and the future of audio. It is a taste of the GPU Audio SDK.

In this course you will learn about the fundamental problems solved by the new GPU Audio standard, go deeper into our core technology, and learn how to incorporate real-time, low-latency, GPU-executed DSP algorithms into your projects. You will take part in a hands-on deep-dive tutorial: building a simple processor, implementing your own IIR processor, measuring performance and playback, and "taking home" the code to build an FIR processor. All made possible by the GPU Audio Scheduler.
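The MIMD/SIMD tension mentioned above comes down to data dependencies: every output sample of an FIR filter can be computed independently, which maps naturally onto parallel hardware, while an IIR filter's recurrence forces sequential evaluation. The sketch below is purely illustrative and is not GPU Audio SDK code:

```python
def fir(x, coeffs):
    """Each output sample depends only on inputs: trivially parallelizable."""
    n_taps = len(coeffs)
    return [sum(coeffs[k] * x[n - k] for k in range(n_taps) if n - k >= 0)
            for n in range(len(x))]

def iir_one_pole(x, a):
    """y[n] = x[n] + a*y[n-1]: each sample needs the previous one (sequential)."""
    y = []
    prev = 0.0
    for sample in x:
        prev = sample + a * prev
        y.append(prev)
    return y
```

In the FIR case, every iteration of the outer loop could run on a separate SIMD lane or GPU thread; in the IIR case, the loop-carried dependency on `prev` is exactly the kind of problem a GPU audio scheduler has to work around.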

Prerequisite(s):
Familiarity with DSP algorithms and designs
Familiarity with modern SWE tools (IDEs, Git, CI/CD)
Note: a basic primer on elements of CUDA will be included in this workshop.

** This Training Lab is generously supported by NVIDIA & the Deep Learning Institute **

Monday November 13, 2023 09:30 - 12:30 GMT
TBA

09:30 GMT

Workshop: TBA
Monday November 13, 2023 09:30 - 12:30 GMT
TBA

12:30 GMT

Workshop Lunch
Monday November 13, 2023 12:30 - 14:00 GMT

14:00 GMT

Workshop: Analog Circuit Modelling for Software Developers using the Point-To-Point Library
During this workshop, participants will learn about digital modeling of analog circuits. This will be applied to the creation of several JUCE plug-ins. Traditional modeling techniques will be discussed along with the presentation of a circuit analysis library which automates the modeling process. This library, called "Point-To-Point Modeling," is intended for audio software developers interested in rapid prototyping and implementation of circuit modeling. Example JUCE plug-ins using the Point-To-Point library will be demonstrated, along with the process of quickly converting arbitrary schematics into C++ code.
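To illustrate the kind of manual step the library automates: a one-pole RC low-pass can be discretised by applying the bilinear transform to H(s) = 1/(1 + sRC). The component values below are hypothetical and the code is a sketch, not part of the Point-To-Point library:

```python
def rc_lowpass_coeffs(R, C, sample_rate):
    """Bilinear-transform H(s) = 1/(1 + sRC) into a one-pole digital filter."""
    K = 2.0 * sample_rate * R * C           # 2*RC/T, the bilinear warp factor
    b0 = 1.0 / (1.0 + K)
    b1 = b0
    a1 = (1.0 - K) / (1.0 + K)
    return b0, b1, a1

def process(x, coeffs):
    """Apply y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1] sample by sample."""
    b0, b1, a1 = coeffs
    y, x1, y1 = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 - a1 * y1
        y.append(yn)
        x1, y1 = xn, yn
    return y

# hypothetical component values: 1 kOhm and 100 nF, cutoff near 1.6 kHz
coeffs = rc_lowpass_coeffs(1e3, 100e-9, 48000)
y = process([1.0] * 500, coeffs)
```

The DC gain of the resulting filter is exactly 1, so a step input settles to 1.0, matching the analog circuit's behaviour at low frequencies.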

  • Attendees should have some experience using JUCE
Code repository for the workshop:
https://github.com/HackAudio/PointToPoint_LT
Code repository as an additional resource:
https://github.com/HackAudio/PointToPoint_MATLAB

Monday November 13, 2023 14:00 - 17:00 GMT
TBA

14:00 GMT

Workshop: TBA
Monday November 13, 2023 14:00 - 17:00 GMT
TBA

14:00 GMT

Workshop: TBA
Monday November 13, 2023 14:00 - 17:00 GMT
TBA

16:00 GMT

Online Open House
We will be opening our virtual venue, hosted on Gather Town, to online attendees so that they can connect ahead of time to test things out, get familiar with the online conference systems, and chat, socialize and interact with other attendees through a dynamic video chat system. Explore the venue, interact and have fun!

We will also open up access to the online conference web lobby page so you can also test this out and verify you are able to access the systems ahead of the event starting on Monday morning.

Online tech support will be available for the duration of this session, so we highly recommend all attendees take this opportunity to verify they can access the systems and troubleshoot any technical issues which might otherwise prevent or slow down access to the event.

Monday November 13, 2023 16:00 - 17:00 GMT
Gather Town

16:00 GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Monday November 13, 2023 16:00 - 17:00 GMT
Gather Town

18:00 GMT

ADC Welcome Evening
Come one, come all, to the ADC Welcome Reception!

Whether it’s your first time attending ADC, or your ninth, meet and chat with fellow attendees at an informal gathering the night before the Audio Developer Conference main conference begins!

If you are new to ADC, this will be a wonderful opportunity to get to know more community members! Meet some new friends, and see them the very next day at the conference! Members of the ADC team will be there to welcome you, and pleased to make some friendly introductions.

If you are already well connected, we invite you to help us welcome new folks and make them feel comfortable among us.

Monday November 13, 2023 18:00 - 21:00 GMT
Strongroom Bar 120-124 Curtain Rd, London EC2A 3SQ, UK
 
Tuesday, November 14
 

08:00 GMT

Breakfast
Tuesday November 14, 2023 08:00 - 08:30 GMT

08:30 GMT

Welcome Address
IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY: https://conference.audio.dev

Tuesday November 14, 2023 08:30 - 08:50 GMT
Track 1, Auditorium

09:00 GMT

An engineer’s guide to prototyping: building AI music tools for the 99%
How to go from idea, to lo-fi prototype, to validation, to hi-fi prototype, to production. 🚀

Exploring the method we used to develop and ship three broad-appeal consumer audio apps to millions of users this year.

Speakers

Jamie Pond

Software Engineer, mayk.it


Tuesday November 14, 2023 09:00 - 09:50 GMT
Track 3, Newgate

09:00 GMT

Building a high performance audio application with a web GUI and C++ audio engine
The era of using web UIs for audio applications is just beginning. How might we build a high-performance audio application on the foundations of the JUCE web component? How might we overcome some of its limitations, such as passing chunks of binary data to the GUI? How might we deal with the complexities of this dual-sided system written in two different languages? We have developed a solution for a high-performance application architecture comprising a C++ audio engine and a web GUI.

Both the C++ audio engine and web UI implement their own unidirectional data flow, and combine to form an application wide unidirectional data flow, allowing the GUI to send actions into the C++ application to initiate state changes. We will discuss tooling developed for sharing data types between the two languages without error-prone manual maintenance, as well as the communication protocol itself and how we overcame limitations by intercepting HTTP requests from the webview in the C++ application.

We will discuss the performance considerations of integrating the unidirectional data flow architecture with a real-time audio engine and the high-performance architecture of the Web GUI itself.
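The unidirectional data flow described above can be sketched as actions flowing through a pure reducer that produces the next state. The names and actions below are invented for illustration and are not Output's actual code:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EngineState:
    gain: float = 1.0
    playing: bool = False

def reduce(state, action):
    """Pure reducer: the only way state changes is by dispatching an action."""
    kind, payload = action
    if kind == "SET_GAIN":
        return replace(state, gain=payload)
    if kind == "TOGGLE_PLAY":
        return replace(state, playing=not state.playing)
    return state   # unknown actions leave state untouched

state = EngineState()
for action in [("SET_GAIN", 0.5), ("TOGGLE_PLAY", None)]:
    state = reduce(state, action)
```

Because state only ever changes through dispatched actions, both the web GUI and the C++ engine can observe a single, predictable stream of state transitions.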

Speakers

Colin Sullivan

Web Tech Lead, Output
I am a creative software developer with experience building music-related tools. Happy to chat software architecture esp. as it relates to systems, web technologies, and the future of real-time audio.

Kevin Dixon

Senior Audio Software Architect, Output, Inc.
I've been building consumer and professional music applications for both desktop and mobile platforms since 2008. Originally starting work on a streaming video platform for public safety, I was immediately exposed to the issues of communicating between a high-performance C++ engine…


Tuesday November 14, 2023 09:00 - 09:50 GMT
Track 4, Aldgate

09:00 GMT

Embedded software development - a wild ride!
Embedded software development (aka firmware) can be challenging, but it's incredibly rewarding. Sitting at the beating heart of all audio hardware products, it handles the UI, connects the physical and virtual, and transforms signals and sounds to bring the product to life.

Join us to hear about how it connects with the electronics, software development and QA worlds, and the fun you can have bringing Frankenstein to life!

Speakers

Simon Holt

Focusrite


Tuesday November 14, 2023 09:00 - 09:50 GMT
Track 2, Lower River Room

09:00 GMT

Real-time confessions: the most common “sins” in real-time code
This talk examines the most prevalent misconceptions and frequent errors encountered when audio developers handle real-time code in C++. With my background as a contractor in the audio industry, I'm often called in to help fix subtle bugs in, or review, code with real-time constraints. Yet I see (and have myself made) the same types of mistakes over and over again, resulting from a few common misconceptions about real-time C++ code. This talk offers an in-depth analysis of each of these misconceptions, debunking them with compelling examples from the audio industry.

Ranging from outright ignorance of C++'s data-safety rules, to the overuse of std::atomic and mistaken beliefs that locks and exceptions are forbidden in real-time code, this presentation navigates the landscape between the theoretical rules of the C++ standard and practical, real-world realities. This talk is an essential guide for developers seeking to avoid common pitfalls and write more efficient, reliable real-time code.
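A recurring problem in this space, communicating with the real-time thread without locks, is often solved with a single-producer/single-consumer ring buffer. The Python below only illustrates the index discipline; a real C++ implementation would use std::atomic indices with acquire/release memory ordering:

```python
class SpscRingBuffer:
    """Single-producer/single-consumer FIFO: no locks, indices only advance."""

    def __init__(self, capacity):
        self._buf = [None] * (capacity + 1)   # one slot kept empty to tell full from empty
        self._read = 0
        self._write = 0

    def push(self, item):
        """Producer thread only. Never blocks: returns False when full."""
        nxt = (self._write + 1) % len(self._buf)
        if nxt == self._read:
            return False
        self._buf[self._write] = item
        self._write = nxt
        return True

    def pop(self):
        """Consumer thread only. Returns None when empty."""
        if self._read == self._write:
            return None
        item = self._buf[self._read]
        self._read = (self._read + 1) % len(self._buf)
        return item
```

The key property is that neither side ever waits: the audio thread can always pop (or find the queue empty) in bounded time, which is what "real-time safe" demands.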

Speakers

Fabian Renn-Giles

Contractor & Consultant, Fielding DSP GmbH
Fabian is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Before this, he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands of commercial audio software…


Tuesday November 14, 2023 09:00 - 09:50 GMT
Track 1, Auditorium

10:00 GMT

Making it good (the principles of testing)
You take a great idea, work out the details, make it and everything is lovely. Except no. Be it hardware or software, seeing an idea through to a high-quality product requires constant vigilance. This session offers a look into the principles and techniques of testing and quality assurance.

Speakers

Emma Fitzmaurice

QA Engineer, Focusrite
Emma Fitzmaurice is a QA engineer on the Novation team at Focusrite, sticking her fingers into as many parts of the hardware development pie as possible in an effort to make cool gear. She is charming, beautiful, wise and the proud author of her own bio.


Tuesday November 14, 2023 10:00 - 10:50 GMT
Track 3, Newgate

10:00 GMT

Reactive embedded programming
An alternative approach to embedded programming suited to real-time and audio systems whereby the usual background polling is replaced with an entirely reactive structure.

How can we leverage a microcontroller's hardware for predictable scheduling? What would it look like to turn convention on its head and run our entire application in interrupts?

Speakers

Tom Waldron

Embedded Design Consultant, Baremetal Dev Ltd


Tuesday November 14, 2023 10:00 - 10:50 GMT
Track 2, Lower River Room

10:00 GMT

Real-time inference of neural networks: a practical approach for DSP engineers
In upcoming audio processing innovations, the intersection of neural networks and real-time environments is set to play a decisive role. Our recent experience of implementing neural timbre transfer technology in a real-time setting has presented us with diverse challenges. Overcoming them has provided us with significant insights into the practicalities of inferencing neural networks inside an audio plugin.

This talk presents a pragmatic approach: starting with a trained model, we guide you through the necessary steps for inferencing the model in a real-time environment. For this we rely on the ONNX Runtime library, a common inference engine that provides a unified set of operators compatible with several frameworks, including PyTorch and TensorFlow. Along the way we delve into the critical aspect of maintaining real-time safety and share proven strategies to ensure a seamless and uninterrupted signal flow. Moreover, we address the delicate balance between latency, performance, and stability.
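A core real-time-safety strategy in this setting is to allocate everything before processing starts and keep the per-block path allocation-free. The sketch below uses a stand-in callable in place of an actual ONNX Runtime session, so all the names here are hypothetical:

```python
class BlockInferenceProcessor:
    """Preallocate everything up front; the per-block path only reuses buffers."""

    def __init__(self, model, block_size):
        self.model = model                    # stand-in for a loaded network
        self.in_buf = [0.0] * block_size      # allocated once, reused every block
        self.out_buf = [0.0] * block_size

    def process(self, samples):
        # copy in place: no new list is allocated on the "audio thread"
        for i, s in enumerate(samples):
            self.in_buf[i] = s
        self.model(self.in_buf, self.out_buf)  # model writes into out_buf
        return self.out_buf

def gain_model(inp, out):
    """Trivial stand-in "network": halves the signal, writing in place."""
    for i, s in enumerate(inp):
        out[i] = 0.5 * s

proc = BlockInferenceProcessor(gain_model, 4)
out = proc.process([1.0, 2.0, 0.0, -2.0])
```

With a real inference engine, the same idea applies: bind preallocated input/output tensors once, then run the session per block without touching the allocator.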

Speakers

Valentin Ackva

Audio Developer, INSONE
Hello, I'm Valentin, an audio programmer and electronic musician based in Berlin, Germany. My passion lies at the intersection of music, programming, and technology. Currently, I'm working towards a Master of Science degree in Audio Communication and Technology at the Technological…

Fares Schulz

Student Assistant, Technische Universität Berlin
Hello, I work as a student assistant at the Electronic Studio of the Technische Universität Berlin. There I immerse myself in the world of audio programming while studying for my Master's degree in Audio Communication and Technology. During my previous studies in physics and audio…


Tuesday November 14, 2023 10:00 - 10:50 GMT
Track 1, Auditorium

10:00 GMT

Writing elegant DSP code with Rust
Rust has become an exciting alternative to C++ for audio programming. This talk will explain how Rust's unique type system can be leveraged to create elegant DSP code, with an emphasis on conciseness, clarity, and safety.

The talk will show that many features of audio programming DSLs can be achieved using advanced features of the Rust type system, and how Rust's zero-cost abstractions can be used to create DSP elements that are flexible, composable, and don't compromise performance. It will also show how to instantiate and implement audio processing graphs in imperative, functional, and declarative styles.

Speakers

Chase Kanipe

Chase Richard Kanipe


Tuesday November 14, 2023 10:00 - 10:50 GMT
Track 4, Aldgate

10:50 GMT

Break
Tuesday November 14, 2023 10:50 - 11:20 GMT

11:20 GMT

A more intuitive approach to optimising audio DSP code - Guiding the compiler through optimising your code for you
As audio developers we all want our code to be blazingly fast, DSP code in particular. But when reading up on how to optimise audio DSP code, it is easy to get sucked into a world of counting divisions, vector instructions, compiler intrinsics and inline assembly, and think: this is impossible. These are techniques with a very steep learning curve that require deep technical knowledge of how CPUs and compilers work. The resulting code is also often difficult to read and maintain, and possibly less flexible, as inline assembly and intrinsics are often tied to specific CPU architectures.

This talk will present a completely different approach to optimising, one that is more intuitive and accessible, and doesn’t trade speed for readability and maintainability of the code - Simply let your compiler do the hard work for you!

Compilers today are immensely good at optimising code. The difference between an optimised and un-optimised build of the same code can be an order of magnitude, if not more. Still, there are things we as programmers can do when we write our code that affect how well the compiler can optimise it.

In this talk we will look at the techniques compilers use to optimise code, and how to write code in a way that enables the compiler to optimise it as efficiently as possible. We will show useful patterns, and anti-patterns, that facilitate or hinder optimisation respectively. We will discuss how to benchmark and measure code and different kinds of bottlenecks (CPU-, memory- or pipeline-bound code), and how to get the compiler to tell us when it is not able to optimise efficiently.

We will go through a few case studies comparing the performance and generated assembly code before and after optimisation techniques have been employed. We will also take a look at how using functions from the C++ standard library compares to writing your own.

The main focus will be on optimising small, tight loops of audio DSP code that generally run directly from cache. The focus will not be on optimising higher level architecture, memory layout or cache-friendliness.
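One example of such a compiler-friendly pattern is breaking the loop-carried dependency of a running sum into several independent accumulators, which gives the optimiser separate dependency chains it can map onto vector lanes. Python is used here purely to illustrate the transformation; the actual speed-up materialises in compiled code:

```python
def sum_simple(xs):
    """One dependency chain: each iteration waits on the previous add."""
    acc = 0.0
    for x in xs:
        acc += x
    return acc

def sum_unrolled(xs):
    """Four independent dependency chains a vectorising compiler can run
    in parallel lanes; the final combine happens once at the end."""
    a0 = a1 = a2 = a3 = 0.0
    n = len(xs) - len(xs) % 4
    for i in range(0, n, 4):
        a0 += xs[i]
        a1 += xs[i + 1]
        a2 += xs[i + 2]
        a3 += xs[i + 3]
    return a0 + a1 + a2 + a3 + sum(xs[n:])   # handle the remainder
```

Both functions compute the same sum; the second merely exposes the independence the hardware needs. (In floating point the two can differ by rounding, which is one of the trade-offs such transformations involve.)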

The talk will come with a companion repository posted on GitHub.

Speakers

Gustav Andersson

Senior Software Developer, Elk Audio
Will code C++ and Python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination for everything that makes sound in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.


Tuesday November 14, 2023 11:20 - 12:10 GMT
Track 1, Auditorium

11:20 GMT

Industry standards - the agony and the ecstasy
The music technology ecosystem is reliant on interoperability mediated via standards.

But have you ever really considered the implications? What are the implications of building projects and environments out of plug-ins? Why are we still stuck with the MIDI protocol from 1983? Where's it all going next?

In this talk I'll cover a brief history of standards in our industry, consider what features you should look for when evaluating plug-in APIs, and provide a quick overview of where it may be going next with emerging technologies like MIDI 2.0 and Web Audio Modules.

Speakers

Tuesday November 14, 2023 11:20 - 12:10 GMT
Track 4, Aldgate

11:20 GMT

Legacy code for the learning engineer
Legacy code is code that works. But sometimes, it could work better. We want it to behave a little differently or be more performant. Maybe it could be easier to understand and maintain. Whatever the reason, making changes to a legacy system can be daunting, as doing so is almost always more complex than writing something new. The challenge is increased when we are unfamiliar with the code base, or inexperienced with these kinds of projects in general. It might be a slog. Do we still want to change the code? Probably!

In this talk I explore what we can learn about our codebases and engineering practices by working with legacy code. I present a large refactoring project I undertook in Ableton Live's 20+ year old codebase as a case study. Why did I do it? What did I learn? How did it turn out? What would I do differently next time? These questions are explored with an emphasis not just on doing this kind of work effectively, but also for figuring out when doing it is right for you as an individual and right for your team.

Speakers

José Díaz Rohena

Software Developer, Ableton AG
I've been working on audio software for about 4 years, first making audio plugins at Newfangled Audio, and now working on Ableton Cloud at, you guessed it, Ableton. I got into all of this as a musician, which I still am, but these days I'm more interested in making tools for others than…


Tuesday November 14, 2023 11:20 - 12:10 GMT
Track 3, Newgate

11:20 GMT

Music rendering in Unreal Engine: the Harmonix music plugin For MetaSounds
MetaSounds is Unreal Engine's graphical audio authoring system. It provides audio designers the ability to construct powerful procedural audio systems that offer sample-accurate timing and control at the audio-buffer level. Harmonix, the game studio behind the rhythm action games Rock Band and Dance Central, and the music mashup games DropMix and Fuser, joined Epic Games in the winter of 2021. Since the acquisition, the Harmonix audio development team has been hard at work building music-specific plugins for MetaSounds that add tight musical synchronization and rendering.

In this session, the technical lead of this team will give an overview of the problem space (tightly coupled audio/visual/gameplay synchronization in single-player and multi-player games), describe the ways in which they have been able to extend the MetaSounds system with a set of custom plugins, and demonstrate the functionality these plugins add to MetaSounds and the Unreal Engine.

Speakers

Tuesday November 14, 2023 11:20 - 12:10 GMT
Track 2, Lower River Room

12:20 GMT

Developing an AI-powered karaoke experience
Karaoke has been of popular interest for many years, from the first karaoke bars in the 1970s to the karaoke video games of today, and the recent progress in deep learning technologies has opened up new horizons. Audio source separation and voice transcription algorithms now give the opportunity to create a complete karaoke song, with instrumental track and synchronised lyrics, from any mixed music track. Real-time stems remixing, pitch and tempo control, and singing quality assessment are other useful audio features to go beyond the traditional karaoke experience. In this talk we will discuss the challenges we had to tackle to provide our users with a fully automatic and integrated karaoke system adapted for both mobile and web platforms.

Speakers

Clément Tabary

ML Engineer, MWM
Clément is a deep-learning research engineer at MWM. He applies ML algorithms to a wide range of multimedia fields, from music information retrieval to image generation. He's currently working on audio source separation, music transcription, and automatic DJing.

Thomas Hézard

Head of Audio R&D, MWM
Thomas leads the Audio Research & Development team at MWM, working with his team on innovative signal processing algorithms and their optimised implementation on various platforms. Before joining the MWM adventure, Thomas completed a PhD on voice analysis-synthesis at IRCAM in Paris…


Tuesday November 14, 2023 12:20 - 12:50 GMT
Track 2, Lower River Room

12:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 12:20 - 12:50 GMT
Track 1, Auditorium

12:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 12:20 - 12:50 GMT
Track 3, Newgate

12:50 GMT

Lunch
Tuesday November 14, 2023 12:50 - 14:00 GMT

12:50 GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Tuesday November 14, 2023 12:50 - 14:00 GMT
Gather Town

13:05 GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC22 virtual venue on Gather.

Please meet at the ADC22 Gather central meeting point (by the large ADC22 logo in front of the Apple exhibit booth).

Tuesday November 14, 2023 13:05 - 13:30 GMT
Gather Town

13:35 GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC22 virtual venue on Gather.

Please meet at the ADC22 Gather central meeting point (by the large ADC22 logo in front of the Apple exhibit booth).

Tuesday November 14, 2023 13:35 - 14:00 GMT
Gather Town

14:00 GMT

Aliasing, oversampling, and you
Aliasing distortion and oversampling have become especially popular discussion topics in the wake of widely available consumer analysis tools such as Plugin Doctor. In response, audio software users have become more vocal about aliasing, in many cases asking audio developers to provide oversampling.

As it turns out, not all oversampling is the same, we face several trade-offs when implementing it, and we don’t always need it. This talk aims to help you understand aliasing distortion, how oversampling addresses it, and some potential downsides of overuse. It also provides guidelines to help you decide when oversampling is necessary, what kind of oversampling to use, and mentions some open-source libraries that give you flexibility to make the right decision for your software.
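The underlying effect is easy to demonstrate: sampled at 48 kHz, a 30 kHz cosine produces exactly the same sample values as an 18 kHz cosine, because 30 kHz folds back around the 24 kHz Nyquist frequency:

```python
import math

SAMPLE_RATE = 48000.0

def sample_cosine(freq, num_samples):
    """Sample a cosine of the given frequency at SAMPLE_RATE."""
    return [math.cos(2.0 * math.pi * freq * n / SAMPLE_RATE)
            for n in range(num_samples)]

above_nyquist = sample_cosine(30000.0, 64)   # 30 kHz > 24 kHz Nyquist
alias = sample_cosine(18000.0, 64)           # folds to 48 kHz - 30 kHz = 18 kHz
```

The two sample sequences are numerically identical, which is why nonlinear processing (distortion, saturation) that generates content above Nyquist needs oversampling: the extra headroom lets those products be filtered out before returning to the original rate.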

Speakers

Sam Fischmann

Co-Founder, Musik Hack


Tuesday November 14, 2023 14:00 - 14:50 GMT
Track 2, Lower River Room

14:00 GMT

Collaborative songwriting and production with symbolic generative AI
Generative AI has experienced remarkable advancements in various domains, including audio and music. However, despite these breakthroughs, we have yet to reach a stage where musicians can seamlessly incorporate generative AI into their creative processes. In this talk, we will delve into the techniques, proposals, and ongoing work that can facilitate collaborative songwriting and production with machine learning.

During the session, we will explore several key topics:
  • Overview of existing tools and models - we will discuss the motivations behind symbolic generation versus raw audio for music production applications. Furthermore, we will highlight the contrasting approaches and techniques that aim to augment the creative process rather than replace it entirely.
  • Utilization of AI-generated MIDI as a songwriting tool - this will involve examining different ML architectures for conditional MIDI generation, as well as employing reinforcement learning (RL) to generate MIDI sequences.
  • Examples showcasing how speakers and other musicians currently utilize AI-generated MIDI as part of their songwriting/production process.

Attendees will gain insights into cutting-edge techniques and research, paving the way for a future where generative AI becomes an integral part of the creative process for musicians.
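At its simplest, symbolic generation operates on note-level data rather than raw audio. The toy first-order Markov chain below, far removed from the conditional and RL-based models the talk covers, shows what a symbolic (MIDI note number) representation looks like:

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count note-to-note transitions in a melody (MIDI note numbers)."""
    table = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the transition table to produce a new note sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break                      # dead end: no observed continuation
        out.append(rng.choice(choices))
    return out

melody = [60, 62, 64, 62, 60, 64, 62, 60]   # a seed phrase in MIDI note numbers
model = train_markov(melody)
phrase = generate(model, start=60, length=16)
```

Because the output is symbolic, a musician can edit, transpose, or re-voice the generated phrase before any sound is rendered, which is precisely the appeal over raw-audio generation for songwriting workflows.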

Speakers

Sadie Allen

PhD Student, Boston University


Tuesday November 14, 2023 14:00 - 14:50 GMT
Track 4, Aldgate

14:00 GMT

RADSan: a realtime-safety sanitizer
"ERROR: RealtimeSanitizer: call to malloc detected during execution of realtime function SketchyAlgorithm::process!"

We present RADSan, a realtime-safety sanitizer integrated into the LLVM project. Activated with a single argument to clang, RADSan allows developers to mark any function with a [[realtime]] attribute. At run time, realtime functions will error if RADSan detects activity that it knows is not realtime-safe.

Our talk will include:
  • an introduction to sanitizers: what they do and how they work,
  • an exploration of the realtime-safety testing problem space: what is (and isn't) possible, and how this influenced RADSan's design,
  • a deep dive into the components of RADSan and how they work together,
  • a demonstration of how to use RADSan to mark functions as realtime and test a system's realtime safety, and
  • an open discussion on how best to share this idea with the wider community.
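As a conceptual analogy only (RADSan itself instruments compiled code inside LLVM; the decorator and function names below are invented for illustration), the enforcement idea looks like this: code running in a real-time context trips an error when it reaches an instrumented unsafe call:

```python
import functools

_in_realtime_context = False

def realtime(func):
    """Mark a function as real-time; instrumented calls inside it will error."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global _in_realtime_context
        _in_realtime_context = True
        try:
            return func(*args, **kwargs)
        finally:
            _in_realtime_context = False
    return wrapper

def checked_malloc(size):
    """Stand-in for an instrumented allocator: fine normally, fatal in RT code."""
    if _in_realtime_context:
        raise RuntimeError("allocation detected in a real-time function")
    return bytearray(size)

@realtime
def process_block():
    return checked_malloc(1024)   # a real-time "sin": allocating on the audio path
```

The real sanitizer intercepts calls like malloc at the runtime level rather than through decorators, but the user-facing contract is the same: annotate a function, run your tests, and get an error at the exact point a non-realtime-safe operation occurs.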

Speakers

Tuesday November 14, 2023 14:00 - 14:50 GMT
Track 3, Newgate

14:00 GMT

Recent Updates to MIDI 2.0 and the newest MIDI 2.0 APIs in the Apple, Google, Linux and Microsoft operating systems
Members of the MIDI Association will provide a high-level overview of the latest updates to MIDI 2.0 specifications and the brand new MIDI 2.0 APIs in the Apple, Google, Linux, and Microsoft operating systems including the Windows Open Source driver funded by AMEI, the Japanese MIDI organization.

There will be overview presentations on the Piano, MPE, Orchestral Articulation, and Camera Control Profiles, and on the Network Transport specifications, all of which are nearing completion.

We will briefly explain the MIDI Association MIDI 2.0 logo licensing program.

Most importantly, we will explain how developers can get access to the MIDI 2.0 tools and open-source code that the MIDI Association and our members are making available to both MIDI Association members and the larger MIDI development community.


Tuesday November 14, 2023 14:00 - 14:50 GMT
Track 1, Auditorium

15:00 GMT

Bug-free audio code: leverage simple DSP principles to write rock-solid music software every time
How many times have we compiled our audio plugin or app and launched it only to find out that "something is glitching"?

How can we be sure that we submit a correctly implemented audio feature in a pull request?

And how can we detect existing problems and precisely locate them in our codebase?

I've been pursuing the answers to these questions since my first day in the audio industry, and in this talk I am going to share my personal favorites. This is not just my own experience; I've actively asked community members for their strategies at every opportunity I got. This is the collected wisdom of more than a single developer.

In the talk, I will:
  1. outline why shipping bug-free code is vital for your business and your own sanity,
  2. show you simple yet profound strategies for ensuring software correctness based on digital signal processing (DSP), including:
    1. taking advantage of phase cancellation properties,
    2. leveraging the power of the FFT for frequency-manipulation algorithms,
    3. using underused DSP tools such as total harmonic distortion (THD), the Farina sweep, and the pole-zero plot,
    4. discovering the power of reference audio renders,
  3. explain how to safely reuse stable, tested DSP code from other developers,
  4. show how to interoperate C++ and Python to access powerful numerical libraries and test complicated scenarios,
  5. show how to correctly implement even the most advanced DSP algorithms and optimizations,
  6. show real-world examples where these principles helped solve hidden audio bugs, including
    1. pitch tracking,
    2. pitch shifting, and
    3. the flanger effect.
With this knowledge, you'll be able to:
  1. approach developing new features with greater confidence,
  2. enjoy more inner peace during development, review, and deployment,
  3. write correct audio code every time!
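The phase-cancellation strategy mentioned above can be sketched as a test helper (names and thresholds hypothetical): subtract a trusted reference render from the processed output and require the residual to be near silence.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// The DSP under test: a trivial half-gain, standing in for a real effect.
std::vector<float> renderGainHalf(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = 0.5f * in[i];
    return out;
}

// Phase-cancellation check: the difference between the render under test
// and a trusted reference render must stay below a small threshold.
bool cancelsAgainstReference(const std::vector<float>& dut,
                             const std::vector<float>& ref,
                             float epsilon = 1.0e-6f) {
    if (dut.size() != ref.size()) return false;
    for (std::size_t i = 0; i < dut.size(); ++i)
        if (std::fabs(dut[i] - ref[i]) > epsilon)
            return false;                   // audible residual: a bug slipped in
    return true;
}
```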

Speakers
avatar for Jan Wilczek

Jan Wilczek

Lead Educator & Software Consultant, WolfSound
Jan Wilczek graduated with honors from Friedrich-Alexander-Universität Erlangen-Nürnberg, having completed a master's program in Advanced Signal Processing and Communications Engineering. He is an Audio Developer of Music Maker JAM at Loudly GmbH in Berlin, an app to make loop-based... Read More →


Tuesday November 14, 2023 15:00 - 15:50 GMT
Track 1, Auditorium

15:00 GMT

Vars, values and ValueTrees: Managing state with JUCE
Managing various types of state, including global settings, presets, UI state, and application state, is crucial for building robust software. But it comes with challenges: state must be synchronized throughout the app, state must be stored and restored, and temporary state, like a level meter or keyboard, must live somewhere. There is much debate about the best way to solve these problems. Centering your app around a model, a single source of truth, has long been recognized as a powerful pattern, often referred to in the context of MVC (Model-View-Controller). Adopting this approach will make extending and maintaining your software simpler. In JUCE you will find many classes designed for dealing with state, such as AudioProcessorValueTreeState, ApplicationProperties, Value, and var. Using a real-world example, this talk will serve as a guide to writing a scalable data model for your JUCE application.

Speakers
JB

Jelle Bakker

JUCE developer, JB Audio


Tuesday November 14, 2023 15:00 - 15:50 GMT
Track 3, Newgate

15:00 GMT

Virtual acoustics: recreating natural phenomena in the digital domain
Audio in the VR/AR domain may become a dominant segment of this industry in the coming years. While methods for processing spatial audio already exist (and formal standards have been created and utilized), another facet of immersive audio remains (mostly) untouched: virtual acoustic systems that mimic the acoustics of virtual spaces.

This talk will go over various techniques for handling room acoustics in both real-time and offline settings, covering their benefits as well as their drawbacks. Additionally, we will look at how to approximate room acoustics without breaking the proverbial (CPU) bank.

Speakers
avatar for Aidan Baker

Aidan Baker

Software Developer, Lese Audio Technologies
I am an audio developer who is fascinated by sound's physical phenomena. Presently I run a company called Lese which develops audio plugins + acoustic simulation software.


Tuesday November 14, 2023 15:00 - 15:50 GMT
Track 2, Lower River Room

15:00 GMT

Diversity in music technology: Initiatives and insights from Music Information Retrieval
Like many STEM fields, music technology faces challenges attracting and retaining diverse community members. Since 2011, the International Society for Music Information Retrieval (ISMIR) and Women in Music Information Retrieval (WiMIR) communities have sought to address this issue through a series of initiatives – ranging from financial support to attend ISMIR conferences to workshop events and mentorship – that were launched to promote opportunities for women in the field. In this talk I will discuss the initiatives: Their motivation and formation, complementary aims, success factors, and evolution toward supporting a broader range of underrepresented groups. I will conclude with a set of insights that may inform the design of diversity initiatives in other music technology communities.

Speakers

Tuesday November 14, 2023 15:00 - 15:50 GMT
Track 4, Aldgate

15:50 GMT

Break
Tuesday November 14, 2023 15:50 - 16:20 GMT

16:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 16:20 - 16:50 GMT
Track 3, Newgate

16:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 16:20 - 16:50 GMT
Track 4, Aldgate

16:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 16:20 - 16:50 GMT
Track 2, Lower River Room

16:20 GMT

Sponsored Talk TBA
Tuesday November 14, 2023 16:20 - 16:50 GMT
Track 1, Auditorium

17:00 GMT

KEYNOTE: Topic to be confirmed
TBA

Speakers
AX

Anna Xambó Sedó

Senior Lecturer in Music and Audio Technology, De Montfort University


Tuesday November 14, 2023 17:00 - 18:00 GMT
Track 1, Auditorium

18:00 GMT

iLOK Waterfront Social Mixer
Visit the Gather iLOK Waterfront for an informal social mixer before the start of the rebroadcast schedule. Hang out, play games with other attendees and catch up with old friends.



Tuesday November 14, 2023 18:00 - 18:30 GMT
Gather Town

18:00 GMT

Diversity In Audio Reception
Tuesday November 14, 2023 18:00 - 19:30 GMT

18:00 GMT

Evening Meal & Networking
Tuesday November 14, 2023 18:00 - 19:30 GMT

18:30 GMT

Open Mic Night (Online)
The ADC Open Mic Night comes to Gather for our online conference attendees! A fun, informal online event with lightning talks, music performances, and some impromptu standup comedy.

If you are attending ADC online, you can contribute to the online Open Mic Night with a 5-minute talk or performance! Please use the sign-up form here

This is an event exclusively for our online attendees. It won't be recorded, published, or streamed.

Tuesday November 14, 2023 18:30 - 20:00 GMT
Gather Town

19:30 GMT

The ADC Quiz
Join us and test your knowledge of music, lyrics, random facts and more at the ADC Quiz!
Bring your friends, or meet new ones as you work in teams to win incredible prizes.

Tuesday November 14, 2023 19:30 - 21:00 GMT

21:00 GMT

Networking
Tuesday November 14, 2023 21:00 - 22:00 GMT
 
Wednesday, November 15
 

08:30 GMT

Breakfast
Wednesday November 15, 2023 08:30 - 09:00 GMT

09:00 GMT

Fast audio thread synchronization for GPU data
While building my GPU-based physics simulation instrument Anukari (https://www.youtube.com/watch?v=nUO6iMcbao4), I had to solve a number of significant challenges, and I'll explain my solutions in this talk. The talk is not about Anukari per se; rather it's about some of the interesting solutions I developed as part of building it.

One challenge had to do with synchronizing Anukari's data model from the GUI thread to the audio thread. Anukari models arbitrary networks of masses and springs, and can simulate close to a thousand masses and many thousands of springs. This data model is thus rather large, and it was nontrivial to deliver model updates from the GUI to the audio thread. I designed a reliable wait-free approach that works without mutexes or memory allocation, supports transactions, and requires minimal data transfer between threads. I will discuss a few technologies that I used together, including wait-free SPSC queues, the difference between wait-free and lock-free algorithms, reasons for avoiding mutexes and memory allocations, and custom data structures that avoid memory allocation.
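The wait-free SPSC queue mentioned here can be sketched as follows (an illustrative minimal version, not Anukari's actual implementation, which adds transactions and avoids copies):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal wait-free single-producer single-consumer ring buffer.
// One slot is kept empty to distinguish "full" from "empty".
template <typename T, std::size_t N>
class SpscQueue {
    std::array<T, N> buf{};
    std::atomic<std::size_t> head{0};  // written only by the producer
    std::atomic<std::size_t> tail{0};  // written only by the consumer
public:
    bool push(const T& v) {            // call from the producer thread only
        const auto h = head.load(std::memory_order_relaxed);
        const auto next = (h + 1) % N;
        if (next == tail.load(std::memory_order_acquire))
            return false;              // queue full, caller retries later
        buf[h] = v;
        head.store(next, std::memory_order_release);
        return true;
    }
    std::optional<T> pop() {           // call from the consumer thread only
        const auto t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire))
            return std::nullopt;       // queue empty
        T v = buf[t];
        tail.store((t + 1) % N, std::memory_order_release);
        return v;
    }
};
```

Because each index is written by exactly one thread, both operations complete in a bounded number of steps, which is what makes the queue wait-free rather than merely lock-free.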

Another challenge was running GPU physics simulations at audio sample rates (48 kHz). Memory bandwidth was a major issue, as were kernel execution latency and cross-GPU-thread synchronization. I will discuss the OpenCL language and its limitations, the approaches I used to deal with OpenCL kernel execution latency, on-GPU thread synchronization, and memory optimizations.

And, of course, I will show how all of this ties together into a reliable system for synchronizing the GUI and audio threads with no waits, despite a large data model and compute-intensive physics simulation.

Prerequisite(s): familiarity with C++ programming; familiarity with thread synchronization primitives like mutexes.

Speakers
avatar for Evan Mezeske

Evan Mezeske

Solo Entrepreneur, Anukari Music
Evan Mezeske is a software engineer and amateur musician based out of Arizona, USA. He spent the last 10 years working as a senior engineering leader on large-scale distributed systems at Google before defecting in early 2023 to found his music software company, Anukari Music. Anukari's... Read More →


Wednesday November 15, 2023 09:00 - 09:50 GMT
Track 2, Lower River Room

09:00 GMT

Properties of chaotic systems for audio
Chaotic systems appear naturally in sufficiently complex interactions, whether in electrical circuits, classical mechanics or entirely invented scenarios. It is therefore no surprise that people realised the potential of such systems for generating and transforming sound in unique and creative ways.

However, it is not easy to explore the topic using intuition alone. It is prudent to follow any theoretical introduction with interactive tools capable of visualising phase plots, tracking nonlinear orbits and estimating numerical properties. For this reason, we will provide code examples for all systems presented in the talk.

After a short dive into fixed points and bifurcation, we will show practical examples of chaotic systems. Notably, we will focus our attention on modding/bending them to achieve musically relevant outcomes. We will tame chaos, reining it in and making it work for us.

Next, we will tie back the theory to differential equations. There, we will discover the direct implementation of a chaotic system with an analog circuit.

Finally, provided there is enough time, we will move onto more advanced topics: measuring fractal dimensions, introducing/removing synchronisation in dynamic fashion and producing delay coordinate maps.
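As a minimal, hedged illustration of the kind of system discussed (the talk provides its own code examples; this is just the textbook logistic map): below r = 3 the orbit settles on a fixed point, while at r = 3.9 it is chaotic.

```cpp
// One step of the logistic map x -> r * x * (1 - x).
inline double logisticStep(double x, double r) { return r * x * (1.0 - x); }

// Iterate the map from x0. For 1 < r < 3 the orbit converges to the
// fixed point 1 - 1/r; past the bifurcation cascade it becomes chaotic
// but remains confined to [0, 1].
inline double logisticOrbit(double x0, double r, int steps) {
    double x = x0;
    for (int i = 0; i < steps; ++i) x = logisticStep(x, r);
    return x;
}
```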

Speakers

Wednesday November 15, 2023 09:00 - 09:50 GMT
Track 4, Aldgate

09:00 GMT

The architecture of digital audio workstations (and other time-based media software)
The ADC community has produced a wonderful wealth of material on audio software development! But there is a relative dearth on “the big picture”: of how all these coding techniques, practices, strategies, and design patterns, can interrelate, giving rise to the complex beast that is a modern Digital Audio Workstation (DAW).

While there are some open-source DAWs to study, there is little material on their architecture apart from the source code itself, with the main exception being the (GUI-less) Tracktion engine, of course.

Although implicit or emergent architecture may be sufficient for small to medium-sized codebases, a large codebase such as a DAW demands deliberate attention to design.

We present the low-level design patterns for the DAW engine and presentation layers, the UI/UX design patterns these interrelate to, and the architectural design patterns for the complete system. Crucially, the main emphasis of our talk is not the details of the above, but how they all together define a modern DAW.

We then present the challenges faced in defining such an architecture to satisfy the specific “Quality Attributes” of a DAW, e.g. a non-destructively alterable model and the real-time constraints that necessitate lock-free communication between threads. We discuss the compromises needed to satisfy such conflicting needs, and some challenges that lie ahead as the software category evolves, e.g. with MIDI 2.0 around the corner.

While we concentrate on DAWs, much of this discussion also generalises to the broader category of Time-Based Media software.

The presentation is grounded in two DAW-like applications we have developed: one is a desktop application with a GUI, and the other is a “headless”, embedded DAW, with a separately executed GUI application. They are both very different, each lacking central features that the other has. But together, and even more so through their differences, they serve as great illustrations of the concepts we present.

This subject area is vast, and a review of every topic and technique is impossible in the scope of a single talk. We give a good introductory overview, hopefully laying a foundation for further learning and knowledge dissemination in the community.

Speakers
avatar for Ilias Bergström

Ilias Bergström

Senior Software Engineer, Elk Audio
Senior Software Engineer, Elk. Computer Scientist, Researcher, Interaction Designer, Musician, with a love for all music but especially live performance. I've worked on developing several applications for live music, audiovisual performance, and use by experts, mainly using C++. I get... Read More →


Wednesday November 15, 2023 09:00 - 09:50 GMT
Track 1, Auditorium

09:00 GMT

The sound of audio programming - developing perfect glitch
Audio programming mistakes can produce very interesting sounds. In this talk we are going to look at these mistakes and even listen to them. We’ll try to identify some of the coding errors solely by ear and develop “perfect glitch”. Some examples that we will examine: clipping, discontinuity, aliasing, phase cancellation, latency issues, buffering problems. Through practical demonstrations, we will not only listen to these unique sounds but also learn how to recognize them in our own audio projects. Moreover, we will delve into techniques to mitigate and avoid these typical problems.

Speakers
avatar for Balazs Kiss

Balazs Kiss

VP of Engineering, Synervoz


Wednesday November 15, 2023 09:00 - 09:50 GMT
Track 3, Newgate

10:00 GMT

Creating ubiquitous, composable, performant DSP modules
Companies and independent developers don't restart from scratch at each new project. They rely on a reusable technological base and build their final products upon that. For most software development tasks it is absolutely normal to use libraries developed by external suppliers, but for a number of very specific reasons this is less common when it comes to music DSP.

In a way, this is the sequel to my previous ADC talk. I'll show how my company, following my own advice, managed to create a toolkit of actually (re)usable music DSP algorithms featuring unprecedented levels of ubiquity, composability, and performance.

In this talk I'll describe the cultural, architectural, and technical challenges we faced and the solutions we adopted in detail, especially with respect to:
  • choice of DSP algorithms
  • inadequacies and limitations of general-purpose programming languages
  • minimizing reliance on programming language and target platform features
  • designing consistent, performant, and unopinionated APIs
  • running identical code on all platforms, from microcontrollers to the web, including desktop and mobile
  • integration with external tools

Speakers
avatar for Stefano D'Angelo

Stefano D'Angelo

Founder CEO, Orastron Srl unipersonale
I am a music DSP researcher and engineer, as well as the founder and CEO of Orastron. I help companies around the world, such as Arturia, Neural DSP, Darkglass Electronics, and Elk, in creating technically-demanding digital synthesizers and effects. I also strive to push audio technology... Read More →


Wednesday November 15, 2023 10:00 - 10:50 GMT
Track 1, Auditorium

10:00 GMT

Lessons learned from implementing a real-time multichannel audio application on Linux
Linux-based computing platforms are extremely popular for implementing audio processing in embedded systems, from low-power consumer devices running on ARM processors to professional multichannel solutions requiring the power of x86-based chips.

In this talk we will explore the different features that the Linux kernel offers to control real-time performance and ensure glitch-free audio processing. We will study examples from a commercially available and actively maintained product, including successes and failures.
Topics that we will look at include:

  • The Linux kernel Real-Time (PREEMPT_RT) patch
  • Controlling thread real-time priority and CPU affinity
  • Measuring performance
  • Common pitfalls
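The priority and affinity controls above can be sketched like this (Linux-specific, illustrative only; the product's actual code will differ, and SCHED_FIFO typically requires elevated privileges, so failure is reported rather than fatal):

```cpp
#include <sched.h>

// Pin the calling thread to a single CPU so the audio thread is not
// migrated between cores (pid 0 means "the calling thread").
bool pinToCpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set) == 0;
}

// Request a real-time FIFO scheduling class for the calling thread.
// Valid SCHED_FIFO priorities are 1..99; without CAP_SYS_NICE or an
// rtprio rlimit this call fails with EPERM and we return false.
bool requestFifoPriority(int priority) {
    sched_param param{};
    param.sched_priority = priority;
    return sched_setscheduler(0, SCHED_FIFO, &param) == 0;
}
```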

Speakers
avatar for Olivier Petit

Olivier Petit

Head of Creative Software, L-Acoustics
After an MSc in Integrated Circuit design, I joined the Creative Technologies department of L-Acoustics in 2018 as a C++ software engineer. I have been taking an active part in developing innovative technologies to bring immersive audio to live performances, striving to better... Read More →


Wednesday November 15, 2023 10:00 - 10:50 GMT
Track 2, Lower River Room

10:00 GMT

Odd challenges of designing a feedback delay network reverb with deep learning
The past lustrum has seen a rise of interest in the optimization of audio effect and synthesizer parameters, in use cases including parameter inference from audio input, as well as approaches to Differentiable Digital Signal Processing (such as Magenta's DDSP). However, there are still notable limitations in the area, exemplified well by the problems posed by some fundamental DSP units such as IIR filters: issues of stability, interpretability, and differentiability.

In this talk, we will take on all of the above, in the context of a research endeavour into modelling room impulse responses using Feedback Delay Networks (FDNs). Covering a range of approaches, from naive to more advanced, we will take multiple detours to look into machine learning challenges in the context of direct applications to DSP, such as approximating common transformations, tackling computational efficiency, taming the explosivity of feedback systems, and, at last, hopefully, differentiating the undifferentiable.
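For orientation, a toy FDN (purely illustrative; the research models in the talk are far richer): two coprime delay lines mixed through an orthogonal feedback matrix, with a decay gain below one keeping the feedback system stable.

```cpp
#include <array>
#include <cstddef>

// A two-line feedback delay network. The 2x2 rotation matrix is
// orthogonal, so it preserves energy; the scalar gain g < 1 then sets
// the decay rate and guarantees stability.
class TinyFDN {
    static constexpr std::size_t D1 = 149, D2 = 211;  // coprime delay lengths
    std::array<float, D1> line1{};
    std::array<float, D2> line2{};
    std::size_t i1 = 0, i2 = 0;
    float g;
public:
    explicit TinyFDN(float decayGain) : g(decayGain) {}
    float process(float in) {
        const float o1 = line1[i1], o2 = line2[i2];
        const float c = 0.70710678f;                  // cos/sin of 45 degrees
        line1[i1] = in + g * ( c * o1 + c * o2);      // feed back through
        line2[i2] = in + g * (-c * o1 + c * o2);      // the rotation matrix
        i1 = (i1 + 1) % D1;
        i2 = (i2 + 1) % D2;
        return 0.5f * (o1 + o2);
    }
};
```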

Speakers
WK

Wojciech Kacper Werkowicz

Masters Research Student, Institute of Sonology, Royal Conservatoire The Hague


Wednesday November 15, 2023 10:00 - 10:50 GMT
Track 3, Newgate

10:00 GMT

Putting together a software beta testing team
Putting together functional teams is hard. It is especially difficult when your team comprises external software beta testers, i.e. those testers outside of your organisation. These are the people who, usually free of charge, will help identify bugs, glitches, and usability issues within the software before it is released to the public. They are essential, as beta testing is an important part of an application's or plug-in's development roadmap.

Leading the charge is the beta software test manager, who is, ideally, looking for a team of diverse, motivated testers who are fully engaged on the beta forum and feed back relevant, well-thought-out reports within a set timescale.

Unfortunately, it is not an ideal world and the talk will tackle the challenges of putting together an external software beta team, who are by definition remote, and then managing it.

Also explored will be why beta teams fail, the qualities of the people you would like on your beta team, and a software tester's motivations for signing up.

Challenges in building a team include:
  • Arranging a contract: the legal legwork
  • The hiring process: ideal candidates
  • Training: onboarding
  • Reporting process: the forum
  • Evaluation process: are you getting what you need?
  • The beta software test manager: what every team needs

Speakers
avatar for Karen Down

Karen Down

Director, The Support Squad
My passion, fuelled by good coffee, has always been Support: first as an assistant audio engineer, then many years providing operational training and technical support on large-format analogue and digital audio consoles. It has included working with software developers and companies... Read More →


Wednesday November 15, 2023 10:00 - 10:50 GMT
Track 4, Aldgate

10:50 GMT

Break
Wednesday November 15, 2023 10:50 - 11:20 GMT

11:20 GMT

Building an accessible JUCE app
During this talk we will investigate what goes into making an accessible JUCE app, both in the design and in the code. We’ll go over component grouping and hierarchies, keyboard focus orders, accessibility handlers and more, using real-world case studies and concrete examples.

Most of the session will be about screen reader accessibility, since you may be new to using an accessibility API. We will, however, also briefly visit topics such as using colours, localisation and web technology, as well as responding to user feedback.

The talk is designed for people who may not know where to begin when building an accessible app with JUCE, or simply for those who would like to hear some perspectives regarding creating accessible audio apps.

Speakers
avatar for Harry Morley

Harry Morley

Software Developer, Focusrite
Harry has been a software developer at Focusrite for 4 years. He mainly works on C++ software that interacts with audio hardware, such as the Vocaster and Scarlett interfaces. Harry loves talking all things music, creativity and accessibility. Before Focusrite, Harry studied MA Computational... Read More →


Wednesday November 15, 2023 11:20 - 12:10 GMT
Track 3, Newgate

11:20 GMT

Digital modelling of the Roland RE-201
This talk will discuss digital modelling of the RE-201, breaking down the subsystems present within the device and challenges that arise in acquiring total perceptual accuracy in software simulations. Comparisons of various methods and discussion of the positives and negatives of each method will be featured in the talk.

Speakers

Wednesday November 15, 2023 11:20 - 12:10 GMT
Track 4, Aldgate

11:20 GMT

Inference engines and audio
Machine learning has become a buzzword in recent years, but how does it actually work? This talk aims to introduce and explain inference pipelines. We’ll look at commonly used inference engines, how they work, their suitability for use in audio applications, and how to go about creating your own.

Also introduced will be an approach to writing a custom inference engine for the Cmajor platform.
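As a rough illustration of what an inference engine ultimately executes (an assumed toy example, not Cmajor's actual API): a dense layer followed by a ReLU activation, the basic building block such engines schedule and optimize.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// One dense (fully connected) layer with a ReLU activation:
// y[o] = max(0, b[o] + sum_i W[o][i] * x[i]).
template <std::size_t In, std::size_t Out>
std::array<float, Out> denseRelu(const std::array<float, In>& x,
                                 const std::array<std::array<float, In>, Out>& W,
                                 const std::array<float, Out>& b) {
    std::array<float, Out> y{};
    for (std::size_t o = 0; o < Out; ++o) {
        float acc = b[o];
        for (std::size_t i = 0; i < In; ++i)
            acc += W[o][i] * x[i];          // weighted sum of inputs
        y[o] = std::fmax(0.0f, acc);        // ReLU activation
    }
    return y;
}
```

A real engine chains many such layers and spends most of its effort on memory layout and scheduling, which is where the audio-suitability questions in the talk arise.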

Speakers
avatar for Harriet Drury

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd


Wednesday November 15, 2023 11:20 - 12:10 GMT
Track 2, Lower River Room

11:20 GMT

Why you shouldn’t write a DAW
There are surprisingly few DAWs in the music making world, especially when compared to the number of audio plugins on the market. Why is this? Could it be that all the DAWs in existence are perfect and there’s no need for another one? Perhaps there’s another reason…

In this talk we dive behind the UI/UX to take a deeper look at the technology that underpins DAWs. We’ll take a tour of some of the problems they solve, often transparently to the user, and some of the technical concepts they have to navigate in order to keep music makers in the groove.

Finally, we look at what alternatives there might be if you want to build a product that looks a bit like a DAW and why not building from scratch might save you a lot of time and money.

Speakers
avatar for David Rowland

David Rowland

CTO, Tracktion
Dave Rowland is the CTO at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio applications utilising JUCE... Read More →


Wednesday November 15, 2023 11:20 - 12:10 GMT
Track 1, Auditorium

12:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 12:20 - 12:50 GMT
Track 2, Lower River Room

12:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 12:20 - 12:50 GMT
Track 3, Newgate

12:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 12:20 - 12:50 GMT
Track 1, Auditorium

12:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 12:20 - 12:50 GMT
Track 4, Aldgate

12:50 GMT

Lunch
Wednesday November 15, 2023 12:50 - 14:00 GMT

12:50 GMT

Women in Audio Working Lunch

Wednesday November 15, 2023 12:50 - 14:00 GMT

12:50 GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Wednesday November 15, 2023 12:50 - 14:00 GMT
Gather Town

13:05 GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC23 virtual venue on Gather.

Please meet at the ADC23 Gather central meeting point (by the large ADC23 logo in front of the Apple exhibit booth).

Wednesday November 15, 2023 13:05 - 13:30 GMT
Gather Town

14:00 GMT

How to make a successful plugin from scratch as a solo developer
After my well-received appearance on last year's panel about starting your first audio business, I'm presenting the success story of the CrispyTuner, explaining to aspiring indie developers how it's possible to make a successful audio plugin from start to finish. The goal is to inspire by showing the real challenges I faced and how I overcame them.

Speakers
avatar for Marius Metzger

Marius Metzger

Entrepreneur, CrispyTuner
My name is Marius, I'm 24 years old and have a passion for product design, leadership, and, of course, software development. After finishing school at 16 years of age, I got right into freelance software development, with Google as one of my first clients. In 2020, I released a pitch... Read More →


Wednesday November 15, 2023 14:00 - 14:50 GMT
Track 1, Auditorium

14:00 GMT

Three RADical ideas in the art of coding and the coding of art
What if MIDI was a programming language?
What if C++ had built-in audio semantics?
What if you could develop C++ plugins, live in the DAW?

This talk explores these ideas and the development of new technologies designed to blur the lines between music and code, for both artists and developers, and challenge traditional ways of thinking and working.

Drawing on concepts of flow, liveness, and rapid prototyping, the talk will present live demos, and discuss the development of:

Manhattan - a digital audio workstation and embeddable API built on a procedural music engine that integrates sequencing and programming. Used by artists, game composers, and in teaching computational thinking, example applications include crowd-driven music using machine vision, a Unity mini-game featuring a live (and somewhat mortal) orchestra, plus a growing library of famous works recomposed as code that shows the power of modelling music as both pattern and process.

Klang - an open C++ dialect (language extension) for audio, using modern language features (C++14/17) to extend the semantics of C++ to encapsulate audio, providing DSP primitives and types, and adapting the STL's concept of stream objects and operators (e.g. >>) to represent signals. Easier to read, more concise, and easily mapped to visual forms (block diagrams, Max), Klang feels like a new language (in the spirit of SOUL) but, as pure C++, retains the performance, portability, compatibility, and interoperability of the industry standard.
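The stream-operator idea can be illustrated in plain C++ (purely hypothetical types; Klang's actual API may differ): processors compose with operator>> the way streams do in the STL.

```cpp
// Minimal signal and processor types, standing in for Klang's DSP primitives.
struct Signal { float value; };
struct Gain   { float amount; };

// x >> gain routes the signal through the processor, mirroring the
// block-diagram reading order described above.
inline Signal operator>>(Signal s, const Gain& g) {
    return { s.value * g.amount };
}
```

Because operator>> is left-associative, `Signal{0.5f} >> Gain{2.0f} >> Gain{0.5f}` reads exactly like a left-to-right processing chain.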

rapIDE - a C++ IDE inside a DAW plugin, designed for rapid audio prototyping and development of synthesisers and effects. Built on a full clang/LLVM-based toolchain, the plugin's source code can be live edited, rebuilt, reloaded and auditioned without restarting the DAW (or stopping playback). Compatible with C++ and Klang, rapIDE is designed to improve the accessibility, liveness, and immersion of audio programming, for applications in rapid prototyping and teaching, featuring realtime debugging, auto-complete, code sandboxing, and built-in audio analysis.

These technologies will support the new Music Systems Engineering (MuSE) degree programme, in development by Point Blank Music School in collaboration with industry, for launch in 2024.

Speakers
avatar for Chris Nash

Chris Nash

Founder, nash.audio
Chris Nash is a software developer, composer, educator and researcher in things that go beep in the night. Following a PhD on music software design at Cambridge, he has worked on technology and music projects across academia and industry, including for the BBC, Steinberg/Yamaha, and... Read More →


Wednesday November 15, 2023 14:00 - 14:50 GMT
Track 2, Lower River Room

14:00 GMT

Translating research into practice: an exploration of Antiderivative Antialiasing (ADAA) for wavetable synthesis
Anti-aliasing is a crucial consideration in digital audio synthesis. Usually, for an oscillator, techniques like band-limited signals or oversampling are employed to mitigate the problem, but I investigated a somewhat more recent method: Antiderivative Anti-Aliasing (ADAA). My search for a practical ADAA application in wavetable synthesis initially yielded limited results. However, a paper titled "Antiderivative Antialiasing for Arbitrary Waveform Generation," published in August 2022, caught my attention.

The presentation will focus on three aspects:
  • An introduction to ADAA and the algorithm itself
  • Insights into practical implementation and results
  • Reflections on engaging with academic research

By the end of the talk, listeners will know the pros and cons of this technique, and how and when to employ it. Furthermore, we will have illustrated some challenges of working with academic material as a software developer.
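For orientation, here is first-order ADAA applied to a hard clipper, the simplest instance of the technique (the cited paper extends the idea to arbitrary wavetables): the output is the divided difference of the antiderivative between successive input samples, which averages out the aliasing-prone corners.

```cpp
#include <cmath>

class ADAAHardClip {
    float x1 = 0.0f;  // previous input sample
    // The clipping function f and its antiderivative F (continuous at +/-1).
    static float f(float x) { return std::fmax(-1.0f, std::fmin(1.0f, x)); }
    static float F(float x) {
        if (x <= -1.0f) return -x - 0.5f;
        if (x >=  1.0f) return  x - 0.5f;
        return 0.5f * x * x;
    }
public:
    float process(float x) {
        const float d = x - x1;
        // First-order ADAA: y = (F(x) - F(x1)) / (x - x1), falling back to
        // evaluating f at the midpoint when successive samples coincide.
        const float y = (std::fabs(d) < 1.0e-6f)
                            ? f(0.5f * (x + x1))
                            : (F(x) - F(x1)) / d;
        x1 = x;
        return y;
    }
};
```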

Speakers
avatar for Maxime Coutant

Maxime Coutant

Audio Software Engineer, ADASP Group (LTCI)
I'm an audio software engineer in the ADASP group, part of the LTCI public laboratory. Audio enthusiast, hobbyist musician and software addict, I love to share, learn and meet new people! Here at ADC23 I'll present a project I spent many hours on during this last year, hoping to lower... Read More →


Wednesday November 15, 2023 14:00 - 14:50 GMT
Track 3, Newgate

14:00 GMT

Wait-free thread synchronisation with a SeqLock
When developing real-time audio processing applications in C++, the following problem arises almost inevitably: how can we share data between the real-time audio thread and the other threads (such as a GUI thread) in a way that is real-time safe? How can we synchronise reads and writes to C++ objects across threads, and manage the lifetime of these objects, while remaining wait-free on the real-time thread?

This talk is the second in a series of talks about thread synchronisation in a real-time context. In the last episode, we focused on the case where the real-time thread needs to read a sufficiently large, persistent object that is simultaneously mutated on another thread. In this episode, we focus on the reverse case: the real-time thread needs to write the value while remaining wait-free, and while other (non-real-time) threads are reading it.

The traditional solution for this problem in audio processing code today is double buffering. This strategy works well in many cases but, like every algorithm, it comes with tradeoffs. If we look beyond the audio industry, it turns out there is another strategy with more favourable tradeoffs for some use cases: the SeqLock.

We describe the general idea of the SeqLock, discuss the different parts of the algorithm, and show a working reference implementation. It turns out that in order to implement a SeqLock portably and without introducing undefined behaviour, we need to reconcile the algorithm with the C++ memory model, which presents an interesting challenge. In order to make it work and be efficient, we need to be very careful with our use of memory fences, memory ordering, and atomic vs. non-atomic memory accesses. Along the way we will learn useful things about writing lock-free code in Standard C++.

Finally, we compare the tradeoffs between SeqLock and other approaches to this problem, offer some guidelines on which approach to use when, and present a proposal to add the SeqLock algorithm to the C++ Standard.
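The basic shape of the algorithm described above can be sketched as follows. This is a naive single-writer illustration, not the talk's reference implementation: as the abstract notes, the non-atomic read of the payload during a concurrent write is formally a data race in Standard C++, and reconciling that with the memory model is precisely the interesting part of the talk.

```cpp
#include <atomic>
#include <cstddef>

// Naive single-writer SeqLock sketch for a trivially copyable T.
template <typename T>
class SeqLock {
public:
    // Writer side (e.g. the real-time thread): wait-free.
    void store(const T& value) {
        const std::size_t seq = seq_.load(std::memory_order_relaxed);
        seq_.store(seq + 1, std::memory_order_relaxed); // odd: write in progress
        std::atomic_thread_fence(std::memory_order_release);
        value_ = value;                                 // payload write
        std::atomic_thread_fence(std::memory_order_release);
        seq_.store(seq + 2, std::memory_order_release); // even: write complete
    }

    // Reader side (non-real-time threads): retries on concurrent writes.
    T load() const {
        T result;
        std::size_t seq1, seq2;
        do {
            seq1 = seq_.load(std::memory_order_acquire);
            std::atomic_thread_fence(std::memory_order_acquire);
            result = value_;                            // payload read (races!)
            std::atomic_thread_fence(std::memory_order_acquire);
            seq2 = seq_.load(std::memory_order_relaxed);
        } while (seq1 != seq2 || (seq1 & 1)); // torn or in-progress: retry
        return result;
    }

private:
    std::atomic<std::size_t> seq_{0}; // even = stable, odd = write in progress
    T value_{};
};
```

A portable, UB-free version would read and write the payload through relaxed atomic accesses (or `memcpy` over an array of atomics), which is one of the tradeoffs the talk examines.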

Speakers
avatar for Timur Doumler

Timur Doumler

Independent, Independent
Timur Doumler is the co-host of CppCast and an active member of the ISO C++ standard committee, where he is currently co-chair of SG21, the Contracts study group. Timur started his journey into C++ in computational astrophysics, where he was working on cosmological simulations. He...


Wednesday November 15, 2023 14:00 - 14:50 GMT
Track 4, Aldgate

15:00 GMT

A comparison of virtual analog modelling techniques (part 2)
This talk will explore the spectrum of virtual analog modelling techniques including traditional methods (modified nodal analysis, wave digital filters), single-architecture neural network models, and grey-box methods that incorporate both physical modelling and machine learning techniques. Several models of the gain stage from the Boss DS-1 guitar pedal will be provided as a motivating example. The talk will discuss how these methods can generalize over a wide range of circuits, as well as the specific problems that users of each modelling technique can expect to see for different types of circuits.

Speakers

Wednesday November 15, 2023 15:00 - 15:50 GMT
Track 2, Lower River Room

15:00 GMT

Deep learning for DSP engineers: challenges and tricks to be productive with AI in real-time audio
This talk aims to tackle and demystify the process of developing an AI-based musical instrument, audio tool or effect. We want to view this process not from the point of view of frameworks and technical challenges, but from that of the design process, the knowledge required, and the learning curve needed to be productive with AI tools, particularly when approaching AI from an audio DSP background, which was our situation when we started out.

We are going to quickly survey the current applications of AI for real-time music making, and reflect on the challenges that we found, especially with current learning resources. We will then walk through the process of developing a real-time audio model based on deep learning, from dataset to deployment, highlighting the relevant aspects for those with a DSP background. Finally, we will describe how we applied that process to our own PhD projects, the HITar and the Bessel’s Trick.

Speakers
AM

Andrea Martelloni

Student, Queen Mary University of London
FC

Franco Caspe

Student, Queen Mary University of London


Wednesday November 15, 2023 15:00 - 15:50 GMT
Track 1, Auditorium

15:00 GMT

Developing SpatialAudioKit for the Apple ecosystem
SpatialAudioKit is a new high-level Spatial Audio framework for Apple platforms. It is designed to simplify working with technology such as Ambisonics on iOS, macOS and visionOS. This talk will introduce the framework, which was developed as the engine for an app that will also be presented.

As a C++ engineer by trade, this is my first deep foray into the modern Apple developer ecosystem and the Swift programming language and I've tried to utilize the latest and greatest Apple technologies. The talk will give an overview of those technologies, the technical challenges that have been encountered and the solutions found during the creation of SpatialAudioKit and the app that it powers.

Speakers
avatar for Oliver Larkin

Oliver Larkin

Software Engineer, Oli Larkin Plug-ins/Ableton AG
I'm a software engineer at Ableton, but at ADC23 I am presenting one of my many personal audio programming projects. I am the main developer of the iPlug2 C++ audio plug-in framework, creator of VirtualCZ and several other audio plug-ins and apps. I've previously worked with Arturia...


Wednesday November 15, 2023 15:00 - 15:50 GMT
Track 3, Newgate

15:00 GMT

Spectral audio modeling: why did it evolve and do we need it now?
We will trace selected developments in spectral audio signal processing over the past century or so at Bell Labs, CCRMA, and elsewhere. The topic arguably started with the evolution of hearing, and our ears still feed spectral decompositions to the brain. In machine learning, on the other hand, spectral representations are often being skipped in favor of time-domain waveform encodings. Is spectral audio signal processing dead? Arguments will be made for keeping it. One clue is the lack of vestigial inner ears after the appearance of large brains.

Speakers
avatar for Julius Smith

Julius Smith

Professor Emeritus, Music & (by courtesy) EE, CCRMA
Julius O. Smith is a research engineer, educator, and musician devoted primarily to developing new technologies for music and audio signal processing. He received the B.S.E.E. degree from Rice University in 1975 (Control, Circuits, and Communication), and the M.S. and Ph.D. degrees...


Wednesday November 15, 2023 15:00 - 15:50 GMT
Track 4, Aldgate

15:50 GMT

Break
Wednesday November 15, 2023 15:50 - 16:20 GMT

16:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 16:20 - 16:50 GMT
Track 1, Auditorium

16:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 16:20 - 16:50 GMT
Track 4, Aldgate

16:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 16:20 - 16:50 GMT
Track 2, Lower River Room

16:20 GMT

Sponsored Talk TBA
Wednesday November 15, 2023 16:20 - 16:50 GMT
Track 3, Newgate

17:00 GMT

KEYNOTE: Commercialisation of audio technology
Innovation is rampant in audio technology. New signal processing and machine learning solutions are emerging on an almost daily basis, and experimenting with audio tools frequently yields new creative approaches. However, bringing such innovation to market poses many challenges. This talk addresses these challenges while drawing on experience with several high-tech audio start-ups. It focuses on questions and dilemmas concerning, for instance, IP protection, investment, market size and potential, and early-stage growth that are specific to the audio industry. Concrete examples are given of successes and failures where audio developers have attempted to bring new technologies to market.


Speakers
avatar for Joshua Reiss

Joshua Reiss

Professor, Queen Mary University of London
Josh Reiss is Professor of Audio Engineering with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers (including over 50 in premier journals and 6 best paper awards) and co-authored two books. His research has been featured...


Wednesday November 15, 2023 17:00 - 18:00 GMT
Track 1, Auditorium

18:00 GMT

Closing Address
Wednesday November 15, 2023 18:00 - 18:15 GMT
Track 1, Auditorium

18:15 GMT

Evening Meal & Networking
Wednesday November 15, 2023 18:15 - 19:30 GMT

19:30 GMT

Open Mic Night
The ADC Open Mic Night is back! A fun, informal evening with lightning talks, music performances, and some impromptu standup comedy.

If you are attending the ADC on site, you can contribute to the Open Mic Night with a 5-minute talk or performance! Please use the sign-up form here.

This is an event exclusively for on-site attendees. It won't be recorded, published, or streamed online.

Wednesday November 15, 2023 19:30 - 21:00 GMT
Track 1, Auditorium

21:00 GMT

Networking
Wednesday November 15, 2023 21:00 - 22:00 GMT
 