Radio shows, podcasts and DJing are rooted in selecting and controlling tracks at the macroscopic level: their sequence, EQs and levels.
More modern approaches show a trend towards controlling more microscopic elements, such as loops, samples and live elements. We are trying to go one step further.
With Corpus-Based Concatenative Synthesis (CBCS) we control even more microscopic elements of previously finished tracks: the entire sample base from which a track was constructed, and the underlying selection of those samples based on high-level audio features. With this method we explore swapping out the elementary atoms of a track to create something new while maintaining the macroscopic structure and dramaturgic development of the source.
1. Target:
We define a target audio file, which is then segmented and has its audio features extracted:
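The analysis stage can be pictured as follows. This is a minimal numpy sketch, not AudioGuide's actual analysis: fixed-length frames and just two descriptors (RMS loudness and spectral centroid) are simplifications chosen for illustration, and the 440 Hz tone stands in for a real target file.

```python
import numpy as np

def segment(signal, frame_len=2048, hop=1024):
    """Slice a mono signal into overlapping fixed-length frames."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

def describe(frames, sr=44100):
    """Two illustrative per-frame descriptors: RMS loudness and
    spectral centroid in Hz (computed on Hann-windowed frames)."""
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    win = np.hanning(frames.shape[1])
    spec = np.abs(np.fft.rfft(frames * win, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], 1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    return np.column_stack([rms, centroid])

# stand-in target: one second of a 440 Hz tone instead of a real break
sr = 44100
t = np.arange(sr) / sr
target = np.sin(2 * np.pi * 440 * t)
frames = segment(target)
features = describe(frames, sr)   # one [rms, centroid] row per segment
```

Exactly the same segmentation and description is applied to the corpus in the next step, so target and corpus segments live in the same feature space.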
2. Source (Corpus):
Then, we define a source (corpus) that is also segmented and has its features extracted:
3. Target Matching (Concatenation):
We then match each audio segment of the target with the corpus segment whose features are most similar. Which features are used and how heavily they are weighted is up to the user's specification. If I now target the “Amen” break (above) with the stone audio sample (above), we get the following target-matched Amen break, reconstructed from the stone samples:
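The matching step boils down to a weighted nearest-neighbour search in feature space. The sketch below is an illustration of that idea, not AudioGuide's actual selection algorithm; the toy [loudness, centroid] descriptors and the z-score normalization are my assumptions.

```python
import numpy as np

def match(target_feats, corpus_feats, weights):
    """For each target segment, return the index of the closest corpus
    segment under a weighted Euclidean distance in feature space."""
    # z-score each feature over the corpus so the weights, not the raw
    # units (Hz vs. normalized loudness), decide the balance
    mu = corpus_feats.mean(axis=0)
    sd = corpus_feats.std(axis=0) + 1e-12
    t = (target_feats - mu) / sd * weights
    c = (corpus_feats - mu) / sd * weights
    dists = ((t[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# toy descriptors: [loudness, spectral centroid in Hz]
target_feats = np.array([[0.9,  200.0],
                         [0.1, 2000.0],
                         [0.5,  800.0]])
corpus_feats = np.array([[0.8,  250.0],
                         [0.4,  900.0],
                         [0.15, 1900.0],
                         [0.9, 3000.0]])
idx = match(target_feats, corpus_feats, weights=np.array([1.0, 1.0]))
# idx lists, per target segment, which corpus segment to play; the
# result is the chosen corpus segments concatenated in target order
```

Raising one entry of `weights` makes that descriptor dominate the selection, which is how the user's weighting preference enters the match.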
If we swap the corpus for a timpani recording, it sounds like this. First the source, then the result; the target remains the same Amen break as above:
We can also define multiple sample groups, tracks or entire directories as the corpus. Here, for example, is the same Amen target played by both the timpani and the stones:
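Conceptually, combining corpora just means pooling their segments into one search space before matching. A small sketch under the same toy-descriptor assumptions as above (the matrices and names are made up for illustration):

```python
import numpy as np

# hypothetical pre-analyzed corpora: one feature matrix per sound set,
# rows are segments with [loudness, centroid] descriptors
corpora = {
    "timpani": np.array([[0.6,  300.0], [0.2,  500.0]]),
    "stones":  np.array([[0.4, 2500.0], [0.8, 4000.0]]),
}

# pool everything into one search space, remembering provenance
feats = np.vstack(list(corpora.values()))
labels = [name for name, mat in corpora.items() for _ in range(len(mat))]
# the matcher now searches `feats` as a single corpus, and labels[i]
# tells which sound set the chosen segment came from
```

With the pooled matrix, the same nearest-neighbour match decides per segment whether a timpani hit or a stone sound fits the target best.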
Corpus-based concatenative synthesis (CBCS) makes it possible to create new musical structures from thousands of sound snippets. Thanks to a pertinent description of the timbral characteristics of each segment of the corpus, composition or instrumental play becomes navigation through a multi-dimensional space of sound characters. If this navigation is controlled by gesture sensors or tangible objects, CBCS becomes a true instrument with which electronic musicians can reconquer the directness, corporality and expressivity lost in many laptop performances.
This project was developed together with my colleague Sebastian Wolf and supervised by Diemo Schwarz. Source Swap currently uses the audioguide tool by Ben Hackbarth.