Software - The Live Modular Instrument:



Over the past decade-plus I have been developing a software instrument, written in SuperCollider, for live performance with instrumentalists. This software is the main focus of my research, and almost every piece of electronic music found on my website uses it in some way. I use it to perform composed and improvised music with groups like Wet Ink Ensemble, ICE, The Evan Parker Electro-Acoustic Ensemble, and The Peter Evans Quintet. The non-linear design of the software gives me an unusual versatility as a performer, able to approach any musical situation with attentive sensitivity, and to lead or follow in any group.

The source code for my project can be viewed on its GitHub page by following the link below:

Live Modular Instrument GitHub Page



Writing - Laptop Improvisation in a Multi-Dimensional Space:



The design philosophy for the Live Modular Instrument is outlined in my doctoral dissertation, Laptop Improvisation in a Multi-Dimensional Space, which can be found on Columbia's Academic Commons website:

Laptop Improvisation in a Multi-Dimensional Space

ABSTRACT:

Using information theory as a foundation, this paper defines virtuosity in the context of laptop performance, outlines a number of challenges that face laptop performers and software designers, and provides solutions that have been implemented in the author's own software environment. A summary of the argument is that by creating a multi-dimensional environment of Sonic Vector Spaces (see page 17) and implementing a method for quickly traversing that environment, a performer is able to create enough information flow to achieve laptop virtuosity. At the same time, traversing this multi-dimensional environment produces a perceptible sonic language that can add structural signposts for the listener to latch on to in performance. Specifics of the author's personal approach to this problem, a software environment coded in SuperCollider, are then shared. Lastly, Mihály Csíkszentmihályi's concept of flow psychology is applied to the three stages of creation in the laptop performance process - software design, patch design, and performance.



A couple of years ago, I also made a short video outlining the main design features of the software. Check it out below:





Software - The NessStretch (with Alex Ness):



The NessStretch implements a phase-randomized rfft time-stretch algorithm that splits the original sound file into nine discrete frequency bands, using smaller frame sizes for higher frequencies. Starting with a largest frame size of 65536 samples, the algorithm uses the following frequency-band/frame-size breakdown (assuming 44100 Hz input):

0-86 Hz : 65536
86-172 Hz : 32768
172-344 Hz : 16384
344-689 Hz : 8192
689-1378 Hz : 4096
1378-2756 Hz : 2048
2756-5512 Hz : 1024
5512-11025 Hz : 512
11025-22050 Hz : 256
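
For concreteness, here is a small SuperCollider sketch of the arithmetic behind that table: each band's frame size halves as its frequency range doubles. This only reproduces the bookkeeping above; it is not the stretch algorithm itself.

(
// Reproduce the band/frame-size table for a 9-band NessStretch.
// Each band's frame size halves as its frequency range doubles.
var sr = 44100, maxFrame = 65536, numBands = 9;
numBands.do { |i|
    var hi = sr / (2 ** (numBands - i));   // top of band i
    var lo = if(i == 0) { 0 } { hi / 2 };  // bottom of band i
    var frameSize = maxFrame / (2 ** i);   // frame size halves per band
    "% - % Hz : %".format(lo.round.asInteger, hi.round.asInteger, frameSize.asInteger).postln;
};
)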

The NessStretch is a refinement of Paul Nasca's excellent PaulStretch algorithm. PaulStretch uses a single frame size throughout the entire frequency range. The NessStretch's layered analysis bands are a better match for human frequency perception, and do a better job of resolving shorter, noisier high-frequency sounds (sibilance, snares, etc.).

The NessStretch is implemented in Rust, Python, and SuperCollider. The source code for our project can be viewed on its GitHub page by following the link below:

NessStretch GitHub Page

Our ICMC paper is here:

The NessStretch: This Paul Stretch Goes to 9





Writing - Multi-mapped Neural Networks for Control of High Dimensional Synthesis Systems:



This paper outlines NN Synths, a software instrument that uses multi-mapped regression-based deep learning neural networks to control multiple high dimensional synthesizers. The paper discusses the reasoning behind the use of high-dimensional synthesizer algorithms and then presents the designs of two individual software synthesizers in use in the NN Synths instrument. It then outlines the larger ecosystem of the multi-mapped performance space: showing why the archipelagic nature of these synthesizers requires the user to have rapid access to multiple different mappings for expressive performance, and how easy switching between multi-mapped synths facilitates expressive traversal of the larger multi-dimensional performance space.

Link to the Paper: Multi-mapped Neural Networks for Control of High Dimensional Synthesis Systems

Video of the Paper Presentation:



Talk - Software Musical Instrument Design for Machine Learning and Machine Listening



This talk outlines my Neural Network Synthesizer, an early version of which is linked below.





Software - NN_Synth_1:



NN_Synth_1 is a cross-feedback synthesis engine built in SuperCollider which originally used a Keras/TensorFlow neural network (it now uses native FluCoMa (www.flucoma.org) UGens in SuperCollider) to map the four-dimensional vector of two x-y pads to a 16-dimensional vector of synthesis parameters. The model loads six simultaneous neural networks, each giving a specific mapping from the x-y controls to the synth. The user can quickly switch between active neural nets and can also make their own mappings by training each NN in the system individually.
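
To give a flavor of how the multi-mapping works in the FluCoMa version, here is a minimal SuperCollider sketch. The layer sizes, variable names, and buffer handling are illustrative assumptions, not the actual patch:

(
// A minimal sketch of the multi-mapped idea with FluCoMa's SC objects.
// Layer sizes and names here are assumptions, not the actual patch.
~nets = 6.collect { FluidMLPRegressor(s, hiddenLayers: [8, 8]) };
~active = 0;                 // index of the currently active mapping
~in = Buffer.alloc(s, 4);    // two x-y pads -> 4 control values
~out = Buffer.alloc(s, 16);  // 16 synthesis parameters

// After training each regressor on its own FluidDataSet pair, every
// controller move is passed through whichever net is active:
~map = { |padValues|
    ~in.setn(0, padValues);
    ~nets[~active].predictPoint(~in, ~out, {
        ~out.getn(0, 16, { |params| params.postln }); // send params to the synth here
    });
};

~active = 3;  // switching nets instantly swaps the entire mapping
)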

The source code for the original project can be viewed on its GitHub page by following the link below:

NN_Synth_1 GitHub Page

The current version of this software is embedded in my Live Modular Instrument (see above).

In the summer of 2020, I gave this talk at the FluCoMa Plenary. Though I have since abandoned the sample-player version of the NN Synths that it describes, I think it is a novel and interesting watch:



Here is my 2021 FluCoMa Plenary talk, discussing my 50-dimensional, DX7-based, neural-net-controlled synth:




Software - Maths:



A Faust-based emulation of Make Noise's popular Maths Eurorack module. This is my only substantial project in Faust, which is an amazing programming language for low-level DSP development. The SuperCollider build is provided, but Faust makes it easy to build this for Pd, Max, or almost any other platform.

The source code for my project can be viewed on its GitHub page by following the link below:

Maths GitHub Page



Software - Convolve:



Sometimes you just want to convolve two audio files! The Convolve quark does just that, returning a buffer as a result of the operation.
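
Here is a hypothetical usage sketch; the class and argument names are my assumptions, so check the quark's help file for the real interface:

(
// Hypothetical usage sketch -- names are assumptions; see the quark's
// help file for the actual interface.
var dry = Buffer.read(s, "~/sounds/voice.wav".standardizePath);
var ir = Buffer.read(s, "~/sounds/hall_ir.wav".standardizePath);
// (in practice, wait for both buffers to finish loading first)
Convolve(dry, ir, action: { |result| result.play }); // result is a new Buffer
)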

The source code for my project can be viewed on its GitHub page by following the link below:

Convolve GitHub Page



Software - FluidNMFStretch:



An experiment combining FluCoMa's NMF (non-negative matrix factorization) decomposition with extreme time stretching.

The source code for my project can be viewed on its GitHub page by following the link below:

FluidNMFStretch GitHub Page



Talk - Analog Approaches to Digital Instrument Design



This talk, from the 2018 FluCoMa Plenary in Huddersfield, UK, outlines a digital software instrument based on an audio feedback system that was originally (and more intuitively) built on a modular analog synthesizer.





Interview - Creativity Conversation: Sam Pluta & Peter Evans



This is a pre-concert talk recorded at Emory University in the spring of 2018. We are joined onstage by Emory University faculty members Dwight Andrews and Adam Mirza. The conversation delves deep into the importance of history, technique, and spirituality in improvisational practice.





Writing - Maximize Information Flow: How to Make Successful Live Electronic Music:



In 2008 I wrote an article for NewMusicBox that was, in many ways, a predecessor to my dissertation. Entitled "Maximize Information Flow: How to Make Successful Live Electronic Music", the article discusses human-computer interaction and its relationship to the perception of audio-visual streams of information. The article is still available on NewMusicBox:

Maximize Information Flow: How to Make Successful Live Electronic Music



Software - PV_Control:



Two of my works, the composition/installation hybrid Broken Symmetries and the installation American Idols, use software-controlled feedback as their compositional basis. I originally wrote the software for these works in SuperCollider, but for efficiency I ported the algorithm to a C++ plugin. Please find the code and a compiled macOS plugin here:

PV_Control GitHub Page
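
The core idea, a feedback loop whose gain the software continuously regulates, can be sketched in a few lines of vanilla SuperCollider. This shows only the general principle, using simple amplitude tracking; the actual PV_Control plugin operates on spectral frames:

(
// General principle only: a mic-to-speaker feedback loop whose gain is
// ducked as the level rises, holding the feedback at the edge of
// stability. PV_Control itself works spectrally, not on raw amplitude.
{
    var in = SoundIn.ar(0);
    var level = Amplitude.kr(in, 0.01, 0.3);    // track the input level
    var gain = (0.5 - level).clip(0, 0.5) * 4;  // lower the gain as level rises
    Limiter.ar(in * gain, 0.9)                  // safety limiter
}.play;
)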



Writing - Interview with Sound American:



I have given a number of interviews to various online magazines over the past few years. One particularly excellent conversation, about running Carrier Records, was with Nate Wooley of Sound American. That article can be found here:

SA10: Sam Pluta and Carrier Records



Writing - Musicomputation: Teaching Computer Science to Teenage Musicians:



In 2008 I was part of an incredible summer program whose goal was to teach computer programming to young musicians. The three-week course used Processing as its programming language and based its teaching of data structures, iteration, looping, etc., on music. A highlight of the course for me was deconstructing and then reconstructing Morton Feldman's Triadic Memories using a finite state machine. The paper outlining the goals and achievements of our course is found here:

Musicomputation: Teaching Computer Science to Teenage Musicians