Background

We think a crucial part of the musical experience is the instant feedback loop between a musician’s hands and their ears; this is how music is made. The challenge with most machine learning systems is that they work asynchronously: input something, wait, get output. They are typically optimized to process large batches with high throughput, not to stream audio with low latency.
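To make that trade-off concrete, here is a minimal sketch (in Python, assuming a 48 kHz sample rate, which the text does not specify) of how the size of the audio block handed to a model sets a floor on the delay between playing a note and hearing the result:

```python
# Minimal sketch: the audio block a model consumes at once sets a hard
# floor on the round-trip delay, before any model compute time is added.
SAMPLE_RATE = 48_000  # Hz, assumed for illustration

for block_size in (16_384, 2_048, 256):  # samples per block
    latency_ms = 1_000 * block_size / SAMPLE_RATE
    print(f"{block_size:>6} samples/block -> {latency_ms:6.1f} ms minimum delay")
```

A 16,384-sample block already means a third of a second of buffering, which is unplayable; small blocks are what make the instant feedback loop possible.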

Machine learning models like ours require a software stack that can be unwieldy to get up and running. Instead of shipping a VST or standalone application and relying on the user to have a computer built for machine learning, with a specific graphics card, CUDA version, and drivers installed, we packaged the entire stack in a little box. No installs or updates necessary.

We designed our models to run as close to real time as possible and put them into dedicated hardware to open up new forms of expression with machine learning. The result is unlike any other instrument.

Specs

At the core of the processor is our style-transfer algorithm, trained on vocal and instrumental recordings. These models can turn one instrument into another in near real time. For example, a guitarist could play their guitar into the hardware and the output could sound like a violin, or a singer could sing into the hardware and it could produce the sound of a gamelan orchestra.
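As a rough illustration of what near real-time operation looks like, here is a hedged sketch of a block-by-block processing loop: audio comes in, each small block is run through a model, and the transformed block goes straight back out. The sample rate, block size, and the identity stub standing in for the style-transfer network are all assumptions; the text does not describe the actual model or software stack.

```python
"""Sketch of a streaming audio loop around a PyTorch model.
The Identity stub below stands in for a trained style-transfer network
(e.g. guitar -> violin); it is a placeholder, not the real model."""
import sounddevice as sd
import torch

SAMPLE_RATE = 48_000   # assumed sample rate
BLOCK_SIZE = 1_024     # ~21 ms of audio per block at 48 kHz

# Placeholder for a preloaded style-transfer model.
model = torch.nn.Identity().eval()

def callback(indata, outdata, frames, time, status):
    """Run the model on each incoming block and write the result straight out."""
    if status:
        print(status)
    with torch.no_grad():
        block = torch.from_numpy(indata[:, 0]).float()
        outdata[:, 0] = model(block).numpy()

# Full-duplex stream: instrument in, transformed audio out.
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
               channels=1, callback=callback):
    sd.sleep(10_000)  # process audio for ten seconds
```

At this block size the buffering alone costs roughly 21 ms, so the model itself has to finish each block well inside that window for the instrument to feel responsive.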

The hardware is a single-tasker with no extra features. It has a TRS/XLR combo input and a TRS output. A handful of models currently come preloaded on the hardware; simply select a model and start playing.

The hardware is not currently for sale. We have a small number of prototypes that we are actively developing. If you have a specific project or need for one of these instruments, please fill out our form and get in touch.

Request Beta Access