Factorsynth For Max For Live Uses Machine Learning To Create New Sounds!


Factorsynth, a synthesizer for Max For Live, uses machine learning to deconstruct your samples into new elements and reconstruct them, creating new and interesting sounds.

J.J. Burred has recently released Factorsynth, a synthesizer for Max For Live that uses machine learning to create new sounds. In this process, the M4L device deconstructs and reconstructs your samples in new and interesting ways: it transforms your audio input into a set of temporal and spectral elements that can later be rearranged into new sounds with new textures.

Overview

Factorsynth is a Max For Live device created by J.J. Burred that uses machine learning to decompose sounds into sets of elements. Once these elements have been obtained, you can modify and rearrange them to remix existing clips, remove notes, randomize patterns, and create complex textures with only a few clicks. Unlike traditional audio effect devices, which take the track’s audio input and generate output in real time, Factorsynth is a clip-based device.

It works on audio clips from your Live set: once a clip has been selected and loaded into Factorsynth, it can be decomposed into elements. The decomposition process is called factorization, because it is based on a technique called matrix factorization.
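Factorsynth's exact implementation isn't detailed here, but the general idea behind matrix factorization of audio can be sketched in a few lines. The following is a hypothetical Python example using scipy and scikit-learn's NMF; the file names, component count, and the shuffle step are illustrative assumptions, not the device's actual behavior.

```python
# Illustrative sketch: decompose a clip's spectrogram with non-negative
# matrix factorization (NMF), then rearrange the elements and resynthesize.
# Not Factorsynth's actual code -- just the general matrix-factorization idea.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

sr, audio = wavfile.read("clip.wav")        # hypothetical input file
audio = audio.astype(np.float64)
if audio.ndim > 1:                          # mix to mono for simplicity
    audio = audio.mean(axis=1)

# Short-time Fourier transform: the magnitude goes into the factorization,
# the phase is kept aside for resynthesis.
f, t, Z = stft(audio, fs=sr, nperseg=2048)
magnitude, phase = np.abs(Z), np.angle(Z)

# Factorize the magnitude spectrogram V (freq x time) into
# W (spectral elements) and H (temporal activations): V ~= W @ H.
n_elements = 8
model = NMF(n_components=n_elements, init="nndsvd", max_iter=400)
W = model.fit_transform(magnitude)          # freq x elements
H = model.components_                       # elements x time

# "Remix" by rearranging the elements: shuffle which temporal activation
# drives which spectral shape, then rebuild the spectrogram.
rng = np.random.default_rng(0)
V_new = W @ H[rng.permutation(n_elements)]

# Resynthesize with the original phase and write the result.
_, audio_new = istft(V_new * np.exp(1j * phase), fs=sr, nperseg=2048)
wavfile.write("clip_remixed.wav", sr, audio_new.astype(np.float32))
```

Each column of W is a spectral shape and each row of H is its activation over time, which is why removing, muting, or swapping individual elements lets you remove notes or change patterns in the original clip.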

Factorsynth is available now for 49€ and currently supports Mac only. A Windows version is planned for release in the second half of 2018.

More information here: Factorsynth
