
Time Stretching - Not so different

victor
Thu, Oct 30, 2008 @ 11:41 AM
Can you hear the difference between the time stretching algorithms used in, say, ACID vs. FL Studio vs. Ableton? Really? Because it turns out they (and about 20 other DAWs) are all using the same one.

Or at least they license the same one.

I should mention that just because someone (like Sony) licenses some software (like elastique) doesn’t mean they actually use it. Even if they do, many of these DAWs offer multiple stretching algorithms per clip/sample, and without further investigation it’s impossible to tell which setting actually invokes elastique versus some other stretching software they’ve licensed or written.

Still, you’re not crazy if you think stretching algorithms stalled out around ACID 2.0.
spinmeister
Thu, Oct 30, 2008 @ 1:34 PM
I think quite a bit has happened in time stretching over the last few years.

I can mostly speak from experience with Cubase (from 5.1 VST through the SX series to the current Cubase 4) and from using a few other pieces of software, including Celemony Melodyne and Antares Auto-Tune.


In Cubase, for example, several different types of algorithms have lately become available, typically falling into three categories:

A - optimized for speed (low CPU processing power required); these algorithms are typically very good for quick previewing and real-time processing, but of rather low quality on material that’s typically difficult to stretch (vocals, polyphonic material, full mixes). This is the category that may have changed little since ACID 2.0.


B - medium quality; often good enough for solo (non-polyphonic) material. On fast modern machines some of these can be used in real time.

C - highest quality and more serious CPU consumption. These typically can’t be applied in real time even on rather fast modern machines, but some impressive results can be achieved even on complex polyphonic material. In this category, I know the MPEX algorithms have improved significantly over the years (at least to my ears!).
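
To make the trade-off behind these categories concrete, here’s a minimal sketch (my own, in Python/NumPy) of the kind of plain overlap-add stretch that the fast “category A” algorithms are built around. It’s purely illustrative: the frame size, hop and window are arbitrary choices for the example, not what any of these DAWs actually ships.

```python
# Minimal overlap-add (OLA) time-stretch sketch in NumPy, roughly the kind of
# cheap, real-time-friendly approach in "category A" above. Purely illustrative:
# frame size, hop and window are arbitrary choices, not anything a DAW ships.
import numpy as np

def ola_stretch(x, factor, frame=2048, hop=512):
    """Stretch mono signal x by `factor` (>1.0 = longer) with plain overlap-add.

    Analysis frames are read `hop / factor` samples apart but written `hop`
    samples apart, so the output ends up roughly `factor` times longer. No
    phase alignment is done, which is why this class of algorithm warbles on
    vocals, polyphonic material and full mixes.
    """
    window = np.hanning(frame)
    analysis_hop = hop / factor
    n_frames = int((len(x) - frame) / analysis_hop)
    out = np.zeros(n_frames * hop + frame)
    norm = np.zeros_like(out)                      # window-overlap normalisation
    for i in range(n_frames):
        start = int(i * analysis_hop)
        grain = x[start:start + frame] * window
        out[i * hop:i * hop + frame] += grain
        norm[i * hop:i * hop + frame] += window
    norm[norm < 1e-8] = 1.0                        # avoid divide-by-zero at the edges
    return out / norm

# Example: stretch one second of a 440 Hz test tone to roughly 1.5x its length.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(len(tone), len(ola_stretch(tone, factor=1.5)))
```

Roughly speaking, the higher-quality (and hungrier) categories layer phase alignment, transient handling and so on on top of this basic grain-shuffling idea, which is where the extra CPU goes.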

And then there’s the stuff that’s highly optimized for vocals, like Celemony Melodyne and Antares Auto-Tune, which can achieve some pretty astounding results with vocals (and some other material, too). Both of those can do serious time-stretching work as well.

Finally, I want to give a special mention to Paul’s Extreme Sound Stretch, a free standalone piece of software, which can do some pretty amazing extreme stretches, as showcased by synthetic1. I think it’s Windows-only, but it’s open source.
 
essesq
Thu, Oct 30, 2008 @ 2:42 PM
I too have used Paul’s Extreme Sound Stretch elsewhere and even here. It yields some pretty neat results. And the shout-out definitely goes to Synthetic for starting the time stretching mania what seems like a million years ago :-).

As for the rest of the posting above, I’ll leave that for you gearheads to translate :-).
 
Subliminal
Thu, Oct 30, 2008 @ 11:40 PM
Quote (essesq): I too have used Paul’s Extreme Sound Stretch elsewhere and even here. It yields some pretty neat results. And the shout-out definitely goes to Synthetic for starting the time stretching mania what seems like a million years ago :-).

I guess you mean here. ;-) (I am going to check it out this evening.)

Anyway, another satisfied user of Paul’s Extreme Sound Stretch here. I’ve used it a couple of times, including on a flute solo in a piece I (and hopefully FB) am working on at the moment. I don’t know if it was Synthetic who inspired me to check it out or someone who in turn was influenced by Synthetic. A great tool that does not work on every source (as I experienced last night), but that is probably a good thing, because otherwise it would be too easy.
 
victor
Thu, Oct 30, 2008 @ 11:57 PM
Great summary (!)

Still, I wonder how many genuinely different pieces of code are doing this. In other words, I figured there were shared algorithms (there really are only so many known mathematical formulas in signal processing that apply to the problem), but I didn’t realize that all those apps were actually running the exact same code in so many cases.
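
For what it’s worth, the shared math I have in mind is mostly the classic phase-vocoder recipe. Here’s a rough sketch of its core phase-update step, with made-up variable names, and no claim that this is what elastique or anyone else actually ships:

```python
# One step of the textbook phase-vocoder phase update: estimate each FFT bin's
# true frequency from the phase advance between analysis frames, then re-advance
# the phase by the synthesis hop. Illustrative only; real stretchers add peak
# locking, transient detection, formant handling, etc. on top of this.
import numpy as np

def propagate_phase(prev_analysis_phase, cur_analysis_phase, prev_synthesis_phase,
                    bin_freqs, analysis_hop, synthesis_hop):
    """All phases are in radians; bin_freqs is 2*pi*k/N (radians per sample)."""
    expected = bin_freqs * analysis_hop                         # advance if each bin were exact
    deviation = cur_analysis_phase - prev_analysis_phase - expected
    deviation = np.mod(deviation + np.pi, 2 * np.pi) - np.pi    # wrap to [-pi, pi)
    true_freq = bin_freqs + deviation / analysis_hop            # instantaneous frequency
    return prev_synthesis_phase + true_freq * synthesis_hop     # phase for the output frame
```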

It’s sort of like when I discovered that all the audio interfaces out there were licensing one of three A/D converters in the commercial audio world (sorry, no link ready, so feel free to call BS).