Quantizing & Negative Track Delays Explained

This is a great video and I really love the idea of working like this, in theory, but I'm finding a few things confusing in practice.

1. This only works if you draw in the notes on a grid, right? It doesn't work if you're an actual keyboard player, because how in the world do you even deal with something like CSS legato with the crazy delays if you're trying to play a line in, in real time? My current solution is to play the line on the track with the Sustain articulation and then transfer the MIDI to the track that contains the legato articulation with a different negative track delay. But...

2. The CSS legato delays are velocity dependent. And, these aren't trivial changes in delay time. The lowest velocities are 333ms, the middle is 250ms and the fast is 100ms and they're just instant changes, depending on what velocity you hit. How do you deal with that? Do you actually have three different tracks for different velocity legatos just to have three different delay offsets? And, you may want to have different legato types within one phrase. I don't see any way to avoid moving individual notes in this case. And, as you point out, you're already moving the initial notes in every phrase, because they aren't delayed as much as the legato notes, so the note moving is already happening in every phrase. Ugh. So, as nice as the negative track delay solution is, it still isn't any kind of "set it and forget it" kind of system. BTW, I'm not criticizing anything about your advice. It's great and you present it in a fun and clear manner. This is just the state of affairs in our modern sampling world, I guess.

I'm still a relative newbie compared to many of the people on this forum, so I learn a lot, especially from the pros around here who have so much experience. When I first started making a template, I tried to use an articulation map for every instrument that allowed me to get all the articulations on one track because I thought that seemed most analogous to the real world. One instrument could do all these things at any given time, so I should be able to mimic that. But, for my workflow, I quickly found that it was better to separate some things onto different tracks. Now, I'm thinking of splitting more things out, but I don't want this to get ridiculous. I do like to see things in notation at times, so I also want to keep the parts close enough to the grid to get a reasonable transcription, so track delays become invaluable in that situation. But I find that I'm always trying to find the balance between being able to play parts in musically, with some expression, while also being able to organize things logically without having the confusion and clutter of zillions of tracks for every little nuance. I suppose everyone is going through all these same problems, but I just want to make sure I'm not missing any obvious solutions.
 
I personally like to split everything out, but for folks that don't, I've seen them just split out the legato from everything else, since the legatos often have a much longer negative delay.
 
1. This only works if you draw in the notes on a grid, right?
You can play in the notes and then just quantize them to the grid.
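To make that concrete, here's a minimal sketch of grid quantization, assuming note start times measured in ticks at 960 PPQ (the `quantize` helper and the tick values are purely illustrative):

```python
def quantize(start_ticks, grid_ticks):
    """Snap a note start time to the nearest grid line."""
    return round(start_ticks / grid_ticks) * grid_ticks

# At 960 PPQ, a 16th-note grid is 240 ticks.
played = [2, 241, 475, 730]                # loosely played-in note starts
print([quantize(t, 240) for t in played])  # -> [0, 240, 480, 720]
```

Once everything sits on the grid, the track's negative delay handles the rest non-destructively.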

The CSS legato delays are velocity dependent. And, these aren't trivial changes in delay time. The lowest velocities are 333ms, the middle is 250ms and the fast is 100ms and they're just instant changes, depending on what velocity you hit. How do you deal with that?
The new CSS update has a standardized legato mode of 150ms, which makes things much easier. For the "expressive" legato mode, all you have to do is use a simple key command to set all the notes to the same velocity so that they all get a consistent delay time. Even better, I use a Cubase MIDI transform on the track which automatically forces every played note to the velocity I want. So, for expressive, I make sure it's the velocity that gives all notes a 250ms delay. So, yes, two sets of tracks, one at -150 and the other at -220, and it's not really a big deal to do this. I have never needed two legato speeds within a single phrase.
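As a rough sketch of why normalizing velocity helps: CSS's expressive legato picks a transition speed (and therefore a delay) from note-on velocity, so forcing every note to one velocity, as the MIDI transform does, yields a single consistent delay. The 333/250/100 ms figures come from the discussion above; the velocity break points and helper names here are assumptions, not the library's documented values:

```python
def css_legato_delay_ms(velocity):
    """Velocity-dependent legato delay (break points are assumed)."""
    if velocity < 64:
        return 333   # slow transition
    if velocity < 100:
        return 250   # medium transition
    return 100       # fast transition

def normalize_velocity(notes, velocity=80):
    """Force every (start, pitch, velocity) note to one velocity,
    like a MIDI transform insert on the track."""
    return [(start, pitch, velocity) for start, pitch, _ in notes]

notes = [(0, 60, 30), (240, 64, 110), (480, 67, 72)]
delays = {css_legato_delay_ms(v) for _, _, v in normalize_velocity(notes)}
print(delays)  # -> {250}: one delay, so one track delay covers the phrase
```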

But, for my workflow, I quickly found that it was better to separate some things onto different tracks.
There's no right or wrong, there are pros and cons to both methods, but separate tracks works best for me, and I've now delivered stuff to an orchestrator and played by a live orchestra and it worked great.
 
By the way, I think having marcato longs and shorts, portatos, spiccatos and staccatos, pizz., etc., all on separate tracks has led me to use those articulations a lot more in my music than if I just had a "Violins 1" track. It leads to a lot more experimentation for me. Seriously, marcatos and portatos - there's a lot of magic in those articulations that people overlook.
 
I remember the first time I got hold of a portato articulation (Berlin Inspire 2, I think) - lovely. I think I'll go fire it up now...
 
There's no right or wrong, there are pros and cons to both methods, but separate tracks works best for me, and I've now delivered stuff to an orchestrator and played by a live orchestra and it worked great.
Yes, of course! There's no question that you know what you're doing and have a method that works well for you. I totally respect your opinion on these things, which is why I posed the question here in the first place. I'm the guy who's still trying to figure out what works for me.

The problem with quantizing after you play is that some of these articulations are so slow that the quantization will land a whole 16th too early, because I naturally start playing that far in front of the beat when I have a sound with a slow attack. But these kinds of situations are unusual. Normally, quantization works fine.

I have macros so I can grab a bunch of notes and change the articulations on the fly from the computer keyboard. But in some cases that only goes so far, so I'm splitting things up when it isn't working for me. Legato transitions seem to need the most negative track delay, so those definitely need their own track(s) if I'm ever going to get useful notation. I've been using the low latency legato in CSS, but that's a relatively new feature, so I wondered what everyone did before that. Can this new legato mode pretty much replace the original legato mode? I'm new to CSS, and this is the first library I've had with such a long delay. But I recently picked up Sonixinema's Intimate Legato Cello, which is beautiful but also has a pretty long legato delay. Both of these libraries have nicer legato transitions than the string libraries I used before, though, so there's a payoff for the extra hassle. I just need to figure out the best way to deal with libraries that have this issue.

Thanks to everyone for the helpful replies. I know there's no right or wrong way to do this. But it's always good to get different points of view since it gives me different possibilities to try out.
 
The different legato speeds were the reason I didn't use CSS legatos. Now (after 1.7.1) I use them from time to time. I have longs, legatos, and one-shots on separate tracks. Actually, the tracks are just copies of the full library, and I keyswitch each one the way I like.
For shorts I like to play parts with one articulation and later change the articulations in the editor. Feels less stitched together than using different tracks for every articulation to me. And I love the new Legato Marcato Overlay. It works like a complete library without key switching.

I often copy notes from a piano sketch into the instrument tracks and edit them depending on the instrument (velocity, length, overlaps and CC). For this workflow the negative track delay is perfect.
 
Thanks, Saxer. I think you kind of nailed the dilemma for me. I think I'm coming from the live keyboard playing mindset and that's what's throwing me. I want to play the line in the way I intend it to sound in real time. I've been doing that most of my life, so that feels natural to me. This is something I love about the Infinite Brass/WW libraries. But, the approach with switches for different articulations is equally valid and even necessary, since the keyboard is a poor substitute controller if you want to be able to recreate the variety of things a string player can do.

I think I'll probably adopt a method more like what you're doing. For instance, use the non-legato sustain sound for long note phrases, since it has low latency, play in the line and then move it to the desired legato articulation lane with the correct negative track delay and do the final editing there. But, I'm going to try to split out the articulations only when I have to, since I'm OK with using articulation maps to switch between articulations as long as there aren't big timing issues.

I'm sort of in awe of how much David splits everything out. It does encourage a kind of microscopic view of each note in a phrase which scares me a little, maybe just because it's so different from the way I do things now. OTOH, his mockups sound great and he knows way more about these things than I do, so I may wind up going that direction as I get more experience.

Thanks again for all the great input!
 
I'm sort of in awe of how much David splits everything out. It does encourage a kind of microscopic view of each note in a phrase which scares me a little, maybe just because it's so different from the way I do things now. OTOH, his mockups sound great and he knows way more about these things than I do, so I may wind up going that direction as I get more experience.
The main reason I don't use keyswitches is that they always manage to get messed up at some point: I forget to record one, or record it too early. And since I started doing film scoring, I'm copying and pasting a lot more, and keyswitches get lost or chopped off. The last thing you want when you're printing stems at 3am is realizing something played a marcato instead of a staccato and you have to redo it.

With the separate track setup I know everything’s going to play back properly every time no matter what. And while expression maps can help solve this as opposed to keyswitches, you can’t set different delay times per articulation. This might work for some patches though, if the delay times are all the same for the shorts for example.

Anyway, for me right now, separate tracks is the most sure-fire way, although it is more work to program initially. I play everything in with a staccato and then drag the notes that are supposed to be marcato or staccatissimo to those tracks. You can see this in the brass section of the Ark 5 brass demo video.
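Since expression maps can't apply a different delay per articulation, the manual equivalent is to shift each note earlier by its articulation's delay yourself. A sketch of that arithmetic, with purely illustrative delay values:

```python
# Illustrative per-articulation delays in ms (not actual library values).
DELAY_MS = {"legato": 250, "marcato": 150, "staccato": 60}

def compensate(notes, tempo_bpm=120, ppq=960):
    """Shift each (start_ticks, pitch, articulation) note earlier by its
    articulation's delay, converted from ms to ticks at the given tempo."""
    ticks_per_ms = ppq * tempo_bpm / 60000
    return [(start - round(DELAY_MS[art] * ticks_per_ms), pitch, art)
            for start, pitch, art in notes]

print(compensate([(960, 60, "legato"), (1920, 62, "staccato")]))
# -> [(480, 60, 'legato'), (1805, 62, 'staccato')]
```

Per-track negative delay does exactly this shift for you, non-destructively, for every note on the track, which is why one delay value per track is the constraint being discussed.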
 
Yes, the part where you're actually doing film scoring and I'm not explains why I have the luxury of horsing around with different workflows and you have to have one that's rock solid! I'm trying to not be overly embarrassed about that....

Thanks again for taking time out to explain why you work this way. It makes a lot of sense. I'm already completely on board with this idea of picking an articulation to play in a part and then moving that take to a different articulation depending on what result I'm after. This is great!
 
The main reason I don't use keyswitches is that they always manage to get messed up at some point: I forget to record one, or record it too early. And since I started doing film scoring, I'm copying and pasting a lot more, and keyswitches get lost or chopped off. The last thing you want when you're printing stems at 3am is realizing something played a marcato instead of a staccato and you have to redo it.

With the separate track setup I know everything’s going to play back properly every time no matter what. And while expression maps can help solve this as opposed to keyswitches, you can’t set different delay times per articulation. This might work for some patches though, if the delay times are all the same for the shorts for example.

Anyway, for me right now, separate tracks is the most sure-fire way, although it is more work to program initially. I play everything in with a staccato and then drag the notes that are supposed to be marcato or staccatissimo to those tracks. You can see this in the brass section of the Ark 5 brass demo video.
I adopted your method and split all my articulations in my entire template. I have some ensemble patches for sketching. Do you find writing with staccato easier than longs?
 
I adopted your method and split all my articulations in my entire template. I have some ensemble patches for sketching. Do you find writing with staccato easier than longs?
I suppose that depends, sometimes I sketch on piano, but it could also be string sustains, or staccatissimo, or an orchestra tutti patch.
 
Some of you know about the negative track delay database I started. I decided to make a companion video explaining quantizing and negative track delays because it has massively sped up my workflow and is something I rarely see talked about. I hope you find this helpful.


Many thanks for the video! Long story short, I did not want to give up using an articulation manager, but do see the importance of negative delays and on-grid. So I came up with a way to do both.

I did it by routing the single MIDI track that uses the articulation manager to a set of corresponding VSTi articulation rendering tracks, each of which has its own negative delay. So, a one-to-many approach.

In this case I'm using Reaper and Tack's excellent Reaticulate articulation manager. In the Reaticulate bank definition file I set each articulation to output on a specific MIDI bus/channel, and in Reaper I routed the MIDI track's output to each VSTi articulation rendering track using the bus/channel I defined in the Reaticulate file.

Since Reaper is able to limit MIDI bus scope, bus 1 and its 16 channels can be reused over and over for each instrument's MIDI track routed to its specific VSTi articulation tracks. Just use bus 2 and its 16 channels in addition when an instrument has more than 16 articulations. This lets Reaticulate keep cloning instruments in the config file (the MIDI bus/channel definitions get cloned too) with no MIDI conflicts on simultaneous playback.

I just finished setting up BBCSO Pro strings and testing it, and it works great. On my i9 13900K I can play the entire strings/leaders section with all 200-something instances of BBCSO active and preloaded into memory, and it's running at about 17% CPU with maybe 4 GB or so of memory used for samples. So this machine will be able to play the entire orchestra in real time with CPU and memory headroom to spare.

So far, it's looking like a case of having your cake and eating it too: an articulation manager on a single MIDI track per instrument, plus individually set negative delays on the VSTi articulation rendering tracks, per articulation. This approach should work with any DAW that supports MIDI busses and any articulation manager that lets the user define access to those bus/channels.
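The bus/channel assignment described above can be sketched as a simple mapping, where articulation n lands on a channel of bus 1 until the 16 channels run out, then overflows onto bus 2 (the function name and 0-based indexing are my own):

```python
def bus_channel(art_index):
    """Map a 0-based articulation index to a (bus, channel) pair:
    channels 1-16 on bus 1, overflowing onto bus 2, and so on."""
    return art_index // 16 + 1, art_index % 16 + 1

# First articulation, last channel of bus 1, first channel of bus 2:
print(bus_channel(0), bus_channel(15), bus_channel(16))
# -> (1, 1) (1, 16) (2, 1)
```

Because Reaper scopes the bus routing per track, the same (bus, channel) pairs can be reused by every instrument's MIDI track without playback conflicts.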

Another benefit is that all of the MIDI composition can happen in one track folder, with subfolders for sections and instrument tracks, while all of the audio rendering happens in another track folder, with subfolders for sections, instruments, and articulation tracks. So compose in one area, mix in another, and just hide whatever you aren't working on at the moment.

[edit] If you need to layer, just duplicate the single MIDI track: all the routing comes with it, and you have all articulations for that instrument available to layer with.
 