Vienna Symphonic Library Forum

  • Seeking advice/opinions on MIDI sequencing

    Hey all,

    A question for anyone with an answer:

    When sequencing orchestral-type music using VSL (or any other VSTs, for that matter), do you stay on the grid and let the computer "conduct" the orchestra (using tempo mapping to create rhythmic fluidity), or do you un-snap from the grid, ignore quantization completely, and play the lines in yourself, using your own sense of rhythm to "conduct" the orchestra?

    I've read pros and cons for both approaches, but only in various articles.  I will, of course, try both approaches myself and experiment, but many on these forums create great-sounding music and I'd love to hear individual opinions/experiences from those more fluent than I in MIDIstration.


  • I'm not sure there is one right way to answer this.  For orchestral mockups, a lot of samples have inconsistent amounts of latency; this is in order to capture the full sample attack, which for some instruments, especially strings, is kind of slow.  So sometimes people will move their notes ahead of the grid to compensate for that.  But that is a laborious and painstaking process.  Other people may use scripting in the DAW to adjust for it.

    Humanization is another reason perhaps to avoid the grid, but there are other more efficient means to provide timing humanization. 

    I myself prefer to have my notes on the grid and find other ways to dynamically add humanization, as well as latency adjustment for inconsistent attack transients.  It makes my MIDI editing about 1000 times easier to deal with.

    Opinions may differ.
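
    A minimal sketch of that kind of batch adjustment, assuming simple Python note records (the names and offsets are illustrative, not any particular DAW's API): it pulls quantized notes slightly ahead of the grid and adds bounded random jitter for humanization.

    import random

    def humanize(notes, lead_ticks=0, jitter_ticks=5, seed=42):
        """Pull quantized notes early by a fixed lead and add bounded
        random jitter, instead of hand-editing every note.
        notes: list of dicts like {"tick": int, "pitch": int, "vel": int}
        """
        rng = random.Random(seed)  # seeded so the "humanization" is reproducible
        out = []
        for n in notes:
            tick = n["tick"] - lead_ticks + rng.randint(-jitter_ticks, jitter_ticks)
            out.append({**n, "tick": max(0, tick)})
        return out

    # Quantized 8ths at 480 PPQ, pulled 12 ticks early with +/-5 ticks of jitter:
    quantized = [{"tick": i * 240, "pitch": 60 + i, "vel": 90} for i in range(4)]
    print(humanize(quantized, lead_ticks=12))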


  • @Seventh Sam said:

    Hey all, A question for anyone with an answer: When sequencing orchestral-type music using VSL (or any other VSTs, for that matter), do you stay on the grid and let the computer "conduct" the orchestra (using tempo mapping to create rhythmic fluidity), or do you un-snap from the grid, ignore quantization completely, and play the lines in yourself, using your own sense of rhythm to "conduct" the orchestra?
    Yes. All of the above. I play most of my compositions on my keyboard, then end up going back and quantizing most of what I played. Back in the day, you used to have to play sloppily, then automate the tempo, then slightly fiddle with the tuning, using a wide variety of articulations to try and achieve a natural-sounding performance. Nowadays I still fluctuate the tempo, but thanks to the miracles of VSL humanization and repetition performances, those other processes can be automated and randomized. If you have VI Pro, you can use the humanization feature in conjunction with the bigger full-size libraries to automate much of what we did manually back in the day. If you don't have VI Pro, then you'll have to do everything manually, which can be quite laborious and painstaking.

  • Personally, I'm a big proponent of performing off the grid.  A lot of orchestral music is about making a gesture or crafting a phrase, and so being locked into a grid can make things too rhythmically precise.



    The exception is if there is a drum groove and I need to write something fast.


  • If you use Logic, this can be the best of both worlds:

    Alternating Live MIDI and Score Notation in One Track


  • @stephen limbaugh said:

    Personally, I'm a big proponent of performing off the grid.  A lot of orchestral music is about making a gesture or crafting a phrase, and so being locked into a grid can make things too rhythmically precise.



    The exception is if there is a drum groove and I need to write something fast.

    That video is actually what prompted this question!  Thanks again for making it.

    Do you ever run into situations where you want to score, say, an ultra fast and complex run that is too hard to simply play in on a keyboard and, since everything is off the grid, you end up spending way too much time fussing with individual notes to get it to sit right with the unquantized rhythm?  I would be concerned about that kind of thing with a pure off-the-grid approach (I'm not extremely proficient at piano nor is my MIDI controller very, eh, ergonomic).


  • @Dewdman42 said:

    I'm not sure there is one right way to answer this.  For orchestral mockups, a lot of samples have inconsistent amounts of latency; this is in order to capture the full sample attack, which for some instruments, especially strings, is kind of slow.  So sometimes people will move their notes ahead of the grid to compensate for that.  But that is a laborious and painstaking process.  Other people may use scripting in the DAW to adjust for it.

    Humanization is another reason perhaps to avoid the grid, but there are other more efficient means to provide timing humanization. 

    I myself prefer to have my notes on the grid and find other ways to dynamically add humanization, as well as latency adjustment for inconsistent attack transients.  It makes my MIDI editing about 1000 times easier to deal with.

    Opinions may differ.

    When you mention latency discrepancies, are you referring to actual errors in the sample start times, or to the musical differences in attack and timing inherent in certain articulations? (e.g., a fast, quick spiccato vs. a languishing crescendo)


  • @Seventh Sam said:

    Hey all, A question for anyone with an answer: When sequencing orchestral-type music using VSL (or any other VSTs, for that matter), do you stay on the grid and let the computer "conduct" the orchestra (using tempo mapping to create rhythmic fluidity), or do you un-snap from the grid, ignore quantization completely, and play the lines in yourself, using your own sense of rhythm to "conduct" the orchestra?
    Yes. All of the above. I play most of my compositions on my keyboard, then end up going back and quantizing most of what I played. Back in the day, you used to have to play sloppily, then automate the tempo, then slightly fiddle with the tuning, using a wide variety of articulations to try and achieve a natural-sounding performance. Nowadays I still fluctuate the tempo, but thanks to the miracles of VSL humanization and repetition performances, those other processes can be automated and randomized. If you have VI Pro, you can use the humanization feature in conjunction with the bigger full-size libraries to automate much of what we did manually back in the day. If you don't have VI Pro, then you'll have to do everything manually, which can be quite laborious and painstaking.

    I do have VI Pro, so no worries there.  You mention that you don't always quantize.  When you do, is it to correct errors in your playing or to keep everything in a strict rhythm?  When you play, are you consciously emulating the fluid rhythm inherent in orchestral conducting or are you playing with the idea that you're going to quantize to a strict grid at the end of the day?


  • @Seventh Sam said:

    When you mention latency discrepancies, are you referring to actual errors in the sample start times, or to the musical differences in attack and timing inherent in certain articulations? (e.g., a fast, quick spiccato vs. a languishing crescendo)

    Its not "errors".  Some instruments, such as strings, have a lot of sound that happens ahead of the grid.  When real players play, they start sliding their bow on the string ahead of the grid in such a way that the transient peak will sound like its on the grid.  more or less.  There is a lot of timbral goodness contained in that full attack of instrumnets, but the reality is that if you trigger that sound via midi exactly on the grid, then the perception will be that the main transient peak of the attack is late, becuase a real player would be doing certain things intuitively with their instrument to start making sound 10's of milliseconds early, or perhaps even 100's; such that the transient sounds on the grid.

    It's also not consistent from articulation to articulation.  Totally depends.  It depends a lot on how the sample library was created, too.  Some libraries might be more consistent than others in terms of trying to make something that will be playable in a consistent way.  I have never measured my VSL libraries yet to see what the situation is with them.  Other libraries I have, such as Kirk Hunter, have quite a lot of this attack latency, and the tricky part is that it's not consistent from articulation to articulation.  Cinematic Studio Strings is particularly challenging in this regard.  When I asked Kirk Hunter about it, he said that basically, if we want the full sonic goodness of the string attack, then the latency needs to be there.

    As Stephen suggested, one way is to learn to play via MIDI ahead of the grid, so that the attack transient peaks in the right place on the grid.  This can be done, but I find it difficult.  It's not the same as a string player doing it intuitively with their bow.  You have to strike the key early, by just the right amount.  It can be done, and then you MUST AVOID quantizing your performance.  It's particularly difficult if you are playing an instrument with different articulations coming in and out that happen to have differing amounts of attack latency, which can definitely be the case.  Trying to do that intuitively... well, I find it difficult.  But it can be done.

    If you want to be able to quantize your notes, for whatever reason, then you'll have to find another way to make them play ahead of the grid.  There is currently no general solution out there for this.  There is one freebie LogicPro script, custom-designed for Cinematic Studio Strings, that does this.  You can set a negative track delay to move all the notes early, provided all the articulations on that track have the same amount of attack latency.  If the attack latency differs per articulation, then you can't really do it with track delay alone; you'll need an additional solution like the CSS script to vary how early to play ahead of the grid, per articulation.

    I like having my notes on the grid because it's easier to read, easier to copy and paste around, etc.  But when you start looking closely at the piano rolls of the best-sounding mockups, the notes are ahead of the grid.  They either played it in intuitively, or they painstakingly moved all the notes just the right amount ahead of the grid using transformers and other tricks.

    You can use scripters and such to try to automate this so that you can simply quantize the notes and then let the scripter play them early, but admittedly, there are no easy solutions for this at this time.  It's an area where our tools need improvement.
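
    To make that concrete, here is a rough sketch (not the CSS script itself) of what such a per-articulation adjustment has to do; the latency numbers are made up, and every library differs:

    # Per-articulation attack latency, in milliseconds, measured by ear
    # (illustrative values only):
    ATTACK_MS = {"sustain": 90, "legato": 60, "spiccato": 15}

    def ms_to_ticks(ms, bpm=120, ppq=480):
        # ticks per millisecond at the current tempo
        return int(round(ms * (bpm * ppq) / 60000.0))

    def play_ahead(notes, bpm=120, ppq=480):
        """Keep the editable notes quantized on the grid, but compute the
        actual trigger tick per note from its articulation's attack latency."""
        for n in notes:
            lead = ms_to_ticks(ATTACK_MS[n["art"]], bpm, ppq)
            n["trigger_tick"] = max(0, n["tick"] - lead)
        return notes

    notes = [{"tick": 480, "art": "sustain", "pitch": 60},
             {"tick": 960, "art": "spiccato", "pitch": 62}]
    print(play_ahead(notes))   # sustain fires 86 ticks early, spiccato 14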


  • PS: Like I said, I haven't tried to measure my VSL instruments to see how much attack latency there is, or whether it's different per articulation, etc.  But a product suggestion for all sample developers, including VSL, would be to have the instrument report a consistent amount of overall latency to the host and then automatically adjust the attack time of each articulation relative to that, so that they all sound exactly the same amount late... and the Plugin Delay Compensation of the host can then bring them all back early again... and voilà, they'd all sound as desired: on the grid.  To my knowledge, no sample library does this, but I would be supremely impressed to find out if someone does.
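
    In code terms, the suggestion boils down to something like this hypothetical sketch (no current library exposes these numbers; the values are made up):

    ATTACK_MS = {"sustain": 90, "legato": 60, "spiccato": 15}   # illustrative

    # Pad every articulation's start so they are all equally late...
    reported_latency_ms = max(ATTACK_MS.values())
    pad_ms = {art: reported_latency_ms - ms for art, ms in ATTACK_MS.items()}

    # ...then report one number to the host, whose Plugin Delay Compensation
    # pulls the whole plugin output early by exactly that amount:
    print(f"report {reported_latency_ms} ms to host; pads = {pad_ms}")
    # -> every transient lands on the grid, regardless of articulation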


  • Very good food for thought, Dewdman.  Thanks so much for taking the time to explain all of that to me!  I think you just saved me a lot of frustration and trial-and-error in the coming years.  I'll keep all that in mind as I experiment and practice MIDI sequencing.

    Cheers!

    - Sam


  • last edited
    last edited

    @Seventh Sam said:

    Do you ever run into situations where you want to score, say, an ultra fast and complex run that is too hard to simply play in on a keyboard and, since everything is off the grid, you end up spending way too much time fussing with individual notes to get it to sit right with the unquantized rhythm?  I would be concerned about that kind of thing with a pure off-the-grid approach (I'm not extremely proficient at piano nor is my MIDI controller very, eh, ergonomic).

    So, this kind of gets into personal philosophy about programming.  Personally, I view the computer and MIDI controller as an instrument to be technically mastered.  The ultimate proficiency is someone like Daniel Barenboim, a guy who can conduct/rehearse a top orchestra and also sit down and play the entire Beethoven piano sonata cycle.  Except in the case of "programming", your "conducting" is laying out the session and your "performing" is your fingers on the keyboard and mod wheel.

    That's the goal, but obviously not everyone is going to get to that Daniel Barenboim level of computer/MIDI controller mastery.

    A few tips to tackle technically difficult passages:

    1. The most important thing is the RHYTHM of the passage.  If you can get the feel of the gesture but miss most of the notes, just go in and arrow the notes up or down to the correct pitches (see the sketch after this list).
    2. Think about which notes of a gesture should be emphasized.  If it's a scale, generally it's the last note, especially if it ends on a strong beat.  Adjust the velocities or the velocity cross-fade so there's a little crescendo in there.
    3. Use two hands and go back and ride the velocity cross fade in a second pass.
    4. If you are using VI Pro, search through some of the pre-made scales in the APP Sequencer.  Oftentimes you can adjust the notes in the sequencer and it will still sound really convincing.
    5. If there is a percussive hit at the end of the run from other instruments, edit the midi so that everyone is playing really close together!
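
    To make tip 1 concrete, the repitching step can be reduced to something like this sketch (the data is illustrative): keep the performed timing and velocity, swap in the intended pitches.

    def repitch(performed, target_pitches):
        """Keep the human timing/velocity of a recorded gesture, but
        replace the (possibly missed) pitches with the intended ones."""
        assert len(performed) == len(target_pitches)
        return [{**note, "pitch": p} for note, p in zip(performed, target_pitches)]

    # A sloppy-pitch but good-feel run, corrected to the intended line:
    performed = [{"tick": 3,   "pitch": 64, "vel": 70},
                 {"tick": 121, "pitch": 65, "vel": 74},   # missed note
                 {"tick": 244, "pitch": 67, "vel": 82},
                 {"tick": 359, "pitch": 69, "vel": 95}]
    print(repitch(performed, [64, 66, 67, 69]))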

    Check this video out: 


    Notice how he gets her to not play so "correct."  On the page, those notes read as equal durations... but he wants extra space here, hold a note longer there, more crescendo, etc., etc.  I think this mentality should be applied to programming.


  • @Seventh Sam said:

    I do have VI Pro, so no worries there.  You mention that you don't always quantize.  When you do, is it to correct errors in your playing or to keep everything in a strict rhythm?  When you play, are you consciously emulating the fluid rhythm inherent in orchestral conducting or are you playing with the idea that you're going to quantize to a strict grid at the end of the day?

    Depends on the size of the ensemble.  If it's, say, a string quartet or quintet, then I won't quantize anything.  Also, I'm a pianist by training, so anything I do on the piano is never quantized.  But if it's a large ensemble piece, like a 100-piece orchestra, then yes, I quantize, because even slight rhythm inconsistencies will be amplified and the human ear will detect them.  It's the fluctuations in tempo, like slowing down slightly at the end of a melody line, which players do naturally, that make the piece sound more natural.
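
    That kind of end-of-phrase relaxation can also be written out as a tempo map.  A sketch using the mido Python library (the phrase length and tempo values are illustrative):

    import mido

    PPQ = 480
    track = mido.MidiTrack()

    # Ease from 120 down to 108 BPM over the last bar of a phrase,
    # one small step per quarter note -- a written-out ritardando.
    for step, bpm in enumerate([120, 117, 113, 108]):
        track.append(mido.MetaMessage(
            "set_tempo",
            tempo=mido.bpm2tempo(bpm),
            time=0 if step == 0 else PPQ))   # delta time: one beat apart

    mid = mido.MidiFile(ticks_per_beat=PPQ)
    mid.tracks.append(track)
    mid.save("ritardando.mid")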


  • @Another User said:

    1. The most important thing is the RHYTHM of the passage.  If you can get the feel of the gesture but miss most of the notes, just go in and arrow the notes up or down to the correct pitches.
    2. Think about which notes of a gesture should be emphasized.  If it's a scale, generally it's the last note, especially if it ends on a strong beat.  Adjust the velocities or the velocity cross-fade so there's a little crescendo in there.
    3. Use two hands and go back and ride the velocity cross fade in a second pass.
    4. If you are using VI Pro, search through some of the pre-made scales in the APP Sequencer.  Oftentimes you can adjust the notes in the sequencer and it will still sound really convincing.
    5. If there is a percussive hit at the end of the run from other instruments, edit the midi so that everyone is playing really close together!

    Solid gold.  Thank you for taking the time to help me out.  #1 is especially useful; it makes a lot of sense to capture the human rhythmic feel live above all else as other factors (velocity, expression, pitch, etc.) can be easily altered after the fact.

    Another thought occurred to me: it may be useful, in cases where semi-strict to strict rhythmic accuracy is needed during difficult passages, to briefly turn the grid on, not to snap or quantize to, but to use as a visual guide to subtly adjust potentially sloppy live playing.  Once adjusted, the grid could be turned off and the entire phrase could then be moved around freely and fit to the rest of the music by ear.  Something to experiment with, certainly...

    One more question for you, if I may.  You mention velocity crossfading.  I know there are differing opinions on this, but I'd like to hear yours: do you recommend tying Expression and Velocity X-Fade together on one controller, doing one OR the other, or just using Velocity X-Fade to modulate dynamics?  I know that the timbral shift that occurs in brass and winds during crescendos and whatnot is not simply "raising the volume", but there are noticeable jumps between velocity layers for certain instruments, especially the solo VI libs.  Any recommendations here?

     

    Thanks again!

    - Sam


  • @jasensmith said:

    Depends on the size of the ensemble.  If it's, say, a string quartet or quintet, then I won't quantize anything.  Also, I'm a pianist by training, so anything I do on the piano is never quantized.  But if it's a large ensemble piece, like a 100-piece orchestra, then yes, I quantize, because even slight rhythm inconsistencies will be amplified and the human ear will detect them.  It's the fluctuations in tempo, like slowing down slightly at the end of a melody line, which players do naturally, that make the piece sound more natural.

    Good to know, thank you.  Notes taken, knowledge absorbed.

    - Sam


  • @Seventh Sam said:

    One more question for you, if I may.  You mention velocity crossfading.  I know there are differing opinions on this, but I'd like to hear yours: do you recommend tying Expression and Velocity X-Fade together on one controller, doing one OR the other, or just using Velocity X-Fade to modulate dynamics?  I know that the timbral shift that occurs in brass and winds during crescendos and whatnot is not simply "raising the volume", but there are noticeable jumps between velocity layers for certain instruments, especially the solo VI libs.  Any recommendations here?

    I never tie any parameters together.  And usually, Expression is used only when I can't get the desired fade with velocity xfade alone.
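
    For context, Velocity X-Fade is ultimately just a CC lane (CC2 by default in VI Pro, though it is configurable), so a one-bar crescendo is a simple ramp of values.  A plain-Python sketch with illustrative resolution and values:

    VELXF_CC = 2      # VI Pro's Velocity X-Fade default controller
    PPQ = 480

    def crescendo(start_val, end_val, bars=1, steps=32):
        """Emit (tick, cc_value) pairs ramping VelXF across the bar(s)."""
        length = bars * 4 * PPQ
        return [(i * length // (steps - 1),
                 round(start_val + (end_val - start_val) * i / (steps - 1)))
                for i in range(steps)]

    for tick, val in crescendo(40, 110)[:4]:   # first few points of the ramp
        print(f"tick {tick}: CC{VELXF_CC} = {val}")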

    I also wanna add another option for dealing with a technically difficult passage, and nailing the rhythm.

    If it is 16th notes, just play the 8th notes, then manually fill in (eyeballing it) the 16th notes.  For example, if you have C-D-E-F as rapid 16th notes, play just the C and E, then write in the D and F.  When playing, it might help to still count the 16th notes in your head but only play on the 8ths... a side effect is that counting like this will help develop your technique and overall rhythm.
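
    A sketch of that fill-in step, assuming the played 8ths carry the feel (the data and the halfway rule are illustrative):

    def fill_sixteenths(played, fill_pitches):
        """played: notes performed on the 8ths (with human timing).
        fill_pitches: the 16ths to write in between, one per played note.
        Each fill lands halfway to the next played note; the last one
        reuses the previous gap (extrapolates)."""
        out = []
        gap = 240   # fallback: one 8th at 480 PPQ
        for i, note in enumerate(played):
            out.append(note)
            if i + 1 < len(played):
                gap = played[i + 1]["tick"] - note["tick"]
            out.append({"tick": note["tick"] + gap // 2,
                        "pitch": fill_pitches[i],
                        "vel": note["vel"] - 6})   # tucked in, slightly softer
        return sorted(out, key=lambda n: n["tick"])

    # Play C and E on the 8ths, write in D and F as the 16ths:
    played = [{"tick": 2,   "pitch": 60, "vel": 88},
              {"tick": 243, "pitch": 64, "vel": 92}]
    print(fill_sixteenths(played, [62, 65]))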


  • Stephen,

    Thanks again for taking the time!  All your advice is extremely helpful and I will definitely take it into account as I continue to experiment and practice.

    Sincerely,

    - Sam


  • @Dewdman42 said:

    PS: Like I said, I haven't tried to measure my VSL instruments to see how much attack latency there is, or whether it's different per articulation, etc.  But a product suggestion for all sample developers, including VSL, would be to have the instrument report a consistent amount of overall latency to the host and then automatically adjust the attack time of each articulation relative to that, so that they all sound exactly the same amount late... and the Plugin Delay Compensation of the host can then bring them all back early again... and voilà, they'd all sound as desired: on the grid.  To my knowledge, no sample library does this, but I would be supremely impressed to find out if someone does.

    Audio Imperia Nucleus does this (though not via PDC). It knows the "sync point" of every sample and ensures that all samples have the same offset to the sync point, so you can apply a single negative MIDI timing adjustment to the (multi-articulation) track and get everything tight. For live playing, you can turn a knob to temporarily reduce the start offset to get less latency (and less realism) and better timing and feel, then turn it back.

    It's a brilliant idea and every sample library should have this.
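
    In effect, the knob trades attack realism for latency.  A tiny sketch of the idea (the names and numbers are illustrative, not Nucleus internals):

    SYNC_OFFSET_MS = 80.0   # time from sample start to its sync point (made up)

    def start_offset(live_amount):
        """live_amount 0.0 = full realism (play the whole attack, more latency);
        1.0 = skip straight to the sync point (tight feel, thinner attack)."""
        skip_ms = SYNC_OFFSET_MS * live_amount       # audio skipped at note-on
        latency_ms = SYNC_OFFSET_MS - skip_ms        # what the player feels
        return skip_ms, latency_ms

    for amt in (0.0, 0.5, 1.0):
        skip, lat = start_offset(amt)
        print(f"live={amt:.1f}: skip {skip:.0f} ms of attack, {lat:.0f} ms latency")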


  • It's interesting how differently people work. I no longer use the grid at all, for anything. On the grid, I find that things sound not locked in, as opposed to super-locked but robotic, because of the different latency of different instruments (and across their ranges and articulations).

    I have tried the humanize feature, and only like it for the slight pitch and expressive effects as opposed to timing. But once I start moving over more to Finale and Dorico as my STARTING POINT for compositions vs. using them to finesse scores and parts extractions of legacy projects that started as MIDI, I might change my mind on that.

    As it stands, I find Digital Performer remarkable in how it handles quantization. So even parts that start as notation get filtered through its quantization functions, with appropriate amounts of sensitivity, emphasis, swing, and randomization. I find this does wonders, though it often takes several passes to find the best parameters for each part to both breathe and lock in with the other parts, the way a live performance would.
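
    For comparison, the core of such a multi-parameter quantize pass looks roughly like this generic sketch (not DP's actual algorithm; the parameter names and values are illustrative):

    import random

    def soft_quantize(ticks, grid=120, strength=0.6, sensitivity=0.5,
                      rand=4, seed=1):
        """Pull notes partway toward the grid instead of snapping.
        strength:    how far toward the grid line to move (0..1)
        sensitivity: only touch notes within this fraction of half a grid step
        rand:        re-humanize with a small random offset afterwards
        """
        rng = random.Random(seed)
        out = []
        for t in ticks:
            nearest = round(t / grid) * grid
            if abs(t - nearest) <= sensitivity * (grid / 2):
                t = t + (nearest - t) * strength
            out.append(int(round(t + rng.uniform(-rand, rand))))
        return out

    # Several passes with different parameters per part is normal:
    print(soft_quantize([118, 247, 371, 489]))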