Vienna Symphonic Library Forum

  • Mixing a cinematic but natural orchestra

    Hi everyone, I have VSL Special Edition and SE Plus - very exciting! I'm making an orchestral template for general use. I want it to sound crisp, clear, and tight, but I also want it to sound natural. Here is an incomplete piece I'm composing:

    http://www.suonlabs.com/music/incomplete/galactic-swashbuckler.mp3

    Unfortunately I don't have a good monitoring setup. So I want to ask the VSL Community: does the placement of each instrument sound natural and convincing? Any comments about stereo panning, stereo depth, stereo spread, early reflections, or wet/dry mix? Overall does it sound "professionally" recorded and mixed? Any other opinions and feedback would be appreciated, too!

    Cheers,
    ~Shawn

  • Wow, man... that sounded REALLY good!

    Just a few things... and mind you, I am no Guy Bacos or Jay Bacal, so my opinion is fairly worthless, and my monitors are crap. Anyhow, the actual placement of instruments sounds pretty darn good. The only thing that "stuck out" to me was that the trumpets playing the lead melody seemed just a tad too obviously left of center. I'm not sure if those are multiple instruments playing those lines; if so, I would say maybe spread them apart from each other just a bit. But if it's just one instrument patch, then I guess it makes sense as is.

    The only other thing to me is the "depth" of the reverb, maybe the early reflections. I myself prefer things that sound like they are in a live hall, rather than a pristinely mic'ed-up recording environment, and overall some of it sounds just a bit too "up front." For example, when the change occurs at around 30 seconds and the instrument volumes get quieter, that sounded much more realistic to me, likely because the softer volumes allowed more of the reverb's tail and depth to come through. Maybe just a bit more of the wet verb mix needs to come through? Speaking of reverb, what program(s) are you using for your reverb, and how are you routing it? Aux sends, directly on the channels? Just curious.

    Honestly, though... very good overall! :) Anyhow, the stuff I compose is always much darker and more morose; your stuff is energetic and triumphant, so maybe I'm not the best person to listen to about it. But very impressive! Keep us updated on any modifications you make. Good job!

    -michael


  • Hey Michael, thanks for the comments!

    If I keep the settings as-is (maybe reduce the brass volume by 1-2 dB), do you think this would be a good template for simulating a studio environment?
    But really, I did want a "concert hall" sound like you said, so I'll try adjusting the wet/dry mix as you suggested.

    About the reverb:

    The setup is a "poor man's true stereo" convolution reverb. It uses two convolution reverb plugins running in parallel, placed on aux sends. The "stage left reverb" takes only the left stereo channel as input and uses an impulse response with the sound source located on stage left. The "stage right reverb" takes only the right stereo channel as input and uses an impulse response with the sound source located on stage right. The physical interpretation of this is that the orchestra is playing through stereo speakers placed on stage =) When I hear the wet and dry signals combined, I think the illusion is almost OK.
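
    For anyone curious how this looks outside a DAW, here is a rough offline sketch in Python (NumPy/SciPy) of the routing described above. The file names and the 0.25 wet level are just placeholders, not my actual project settings:

        # "Poor man's true stereo": two convolvers on parallel aux sends.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        def convolve_stereo(mono_in, stereo_ir):
            # Feed one mono signal to both inputs of a "stereo" convolver:
            # convolve it with the left and right channels of a stereo IR.
            wet_l = fftconvolve(mono_in, stereo_ir[:, 0])
            wet_r = fftconvolve(mono_in, stereo_ir[:, 1])
            return np.stack([wet_l, wet_r], axis=1)

        sr, dry = wavfile.read("orchestra_stem.wav")              # stereo dry mix, shape (N, 2)
        _, ir_left = wavfile.read("civic_hall_stage_left.wav")    # IR with source on stage left
        _, ir_right = wavfile.read("civic_hall_stage_right.wav")  # IR with source on stage right
        dry = dry.astype(np.float64)

        # Aux send 1: only the LEFT dry channel drives the stage-left reverb.
        wet_left_send = convolve_stereo(dry[:, 0], ir_left.astype(np.float64))
        # Aux send 2: only the RIGHT dry channel drives the stage-right reverb.
        wet_right_send = convolve_stereo(dry[:, 1], ir_right.astype(np.float64))

        # Sum the two wet returns and mix them back with the dry signal.
        n = max(len(wet_left_send), len(wet_right_send))
        out = np.zeros((n, 2))
        out[:len(dry)] += dry                              # dry path
        out[:len(wet_left_send)] += 0.25 * wet_left_send   # wet level: placeholder value
        out[:len(wet_right_send)] += 0.25 * wet_right_send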

    With this setup in place, I then tried to adjust panning, stereo separation, and the wet/dry mix. In some cases I even tried to muffle high frequencies to simulate distance - in particular for the horn, whose bell faces backwards.

    The exact software I'm using is FL Studio and the "Fruity Convolver" reverb plugin. I honestly don't know how it compares to other convolution reverbs like Cubase Reverence, Waves IR-1, or Vienna Suite. As far as I know there are only a few ways to implement DSP convolution, and the sampling/interpolation will be good enough on any of these plugins - the impulse response data matters more. For good IR data, I predict that Waves IR-1 and Vienna Suite would be the winners. But for now, FL Studio provided a decent stage-left/stage-right pair of impulse responses recorded in a "civic hall". The original impulse responses were quite boomy, so I applied EQ to reduce low frequencies and a small boost above 2 kHz. I also tried some good impulse responses I found online, but so far I haven't been able to mix those to sound the way I want.
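
    In case it helps anyone, this is roughly the kind of IR clean-up I mean, sketched in Python/SciPy. The corner frequencies and the gain are ballpark guesses rather than my exact settings:

        # Tame a boomy impulse response: high-pass the lows, then add a
        # gentle shelf-like boost above 2 kHz by mixing in a high-passed copy.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, sosfiltfilt

        sr, ir = wavfile.read("civic_hall_stage_left.wav")   # placeholder file name
        ir = ir.astype(np.float64)

        # 1) Cut the boomy low end: 2nd-order high-pass around 100 Hz.
        hp_lows = butter(2, 100, btype="highpass", fs=sr, output="sos")
        ir_eq = sosfiltfilt(hp_lows, ir, axis=0)

        # 2) Small boost above 2 kHz (roughly +3 dB): add a scaled high-passed copy.
        hp_highs = butter(1, 2000, btype="highpass", fs=sr, output="sos")
        ir_eq = ir_eq + 0.4 * sosfiltfilt(hp_highs, ir_eq, axis=0)

        wavfile.write("civic_hall_stage_left_eq.wav", sr, ir_eq.astype(np.float32))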

    My biggest concern was early reflections - if I place an instrument somewhere in the middle of the stage, you hear early reflections from both impulse responses, neither of which represents a sound source at the instrument's actual location. I was worried it would not sound like a realistic stage without physically correct early reflections. The real solution is to get Vienna MIR eventually =)


  • Hi Suon, I like your music and I'm very interested in your method; to be honest I didn't understand it, or why you need double processing for left and right. I use Altiverb, but with some problems... I use 4 instances of Altiverb for different depths (strings, winds, brass) with the Todd AO preset. I'd like you to hear my recording and tell me what you think; you can listen here: http://www.myspace.com/antoncct Antonio - Italy

  • I'll explain the "true stereo" convolution reverb from the beginning. I know it's a large post, but maybe someone will find this information useful. Any experts, please verify or correct me!

    An "impulse response" not only captures the acoustics of an environment, but it captures the properties of the sound source and microphones. This means that the *location* (and other properties) of the sound source and microphones is hard-coded information in the impulse response. If the impulse response was recorded with sound source in location "A", and microphone in location "X", then we can simulate any sound to realistically sound like it is located at "A", with a microphone placed at "X", using a convolution reverb plugin.

    If we want to make the sound appear to come from somewhere else (with realistic sound), then we need a different impulse response.
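
    In code terms (a minimal Python/SciPy sketch; the file names are made up), that simulation is just one convolution per output channel:

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        sr, dry = wavfile.read("trumpet_dry.wav")           # mono, recorded dry
        _, ir = wavfile.read("hall_source_A_mics_X.wav")    # stereo IR: source at "A", mics at "X"
        dry = dry.astype(np.float64)
        ir = ir.astype(np.float64)

        # The result sounds like the trumpet playing at position "A",
        # picked up by the microphones at position "X".
        wet = np.stack([fftconvolve(dry, ir[:, 0]),
                        fftconvolve(dry, ir[:, 1])], axis=1)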

    To mix an orchestra realistically so that each instrument sounds like it's coming from the proper place on stage, we cannot use one "fake stereo" convolution reverb. If we tried that, panning the instrument would sound weird, and it would not sound like we are realistically placing instruments in different locations on stage. The reason is that the left and right channels are processed independently... if we label the "L input" as the left stereo channel and the "R input" as the right stereo channel, a fake stereo reverb will only process two things: "L input --> L output" and "R input --> R output".

    A true stereo reverb has more interconnections: "L input --> L output", "L input --> R output", "R input --> L output", and "R input --> R output"... that's 4 connections instead of 2. The benefit of this is that we can approximate how the reverb will change when panning the instrument.

    For convolution reverb, "true stereo" can be achieved by using two convolution reverb plugins. The first plugin receives only "L input". Most likely the plugin expects a stereo input, so you would actually route "L input" to both L and R inputs of the plugin. This first plugin will have an impulse response that realistically simulates what it sounds like when a sound comes from the left side of stage. The second plugin receives only "R input", and will use an impulse response that simulates what it sounds like when a sound comes from the right side of stage.
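
    Written out as a sketch (Python/SciPy, assuming both stereo IRs have the same length), the difference between the two topologies looks like this:

        from scipy.signal import fftconvolve

        def fake_stereo(x_l, x_r, ir_l, ir_r):
            # Two paths only: L -> L and R -> R. Panning the dry signal just
            # re-weights the same two reverbs; the room never "moves".
            return fftconvolve(x_l, ir_l), fftconvolve(x_r, ir_r)

        def true_stereo(x_l, x_r, ir_stage_left, ir_stage_right):
            # Four paths: each input feeds BOTH outputs. The stage-left stereo IR
            # supplies L -> L and L -> R; the stage-right IR supplies R -> L and R -> R.
            out_l = fftconvolve(x_l, ir_stage_left[:, 0]) + fftconvolve(x_r, ir_stage_right[:, 0])
            out_r = fftconvolve(x_l, ir_stage_left[:, 1]) + fftconvolve(x_r, ir_stage_right[:, 1])
            return out_l, out_r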

    I hope this explanation makes sense...

    Cheers
    ~Shawn

  • Anton, I really liked the music! Here are my opinions... but please keep in mind I am not an expert =)

    - The orchestration and harmonies are really nice!

    - There seem to be some problem frequencies in some places, particularly notes E, G, and B. I'm guessing around 660 Hz, 990 Hz, or 1320 Hz?? I think the problem is in the reverb. It might be my headphones, though.

    - The individual solo woodwind instruments seem too large. In particular, the low registers of the flute seemed too powerful to be real. Did you want that effect, or did you want something more realistic? For my piece, I actually reduced the stereo separation of each woodwind instrument to nearly mono. The reverb was still stereo, and I still panned the instruments appropriately. Perhaps you can try something similar (there is a rough sketch of this after the list).

    - The Horn in "Cinematic" could probably use some EQ to reduce high frequencies. This simulates the distance and the bell facing backward. For example, perhaps a low-pass filter with a very gentle slope, starting around 5 kHz (also sketched after the list).

    - I liked how you placed the strings... Maybe the violas and cellos could be 2-3 dB louder in most places.

    - The brass seems well balanced, too. I like the timbres you get, the way you orchestrated some of the brass chords!
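
    Here are rough sketches (Python/SciPy) of the two tweaks from the woodwind and horn points above; the width value and the 5 kHz corner are just starting points, not measured settings:

        import numpy as np
        from scipy.signal import butter, sosfilt

        def narrow_stereo(x, width=0.2):
            # Mid/side width control: width=1 leaves the signal as-is,
            # width=0 collapses it to mono. x has shape (N, 2).
            mid = 0.5 * (x[:, 0] + x[:, 1])
            side = 0.5 * (x[:, 0] - x[:, 1])
            return np.stack([mid + width * side, mid - width * side], axis=1)

        def distant_horn(x, sr, cutoff=5000.0):
            # Very gentle (1st-order, ~6 dB/oct) low-pass to suggest distance
            # and a bell that points away from the listener.
            sos = butter(1, cutoff, btype="lowpass", fs=sr, output="sos")
            return sosfilt(sos, x, axis=0)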

    Cheers, ~Shawn

  • Thanks Suon. I'm not an expert, but I think it's not possible to use Altiverb as a left-only or right-only reverb. All the impulse responses are recorded in stereo with 2 microphones. Furthermore, I don't know how to extract the left and right components of every channel in my sequencer. I want to use multiple buses for different depths; I don't think muffling high frequencies is enough to simulate distance, so I need a reverb for every bus, with two mono channels per bus - I have no idea if such a thing exists or how to get it. Maybe you have more information and can suggest a product. (Anyway... it's very useful to exchange methods and ideas on this forum, not just about technical matters, but about composition and orchestration too.) Antonio - Italy

  • Suon, I just read that Altiverb supports true-stereo impulses. Is this the same as your double-mono method? Can you tell me where I can find some true-stereo impulse responses? Thanks. Antonio

  • Hi Anton,

    I'm sorry I don't know how to help with the routing of L and R channels... maybe someone else can help.

    The "true stereo" in Altiverb and Vienna Suite are likely to be much better than my version. There are probably details and features my method does not have. Equally importantly, those softwares will provide high-quality impulse responses intended for our purpose. I think true-stereo impulse responses will either be a pair of .wav files, or they will be in some special format specific to the software. So probably you can just choose from the impulse responses that Altiverb already provides =)

    I think your reverb setup is already pretty good (except for the woodwinds, which I gave suggestions to fix in an earlier post). It seems like a great idea to use different reverbs to get different depths of sound... I assume you are setting each reverb to represent the same concert hall; that would be important. Don't forget you can also tweak the EQ of the wet reverb signal before mixing it back with the dry signal; you can change the wet/dry mix; and in the dry mix, you can change the stereo spread, too. All these tweaks combined can be used to perfect your reverb - even though it's already good as it is =)

    ~Shawn

  • Hi Suon, I appreciate your suggestion about the winds. I experimented a bit and arrived at the same conclusion: it is better to use the winds in mono, or in a narrow stereo position. At the moment I don't know if the Altiverb library is in true stereo or not; maybe it is, but it would need the files to be renamed, which is a little difficult for me... Anyway the sound is not bad, just a little boomy, so I will try to equalize it to get a more natural sound. Altiverb also has a stage-positioning function that allows you to change the position of the instruments a bit, but if the impulse is not true stereo, I suppose this is a distortion of the original impulse. To avoid problems with spatial positioning, I don't use the dry signal at all, only the wet signal, 100% processed inside Altiverb.

    Of course I use the same room, but Altiverb has a problem with its library: all the rooms except one, the Todd AO, were made by moving the mic and not the speaker!! This is very strange, because in reality the listener's position is fixed, and it is the instruments on the stage that are in different positions. I also downloaded the Bricasti M7 impulse responses, which are free, but they don't have multi-position speakers or mics, so I cannot use the same room with different depths like Altiverb can. Anyway, I liked your piece very much, and the way you get depth with just one impulse response. When I have time I will try to do the same and compare the result with Altiverb's multi-position approach. Antonio

  • @Another User said:

    I also downloaded the Bricasti M7 impulse responses, which are free, but they don't have multi-position speakers or mics, so I cannot use the same room with different depths like Altiverb can. [...]

    Algorithmic reverbs are hardly ever really multi-positional.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Antoncct said:

    [...] Of course I use the same room, but Altiverb has a problem with its library: all the rooms except one, the Todd AO, were made by moving the mic and not the speaker!! This is very strange, because in reality the listener's position is fixed, and it is the instruments on the stage that are in different positions.

    This is one of the misconceptions that led us to the invention of MIR 😉

    Kind regards,

  • I will never buy MIR, not because of the price, but because it needs a "monster computer" to run. I hope you can make a lighter version of MIR that can run on a "normal" computer, like my i7 laptop with 4 or 8 GB of RAM. ANTONIO

  • Today's "monster computers" are the average machines of tomorrow. :-) But you're right: 4GB RAM will take you nowhere when it comes to more complex applications that make full use of modern 64bit operating systems.

    ... a "more light version" than the already stripped-down Vienna MIR SE is not in the pipeline, sorry to say so.

    Kind regards,


    /Dietz - Vienna Symphonic Library