Vienna Symphonic Library Forum

  • Reverb Advice?

    Hi everyone,

    Sorry to start a new thread, but my questions are reverb-specific now.

     

    Can MIR or any reverb be made to do the following? How?

    1 - The room needs to respond. If a bone gets loud, the space responds more, with more vibrance. This is an acoustic fact of a real space. How can I get reverb to mimic this? Is it possible?

    2 - Call me crazy, even stupid... but I feel like there is a fundamental problem with the concept of having a wet/dry mix for reverb. This is how audio and effects are processed, I know. But it feels like a cheap way to mimic the distance of an instrument. It places a dry recording on top of a room, or on the side of it. But I feel like the instrument is never truly "in" that sonic space. It doesn't have the "whole" sound you get from a live recording. Does anyone feel the same way and have any advice on how to achieve better results? Is there maybe a different scientific way to look at it other than how it's being done now?

     

    Thanks,

     

    -Sean


    I think that convolution reverb is as close as we currently get. And the MIR concept takes it about as far as anyone has to this point. With convolution you're basically filtering the sound of the instrument with the resonance of the space, as modeled by the impulse response of that space. If you hit it harder with a louder sound, that sound will trigger a more resonant response. When convolution reverberation came along a number of years back it was based on a single impulse response sounded from one position in the space and recorded with a single microphone. While getting closer to the sound of a 'real' acoustic environment, it lacked the richness of sound as it occurs in a real space. Depending on where the sound is made in that space, and where the listener is positioned in that same space, the mix of the direct sound of the source and the resulting reverberation is different. That's one reason why the cheap seats in a concert hall are cheap! It's not just a function of how far from the stage you sit. The acoustics of the hall are different from every conceivable position.

    So MIR, given that it's working with thousands of unique positions in the spaces it models, is a lot closer to the real situation of 'being there'. You get to actually place an instrument in a specific position in the modeled space, and the right cues are generated for our ears to 'place' the sound there. In a stereo context alone this is a huge advantage over artificial reverbs that basically do what you're describing... wrapping the sound of an instrument in a generic resonance that doesn't change in response to your left-right panning. With multi-impulse convolution reverberation, such as MIR, you not only get much more accurate left-right definition in the acoustic representation of the space, but also depth cues based on how far away the source sound is from the microphone. You can fake some of this with a non-convolving reverberation by using EQ to mute the high frequencies of more distant sound sources but, to my ears, that doesn't come close to the rich results of the multi-impulse approach.
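
    For anyone who wants to experiment, here is a minimal offline sketch of the idea just described: the dry signal is filtered by the room's impulse response, and the result is mixed back with the direct sound. The gain values and the use of NumPy/SciPy are illustrative assumptions, not anything taken from MIR.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def convolution_reverb(dry, ir, dry_gain=1.0, wet_gain=0.5):
        """Filter the dry signal with the room's impulse response (IR),
        then mix the reverberated ("wet") result with the direct sound."""
        wet = fftconvolve(dry, ir)           # len(dry) + len(ir) - 1 samples
        out = wet_gain * wet
        out[:len(dry)] += dry_gain * dry     # the direct sound arrives first
        return out / max(1.0, np.max(np.abs(out)))  # guard against clipping
    ```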


  • If I'm not mistaken Acustica's Nebula platform has the ability to do just that. It can capture the different responses on different input levels. Regrettably there is just one attempt I know of to capture a real room: RoomHunter's 'Theater of life' (which is now offered as a freebie). The concept is somewhat similar to MIR. They captured a room through various mics at multiple distances. So this could really have been what you are looking for. Too bad they chose a heavily coloured room to sample. I guess they were coming from a pop/rock background where the coloration may be wanted. I found it to be unsuitable for orchestral music though.

    I can relate to what you are saying in your second paragraph. Can you give an audio example where you find the reverb to be unconvincing? It's difficult to theorize about what could be improved without a concrete example. Maybe somebody skilled with MIR could give you some tips then.

    As an aside, I use Independence Origami to place my instruments on stage. It has a visual approach as well, though nowhere near as refined as the one MIR presents. But you can load your own IRs, and most of all I find the results to be very satisfying. But of course the IR does not change no matter what volume you throw at it.


    It's hard to imagine how a reverberation, added after the fact, would work in any way other than a dry-wet scenario. That's simply what you're doing... adding reverberation (wet sound) to a, preferably, un-reverberated source (dry sound).

    What's being controlled in that case are the cues for the balance between the direct sound that reaches the listener's ears first (aka the dry sound) and the early reflections plus reverberation (aka the wet sound) that the resonant characteristics of the space provide. Having control over that dry-wet balance allows the sound designer to vary the distance the virtual instrument appears to be from the listener.
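
    A minimal sketch of the physics behind that cue, assuming the textbook 1/r law for direct sound and a roughly constant diffuse reverb level; the critical-distance value is a hypothetical number for illustration:

    ```python
    def distance_gains(distance_m, critical_distance_m=4.0):
        """Return (dry, wet) mix gains for a source at the given distance.
        Direct sound falls off roughly as 1/r while the diffuse reverb level
        stays nearly constant, so their ratio is the ear's main distance cue;
        at the critical distance the two are equal."""
        dry = critical_distance_m / max(distance_m, 0.1)   # 1/r law
        wet = 1.0                                          # diffuse field
        total = dry + wet
        return dry / total, wet / total

    # distance_gains(2.0) -> mostly dry; distance_gains(16.0) -> mostly wet,
    # which is exactly what riding a dry/wet control does by hand.
    ```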

    Of course, having control over that balance means that one can easily create unbalanced, unrealistic mixes. That's where MIRx and Natural Volume are useful: together, they provide an optimal relationship between multiple sound sources and the reverberation of the modeled space. There's still a dry/wet balance to control the cues the ear uses to detect the distance from the listener to the orchestra.

    But, at the end of the day, it's still a simplified model of the richness and complexity of what goes on in a real-world situation of sound events occurring in real space. There's an infinite number of unique positions for sound event and listener. Acoustic and psychoacoustic research works with the notion of a "just noticeable difference" (JND). There are differences that are so small that we can't perceive them. This suggests that we don't actually need the infinite set of impulse responses to work with, but that there will be some large set of impulse responses that exceeds the JNDs for any given space. Like sampling rates, which only need to reach a certain rate before we can't hear any difference any more, this suggests that we'll get to a point where, using the convolution technique, it will be as good as it can get.

    I've poked around on the net and can't find any current research that goes beyond the multi-impulse convolution model.  I'll bet quantum computers, when they're finally up and running at full bore, will provide a breakthrough though.


    Here's a good paper on convolution reverb that explains in detail the multiple impulse response approach: what it is, how it works, and why it works as well as it does.

    http://web.uvic.ca/~timperry/ELEC499SurroundSoundImpulseResponse/Elec499-SurroundSoundIR.pdf


  • Kenneth,

     

    Thanks. I'm familiar with how MIR works and why multiple IRs are necessary to get as close as possible to an accurate model of a space. The thesis below is what first introduced me to the reasons why, several years ago.

    http://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1708&context=etd

    However, MIR still uses the same method of mixing the dry/wet signal as any other verb plugin. Does it not? (I may have just made Dietz feel a disturbance in the force if I'm wrong lol) MIR does it on a larger scale, but it's still a dry signal mixed in with a wet signal... and the wet signal equals the dry waveform being repeated in such a way that it mimics the amplitude of the IR's waveform.
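
    That description is close to the mark; here is the same idea literally in code. Direct-form convolution builds the wet signal as one delayed copy of the dry waveform per IR sample, scaled by that sample (a sketch of the math, not of how MIR is implemented):

    ```python
    import numpy as np

    def wet_as_delayed_copies(dry, ir):
        """The wet signal, built exactly as described above: the dry waveform
        is repeated once per IR sample, delayed to that sample's position and
        scaled by its amplitude.  Mathematically this is convolution."""
        out = np.zeros(len(dry) + len(ir) - 1)
        for delay, gain in enumerate(ir):
            out[delay:delay + len(dry)] += gain * dry
        return out
    ```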

    Please forgive what may be an inaccurate description. From my understanding, that should be correct (or close to it). My point is this: is there no other scientific way? If not, is there something I'm missing about getting the dry waveform to sound more like it is "in the space"? Are there any tricks I should know about?

     

    -Sean


  • Kenneth,

     

    Also, to be fair...

    I LOVE MIR in some instances. For a trumpet, wow. Really, wow. I did a horn test once which was passable, not great. Dietz made an upright bass jazz piece once that was astonishing. But bones and strings I never got right. I don't have a demo anymore, so I can't test it. But the problem is that I'm relying more and more on samples that I love the sound of, but that aren't flexible enough. I need that to change now more than ever. VSL is truly my favorite library in almost every respect except for the way it sounds "in a space" and how much effort it takes to program (although I may have solutions to that already). So if I can knock out this space issue I think I'll be set for life.

    I just had to say that I really do love MIR. I just never get certain instruments to sound right. I don't know why, and I didn't when I demo'd MIR. Thus the request for help.

     

    -Sean


  • @kenneth.newby said:

    I'll bet quantum computers, when they're finally up and running at full bore, will provide a breakthrough though.

     

    lol, I love it! Probably true. 😊


  • Dominique,

     

    Thanks for the info. I haven't heard enough from Nebula to know yet, but I'll investigate it more. I did find this link in my search, though. Some interesting stuff.



     

    -Sean


  • I had a question for the IR specialists.

     

    I have the feeling that every convolution reverb still has its own signature sound, no matter what IR you feed it with. Is that true? Where does this signature come from?

    Same comment here about MIR: for voices I can't think of anything better. For percussion, wow! But indeed for strings, I can't get it where I want. Could it be that MIR's own signature sound emphasizes some problematic frequencies in specific sounds?

    I noticed that if I use the normal VSL convolution reverb with (I think) the same simplified IRs of the same room as in MIR, for example the Teldex one, I can hear, apart from the much greater complexity and realness of MIR, that the convolution reverb has another, smoother signature.

    If this appears to be so, is there a way to control the signature sound?

    Best regards.

     

    Stephane.


    Agreed, Nebula is interesting technology. I just wish somebody would sample a concert hall or scoring stage that way. If done properly it could bring something new to the table as far as convolution reverb goes. Alas, wishful thinking for now.

    Another alternative I forgot to mention is the UAD Ocean Way Studios reverb. Same concept as MIR. It includes a re-micing option. You need a UAD card to run it, though, which is why I didn't try it out.


  • @scoredfilms said:

    2 - Call me crazy, even stupid... but I feel like there is a fundamental problem with the concept of having a wet/dry mix for reverb. This is how audio and effects are processed...

    Hi

    It is true that our brain creates the sense of room depth from the ratio of direct signal to room reflections (dry/wet, so to say). Unfortunately, not every reverb effect produces reflections which lead to the same feeling as in the real acoustic world.

    If you compare different reverb effects you will see that convolution reverbs often produce nicer depths, but algorithmic reverbs have a nicer, less coloured tail (fade out). So you get the best results by combining both advantages:

    For creating the best depths I would choose "impulse response reverbs" (convolution reverbs). Nevertheless, you still have to find an ideal IR. So try to find an impulse (IR) which simulates a really good depth. How to find such an IR?

    Set the convolution reverb to 100% wet, let an instrument play, and observe which IR gives you the farthest distance. If possible, take only the first 30-100 ms of this IR (the first reflections). Use the volume curve of the reverb for fading out the IR. Add an algorithmic reverb to the chain with a delay of 30-100 ms and without the early reflections (we already got them with the IR of the convolution reverb). Now you are able to choose between several depths - of course with the wet/dry ratio. Enhance wet distances a bit by taking away the high frequencies, and the feeling of "far away" and "distance" is perfect.

    Example with IR for the first ms + algorithmic reverb for the tail (this could be a possible DAW routing); a code sketch of this routing follows.
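
    A minimal offline sketch of that routing, assuming NumPy/SciPy; the truncation length, decay constant, and gains are illustrative guesses, not settings from VSL's Hybrid Reverb or any other product:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def hybrid_reverb(dry, ir, sr=44100, er_ms=80, tail_s=2.0, wet=0.3):
        """Early reflections from a truncated, faded IR; a delayed synthetic
        tail stands in for the algorithmic reverb in the chain."""
        er_len = min(int(sr * er_ms / 1000), len(ir))
        er_ir = ir[:er_len] * np.linspace(1.0, 0.0, er_len)  # fade out the IR
        early = fftconvolve(dry, er_ir)

        # Crude algorithmic tail: exponentially decaying noise, delayed so
        # it starts where the sampled early reflections leave off.
        # exp(-6.9 t / tail_s) gives ~60 dB of decay over tail_s (RT60-style).
        t = np.arange(int(sr * tail_s)) / sr
        noise = np.random.default_rng(0).standard_normal(len(t))
        tail_ir = np.concatenate([np.zeros(er_len), noise * np.exp(-6.9 * t / tail_s)])
        tail = fftconvolve(dry, tail_ir) * 0.05

        out = np.zeros(max(len(early), len(tail)))
        out[:len(dry)] += (1.0 - wet) * dry      # direct sound
        out[:len(early)] += wet * early          # sampled early reflections
        out[:len(tail)] += wet * tail            # algorithmic-style tail
        return out / max(1.0, np.max(np.abs(out)))
    ```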

    Even if this example was not produced with VSL's Hybrid Reverb, that effect offers the same possibility within a single plug-in.

    If you still find the sound too coloured, you can equalize the IR with an EQ (built into the convolution reverbs of VSL).

    All the best

    Beat


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • Beat,

     

    Thanks for the reply, although I'm more concerned with hall response than depth. Please listen to the examples I've shared on Google Drive. They are all from wet sampled instruments. Keep in mind these are different spaces and different samples. But I believe we gain something from this comparison:

    http://goo.gl/1ryfxn

    1 - Close (VSL is clearly more capable of getting an intimate sound, very useful)

    2 - Mics in the hall (VSL and 'other' sound pretty comparable in terms of washiness in the space)

    3 - A mix of mics. Can MIR accomplish something to this effect? I'm not sure if wet/dry fading really does...?

    Now the big deal to me is this...

    4 - Horns & Bones. Listen to the way the room responds to the amount of power coming from the instruments. It's gorgeous to me. I'm in love. lol I am EXTREMELY invested in getting comparable results from VSL. The amount of flexibility alone in the way VSL's instruments are designed would make this invaluable to me. I still think having multiple libraries has its uses. But in this case, I feel it's the one thing I desire most from VSL that I can't get. I fully admit it could be my lack of know-how. But I'm still lacking demos from others that accomplish this. I'm wondering what can be done. That certainly isn't a criticism though. VSL is brilliant. I'm being very picky about a reverb issue, not the samples.

     

    Thanks,

    Sean


  • @scoredfilms said:

    [...] 3 - A mix of mics. Can MIR accomplish something to this effect? I'm not sure if wet/dry fading really does...?

    [...]

    The "Dry / Wet"-mix in MIR Pro should be seen as the amount of close-microphones mixed with the signal derived from actual main microphone array. This is how it's done by most recording engineers dealing with real orchestras.

    The "Dry" signal in MIR is NOT the plain input. It's the readily positioned and pre-processed direct signal component like it would appear from any source in a natural acoustic environment.

    The main difference (and a big advantage MIR has over a real recording setup) is that the runtime delay between the direct signals recorded by the close microphones and the main array is completely compensated by MIR, thus avoiding any phasing issues which might otherwise occur.
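
    For the curious, the textbook version of that compensation looks like the sketch below. This is not MIR's implementation, just the standard time-of-flight alignment the post describes, with the usual assumed constants:

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

    def align_close_mic(close_mic, source_to_array_m, sr=44100):
        """Delay the close-mic ("dry") signal by the source-to-main-array
        time of flight so it lines up with the main-array ("wet") signal;
        mixing them unaligned causes comb filtering / phasing."""
        delay_samples = int(round(source_to_array_m / SPEED_OF_SOUND * sr))
        return np.concatenate([np.zeros(delay_samples), close_mic])
    ```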

    HTH,


    /Dietz - Vienna Symphonic Library
  • Dietz,

     

    Thanks, gtk. On a personal note to your genius work on MIR... I want you to know I just messaged someone the following:

    "Fyi, I posted on the forum after hearing that exact mix from Dietz (the Duke Ellington). I could not believe how perfect it was. It truly is brilliant. I also downloaded the WAV file and it sits on my hard drive in it's own spot. It has been there for a while and it isn't moving. I still go to it sometimes. That's how pathetic I am. It really is brilliant."

     

    I may have a couple instruments I'm picky about with verb. But believe me, I recognize how remarkable MIR is. :)

     

    -Sean


  • @scoredfilms said:

    ...It places a dry recording on top of a room, or on the side of it. But I feel like the instrument is never truly "in" that sonic space. It doesn't have the "whole" sound you get from a live recording.

    It is true that real recorded instruments in real rooms come across differently than mixes made with samples. With real recordings we have time delays between the microphones, which give us this nice room and "spacy" feeling. Especially recordings with fewer microphones, recorded in A/B technique, can lead to such a nice sense of space.

    With samples we pan the signals from left to right, which means that we mainly have a volume difference between left and right. Even if we have true stereo reverbs (which produce different reverb signatures for the right and the left channel), it is ultimately only a simulation of reality.
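
    That level-only panning is easy to see in code. A minimal constant-power pan sketch (the pan law is a common convention, not specific to any VSL product):

    ```python
    import numpy as np

    def constant_power_pan(mono, pan):
        """pan in [-1, 1]: -1 = hard left, +1 = hard right.  Only the level
        differs between the channels; there is no inter-channel time delay,
        which is exactly what a real spaced-pair (A/B) recording adds."""
        theta = (pan + 1.0) * np.pi / 4.0      # map pan to [0, pi/2]
        return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])
    ```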

    But there is a trick: you can get some of this airy and roomy feeling of a real recording session by choosing different depths for different instrument sections. You can even overdo these different depths a bit, no problem. This trick is a simulation as well, but it can lead to a more transparent and more interesting mix.

    Listen to this example and observe the different depths (close and far instruments). The whole mix appears roomy and airy even though there are only those mentioned pannings, doesn't it?

    Could be that you are looking for this "sound"...?

    If yes, then your problem is a matter of depth... and not a matter of "hall", as you put it above.

    Beat


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
    Of course convolution reverberation only provides a model of a specific acoustic space. Like any model, it's a simplification of the real world. Convolution assumes a linear, time-invariant 'world', which it can then model quite accurately.
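
    That linearity is exactly why a pure convolution reverb cannot respond differently to loud and soft playing, as Sean asked about. A quick sketch makes it concrete (the signals are random stand-ins):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)    # stand-in dry signal
    h = rng.standard_normal(200)     # stand-in impulse response

    # Scaling the input just scales the output: the reverb's character
    # cannot change with level under a linear, time-invariant model.
    assert np.allclose(np.convolve(10 * x, h), 10 * np.convolve(x, h))
    ```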

    But in the real world all kinds of non-linearities creep in; the way the materials of the room respond to different levels of sound is the most obvious one that would explain the differences between lower- and higher-level sounds. There's also apparently a huge challenge in getting impulse responses that have the best (greatest) signal-to-noise characteristic. Any residual noise left in the impulse will be reintroduced when the model is excited by a sound. I wonder if that might also explain why a louder sound brings up a different-sounding reverberant response?

    Either of these (a non-linear space treated by a model assuming linearity, or the non-linearity of the impulse measuring system itself) might account for the differences Sean is demonstrating.

    Beat's suggestion of mixing the best of both worlds is a good one and reminds us, again, that our ears are the final arbiter of quality, not slavish adherence to a single idea just because it's theoretically better or, worse yet, the hip thing of the moment.


    A nice example you provided, Sean. I tried to match it as closely as possible with 'artificial' reverb. As there obviously is no dry example for the brass, I had to use samples to recreate it. It's not as nice, but don't let that distract you. It's about the reverb, after all.

    That's as close as I got in reasonable time:

    http://goo.gl/GFTVP1

    The original is a bit wider, which in hindsight I should have matched more closely in the reverb tail. Here's a quick-and-dirty after-the-fact solution (I widened the mix of the audio file):

     

    http://goo.gl/PIj9pU


  • @Another User said:

    There's also apparently a huge challenge in getting impulse responses that have the best (greatest) signal-to-noise characteristic. Any residual noise left in the impulse will be reintroduced when the model is excited by a sound.

    That's true, but with some effort we are able to capture IRs with a signal-to-noise ratio better than the dynamic range covered by most average A/D converters.
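
    The usual route to such high-SNR captures is the exponential sine-sweep method (Farina). A rough sketch with arbitrary band and length parameters follows; this is a textbook illustration, certainly not a description of VSL's actual capture chain:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def exp_sweep(f0=20.0, f1=20000.0, T=10.0, sr=48000):
        """Exponential sine sweep plus its inverse filter.  Convolving the
        recorded room response with the inverse filter yields the IR, with
        noise and harmonic distortion pushed away from the linear response."""
        t = np.arange(int(T * sr)) / sr
        R = np.log(f1 / f0)
        sweep = np.sin(2 * np.pi * f0 * T / R * (np.exp(t * R / T) - 1.0))
        inverse = sweep[::-1] * np.exp(-t * R / T)   # amplitude tilt
        return sweep, inverse

    # sweep, inverse = exp_sweep()
    # Play `sweep` in the room, record the response, then:
    # room_ir = fftconvolve(recorded_response, inverse)
    ```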

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Dominique said:

    A nice example you provided, Sean. I tried to match it as closely as possible with 'artificial' reverb. As there obviously is no dry example for the brass, I had to use samples to recreate it. It's not as nice, but don't let that distract you. It's about the reverb, after all.

     

    Dominique,

    Thanks, that's a great example for comparison. I agree that this isn't a samples issue but a reverb thing. There are some sample differences, of course. But both are great and equally usable. My example has less high end and a bit more low end in it. So I'm keeping in mind to ignore that as well.

    The biggest thing I noticed was that the early reflections in the room seem to have a lot of excitement. The entire tail sounds like how I think any verb would 'continue' the sound. But at the very beginning, I'm listening to the way the low and high seem to interact with the space. There is a vibrance there that I feel is missing in the dry-to-verb example.

    I'm sure it's possible to emulate. But until I can peg down what it is, it's hard to talk about how. I wonder if some kind of processing needs to be done on the early reflections. Maybe even based on the instrument. Again, call me crazy. I'm sure Dietz and Beat think I'm nuts! lol That's just what my ears tell me. Sometimes translating ears into informed knowledge and then into language is hard.

     

    Beat, for the record... I love listening to all of your examples. 😊 I'm still not convinced it's a depth issue.

    -Sean