Vienna Symphonic Library Forum

  • Question(s) about MIR Pro

    If anyone (Dietz) could confirm that I am on the right track regarding MIR Pro, I would be grateful.

    An impulse is, in effect, simulating the frequency range of a particular instrument. Those impulses were fired from thousands of positions around a venue, and the program calculates the signal received at mic capsule (a) and at mic capsule (b) and sums the two, thus giving the perception of a virtual mic capsule position.


  • Hi and welcome, JBuck,

    does this short summary of technologies used by the MIR engine help ...?

    -> https://www.vsl.info/en/manuals/mir-pro/think-mir#convolution

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • Thanks for the reply.

    From the article:

    "room" is all about three dimensions. Any signal source will sound different from any possible position within one and the same room; and it will sound different again from any of the possible listener's positions.

    I now understand that the impulses are combined mathematically through the convolution concept to provide omni source locations. But what system combines the responses that provide omni response locations?

    Cheers, 



    @JBuck said:

    [...] But what system combines the responses that provide omni response locations

    Hmmmm ... I'm not sure that I understand the question.

    Let me put it this way: MIR is all about position and direction. Each source position in a MIR Venue is made up of 8 individual impulse responses, derived from a directional loudspeaker source (... directional in the sense of "sections of 60°, plus one to the floor and one to the ceiling"). Each signal source that feeds these IRs through the convolution process relies on its "Instrument Profile", which defines what the frequency dispersion of the source will look like in any of the 8 aforementioned directions. The combination of these convolved signals is achieved by a simple addition (plus a low-frequency compensation filter).
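
    To illustrate the idea, here is a minimal conceptual sketch in Python (not MIR's actual code; the names render_source, profile_firs and lf_comp are made up for the example):

    [code]
    from scipy.signal import fftconvolve

    def render_source(dry, irs, profile_firs, lf_comp):
        """Render one source position: 8 directional convolutions, summed.

        dry          -- mono dry instrument signal
        irs          -- 8 impulse responses (six 60-degree sectors,
                        plus floor and ceiling), all the same length
        profile_firs -- 8 equal-length FIR filters from the "Instrument
                        Profile" (frequency dispersion per direction)
        lf_comp      -- low-frequency compensation filter
        """
        # Shape the dry signal per direction, convolve with that
        # direction's IR, and sum all 8 results.
        wet = sum(fftconvolve(fftconvolve(dry, fir), ir)
                  for fir, ir in zip(profile_firs, irs))
        return fftconvolve(wet, lf_comp)  # simple addition + LF compensation
    [/code]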

    ... all of this gets represented by the virtual Ambisonics microphone capsules, depending on the settings chosen for their Output signal.


    /Dietz - Vienna Symphonic Library
  • Thanks for your reply.

    I understand how the instrument position can be simulated by adding the convolved signals. But I don't understand how the listener (virtual mic array) position is simulated.

    [url=https://ibb.co/PNLqfP2][img]https://i.ibb.co/Btp0SQh/1.png[/img][/url]


    Any area that shows the yellowish overlay is called a "HotSpot". These are the areas (plus some offset single spots) we covered when we sent the impulses into the room. Consequently, these are also the valid areas for signal sources (most likely instruments).

    The white spots show the positions we used for our Ambisonics microphones when we recorded the impulse responses. Usually the "microphone flowers" (which graphically represent the chosen number and patterns of virtual capsules) will sit directly on these spots. When you move them to different positions on the Venue Map, the Ambisonics decoding is changed according to the new relations between source and listener (read: the rotation and angles between the virtual microphone capsules will change). This is great for centring the stereo image of a stage. The IRs used in these cases stay the same, though, so there won't be different reflection patterns or the like; you will have to select different Main Mic / Secondary Mic positions to get those.
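
    As a rough illustration, here is a hedged sketch of that decoding step, assuming plain first-order horizontal B-format (W, X, Y); MIR's actual decoder works with more capsules and parameters, and the function name is made up:

    [code]
    import numpy as np

    def virtual_capsule(W, X, Y, azimuth_deg, pattern=0.5):
        """Decode one virtual capsule from first-order B-format signals.

        pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8.
        Re-aiming the capsule only changes these weights; the
        underlying impulse responses stay untouched.
        """
        az = np.deg2rad(azimuth_deg)
        return (pattern * np.sqrt(2.0) * W
                + (1.0 - pattern) * (np.cos(az) * X + np.sin(az) * Y))
    [/code]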

    ... maybe just tell me what you're trying to achieve, and I'll try to outline possible solutions.


    /Dietz - Vienna Symphonic Library
  • Thanks again for the help.

    All I am trying to achieve, at this point, is an understanding of how we can simply place the mic array (flower) anywhere in the venue when these mic positions are not actually part of the program. You mentioned decoding; that must be it, I suppose. I will not dare to ask you about the ins and outs of Ambisonics decoding. I might hit the Ambisonics site for that.

    Cheers.


    No need to overcomplicate things. :-) For your use case you can simply think of that feature as some kind of glorified panning device that adjusts a myriad of parameters in the background as soon as you change the position of the virtual microphone. In other words: Don't expect "different sound" (frequency- and reflections-wise, that is), just "different perspective" and "different geometry".
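
    Continuing the hypothetical virtual_capsule sketch from above: repositioning or rotating the array just re-weights the same B-format signals, e.g.

    [code]
    # Same B-format stems, same IRs -- only the decode weights change:
    left  = virtual_capsule(W, X, Y, azimuth_deg=-30.0)
    right = virtual_capsule(W, X, Y, azimuth_deg=+30.0)
    [/code]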

    Sidenote: If you plan to use that feature extensively, it might be advisable to activate the option "Consider Microphone Offset" in the Dry Signal Handling section of MIR Pro's instrument panel on the right, for more predictable results.

    HTH,


    /Dietz - Vienna Symphonic Library

    @Dietz said:

    Don't expect "different sound" (frequency- and reflections-wise, that is), just "different perspective" and "different geometry".

    Right, so MIR is simulating the listener perspective without actually simulating how the listener would hear the source: early reflections, left ear, right ear. If I were sitting hard up against the right wall of the venue in the 15th row, I would definitely be receiving more direct sound waves from the left side of the orchestra than from the right. But you are saying that MIR does not simulate this. OK, great. Thanks again for the info.

    Cheers.


    Exactly. Like I wrote before, this feature is mainly meant for fine-balancing the perceived width and/or balance of a stage setting. In cases where the mic position lies within the HotSpot area, rotating the mic array opens up interesting new perspectives within a hall, too. A good example is the "drummer's position" near the bass trap in the corner of Studio Weiler. :-)

    All the best,



    /Dietz - Vienna Symphonic Library