Vienna Symphonic Library Forum
Forum Statistics

182,331 users have contributed to 42,219 threads and 254,756 posts.

In the past 24 hours, we have 3 new thread(s), 9 new post(s) and 51 new user(s).

  • MirPro3D, I wanna try immersive sound!

    Alright...  I'm trying to understand the general gist of how MirPro3D would be used.  I want to bathe in 3D sound.

    But here's the thing: I don't have a big physical surround setup.  If I want to hear MirPro3D in all its glory, I will need to use binaural monitoring.  I saw what you said about getting dearVR Monitor, which I will look into, though it's not cheap at $250.  There is also another product for $150, which is also not cheap, but less so: https://www.noisemakers.fr/product/binauralizer-studio/

    Too bad MirPro3D couldn't have a stereo binaural monitoring mode (hint hint).  But anyway, I'm still a little confused about what I need to do in order to set everything up so that I can compose at home with headphones and hear full 3D sound from MirPro3D.

    Please correct me or validate me... I assume I would need to use a surround format in my DAW that supports the Z axis somehow.  Digital Performer has 5.1, 7.1 and 10.2.  It doesn't have Atmos yet, I don't think, or other in-between formats.  So I *THINK* what I need to do is set up DP as 10.2... and hopefully MirPro3D will recognize that format and provide appropriate Output Formats to match it.  If not, I guess I will have to learn about the stuff mentioned below in order to create an appropriate Output Format.  Right?  Then once it's coming into DP in that format, I would use one of those binaural monitoring plugins on my master out to convert it back down from 10.2 to binaural stereo and hear the lovely 3D sound in my headphones... presuming that plugin also supports 10.2.

    Am I understanding that process correctly?

    I think I understand pretty well how 3rd order ambisonics essentially provides 16 virtual loudspeakers around the sphere... and that all the magic happening in MirPro3D is handled through various ambisonics processing at that level of accuracy.  Fine so far.
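    As a sanity check on that "16" figure: full-sphere Ambisonics of order N carries (N + 1)² spherical-harmonic channels, so 3rd order indeed means 16. A tiny illustrative snippet (nothing here is MIR-specific):

```python
# Channel count for full-sphere Ambisonics: order N uses (N + 1)**2
# spherical-harmonic channels (1 for order 0, then 3 more for order 1, etc.).
def ambisonic_channels(order: int) -> int:
    return (order + 1) ** 2

for order in range(4):
    print(f"order {order}: {ambisonic_channels(order)} channels")
# order 3 yields 16 channels, matching the 16 meters you see on a
# 3rd-order ambisonic track
```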

    The OutputEditor is going to take me a while to understand fully... if ever.  I get how you can select each capsule and see the 3D picture showing the virtual mic's spatial response.  I got a little lost trying to understand what the "Ambisonics Order Blend" does exactly.  Also, the virtual loudspeakers that are there: what is the point of defining those if the final output has to get back to whatever the DAW is set up for?

    I get how MirPro3D picks up the DAW's current surround setup, and uses that for how it will decode all the ambisonics stuff back to classic multi-channel surround combinations.  But the virtual loudspeaker section has more loudspeakers than there are actual outputs... so lost again...  I'm also unsure what the coefficients do in any of this, or how or why I might want to mess around with creating new ones.

    But I can see how the virtual loudspeakers can be positioned above ear level pointing down at angles, etc... to create a fully 3D listening environment... but how that translates to my DAW's surround setup...or how I might want to tweak any of those things...I need a better explanation.


  • Logic Pro 10.7.4 Dolby Atmos plugin https://support.apple.com/guide/logicpro/set-up-your-project-for-spatial-audio-mixing-lgcpbc1e7157/mac You have Logic, right?

  • just reading about that now...  That may be what I have to do until MOTU can get up to speed with Atmos... or at least I can export bounced stems from DP over to Logic Pro for mixing into Atmos and binaural monitoring...


  • Right. I’m driving Logic from Dorico over IAC.

  • last edited

    I'll try to sort it out a bit. 8-)

    Just to make one thing very clear: Dolby (the company) has once again managed to make one of their proprietary products synonymous with a certain way to handle audio. People were constantly talking about "Dolby Surround" when they simply meant "multichannel", and they say "Atmos" when they are just talking about 3D audio. But there are quite a few "best practice" methods to deal with 3D audio - open source or closed. Atmos is only one of the latter, and quite frankly, it's not even the best choice for making music (in my very personal opinion).

    3D audio is basically very simple to achieve and straightforward to handle. A tried-and-tested setup is a so-called 5.1.4 array, which is just the typical circular left/center/right/left surround/right surround speakers/(LFE) in the lower plane, and a more or less identical circle somewhere 30° to 45° above the listener's head (minus the center and of course minus the LFE, which is useless for music in 99% of all cases anyway). The beauty of this format is that you can actually record in it (e.g. Synchron Instruments!), and that you don't need any overly fancy devices to create it from individually recorded tracks either. It's just routing and/or panning between speakers. (Forget about Dolby's moving "Objects" for now, they just cause trouble in our little world of music creation.) And room, of course. 😉

    Enter "Binauralisation": A setup like the one described above can be virtually reproduced on conventional stereo headphones to a certain degree, using clever psycho-acoustic tricks and something called a "head-related transfer function" (HRTF). This "model" of a human head is the decisive part: The better it matches your physical appearance, the more convincing the perceived effect will be. Implementing a "binaural encoder" in your DAW couldn't be simpler: You will most likely have a "3D Mix Bus" created somewhere in your project, like you would use a "Stereo Bus". Here you will insert the binauralisation processor (i.e. a plug-in), which will then output the virtual 3D mix (for headphones) as described above. Maybe you will find the possibility to load a personalised HRTF (e.g. Genelec's "Aural ID"), or special linearisation EQs for certain headphone models (e.g. Dear VR Monitor), but that's about it.

    Sidenote: The available binauralisation plug-ins mostly differ in the quality of their "generalised" HRTF. We at VSL just happen to like the one by Dear VR Monitor a lot, but there's nothing wrong with preferring the one that comes with your DAW, or freeware offerings, which come mostly from an academic background.

    ... now you just have to route this signal to your cans, and you will listen to your mix in 3D! 😊 BTW: This of course is also valid for plain surround or even stereo mixes. Binauralisation first and foremost tries to get rid of the dreaded "in-head-imaging", and of course could be useful for trusty old stereo, too.
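    For the curious, the signal flow described above can be sketched in a few lines: each (virtual) speaker feed is convolved with the left-ear and right-ear impulse response (HRIR) measured for that speaker's direction, and the results are summed into a stereo headphone signal. This is only a conceptual sketch with placeholder HRIR data, not how any particular plug-in is actually implemented:

```python
import numpy as np

# Conceptual binauralisation: convolve every virtual-speaker feed with the
# HRIRs (head-related impulse responses) for that direction, sum per ear.
# The HRIR arrays passed in are placeholders, not a real measured HRTF set.
def binauralize(speaker_feeds, hrirs_left, hrirs_right):
    """speaker_feeds / hrirs_*: lists of 1-D numpy arrays, one per speaker."""
    length = max(len(f) + max(len(hl), len(hr)) - 1
                 for f, hl, hr in zip(speaker_feeds, hrirs_left, hrirs_right))
    left = np.zeros(length)
    right = np.zeros(length)
    for feed, hl, hr in zip(speaker_feeds, hrirs_left, hrirs_right):
        conv_l = np.convolve(feed, hl)   # this speaker as heard by the left ear
        conv_r = np.convolve(feed, hr)   # ... and by the right ear
        left[:len(conv_l)] += conv_l
        right[:len(conv_r)] += conv_r
    return np.stack([left, right])       # shape (2, length): headphone stereo
```

    Real binauralisers differ mainly in the quality of their HRTF data and in efficiency (they use fast convolution), but the routing idea is the same.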

    A final hint: Be aware that you have to mark any mix you print _with_ that processing as "Headphones Only" (or something like that), because it will sound pretty strange on speakers. *yikes*

    Oh, and BTW: As you will understand now, it makes little sense to have binauralisation available _within_ MIR 3D directly, when it is your final mix that has to go binaural. 😉

    Enjoy!


    /Dietz - Vienna Symphonic Library
  • PS: VSL's support team is working on a series of tutorials for MIR 3D (videos as well as exemplary DAW projects) that will become available by and by during the next weeks and months.


    /Dietz - Vienna Symphonic Library
  • Thanks Dietz for a detailed reply, it's much appreciated.  I think MirPro3D is a remarkable update and I can't wait to hear immersive sound.  MirPro3D is really the ultimate 3D panner plugin!

    I have been wanting to set up surround in my home studio, not because I need to ship surround product.  I am just a humble hobbyist; I don't have to ship ANY product.  It's only about pleasing my own senses, really.  I was simply going to add two rear speakers to my humble home studio so that I could at least have some 2D sound.

    Alright, I am getting an Atmos Sonos system in the living room, so maybe playing something back on my TV in the living room with full Atmos could be enjoyable eventually too, to impress some friends and family.  Or perhaps eventually ship some kind of encoded MP4 that friends and family could play back on their home Sonos systems (or through Apple AirPods).  Like it or not, that final output format is likely going to be Dolby Atmos.

    I think the vast majority of hobbyists will not even have 5.1 monitoring systems in their home studios, much less 5.1.4.  Even many working composer pros will simply not have actual 3D speaker configurations in their home studios.  It will have to be binaural monitoring for us all the way if we want to hear an orchestra played back with 3D reverberation from MirPro3D.

    In some ways it would be hypothetically superior to translate directly from 3rd order ambisonics to binaural encoding, with full spherical accuracy... if you know what I mean.  Not to mention that my DAW, Digital Performer, does not currently have any mixing formats that really support 3D sound.  It only has 5.1, 6.1, 7.1 and 10.2, whatever that is.  They are behind the times, it seems.  The only reason Logic Pro comes up is because it does support 7.1.2 (or maybe 7.1.4?) as a mixing format; I'm not sure right now about 5.1.4, I will have to look into that.  But for myself I will never have 5.1.4 speakers in my studio.  At best I will have 4 speakers, and many times not even that.

    But anyway, I also now realize that it's pointless to bring binaural back to the DAW from MirPro3D if you intend to do any more mixing with it, as I think any further processing of those binaural tracks would probably destroy the binaural encoding itself...  So maybe that was a dumb idea after all.

    Ideally we could have the entire DAW in ambisonics mode from end to end... and then convert to the final output format from that, whether that be Atmos, some specific speaker configuration, or binaural.  That way the full spherical information would be retained all the way through the mixing process and carried into the final output encoding.

    But it's also pointless now in 2022 to hypothetically consider DAWs being full ambisonics all the way through the mixing chain like that.  Maybe in 10-20 years, eh, but not now.  So I can see why, from a practical standpoint, we have to mix in some kind of non-encoded multi-channel format that a DAW can make sense of, so that plugins and panning tools work properly, etc., and then re-encode it back to either binaural or Atmos... (or ship some other format as requested by a client).  If you think about it, even a direct ambisonics-to-Atmos encoder would be a much better thing to ship if the client ultimately wants Atmos.  The selection of a speaker system is determined at playback time, as I understand Atmos.  But the problem is that our DAWs cannot handle mixing ambisonics, nor Atmos nor binaural... they fundamentally need to be mixing in one of the folded-down speaker configurations.

    So anyway, my DAW doesn't support any 3D speaker configurations yet.  Maybe 10.2 does?  I'm not sure, because my current computer doesn't have enough actual audio interface outputs, and DP won't even let me set up a mix at 10.2 unless I have enough physical audio outputs to represent it, which I currently don't.  So I can't even try it.  I am going to try Logic Pro later though, because with the Atmos addition they also made sure it can support 7.1.2 or 7.1.4 mixdown, even without actual speakers connected in the studio.  Then there's a built-in plugin that can re-encode it on the master bus back to binaural so that I can monitor it in my humble home studio.

    I actually don't know of any other DAW that provides the capability to encode to binaural on the master bus without a third-party plugin... I will look at Cubase 12 tomorrow to see what it can do in terms of mixing formats and maybe a binaural output plugin.

    Anyway, I was also curious and a bit confused about MirPro3D's Ambisonics Order Blend control and what that does exactly, and how the virtual loudspeakers are used in MirPro3D while translating back to an actual mixable output format such as 7.1 or 10.2, the best that are currently provided by Digital Performer.


  • Even "planar" surround is much more fun than stereo, believe me. :-) 

    The format 10.2 you mentioned is most likely just that: 10 speakers in the circle, and two LFEs. I suggest that you start with a less demanding setup like 7.1 and take it from there. As long as you instantiate MIR 3D in this channel format, the engine should select a proper Output Format all by itself. If you want to test different approaches to this format then there are alternatives in the pull-down in the Main Mic Selection panel on the right. Don't feel obliged to "roll your own" Output Format in MIR 3D's respective editor. As long as you don't really know what you're actually longing for, chances are that you're overcomplicating things quickly. As soon as you know what you miss, you're ready to work on the finer details here!

    3D is great, but you won't miss the top layer speakers as long as you haven't gotten used to them. ;-)

    HTH,


    /Dietz - Vienna Symphonic Library
  • I'm going to be hitting on MOTU to update their software; which mixing formats in particular should I try to make sure they support to best match the built-in output formats of MirPro3D?  I presume 5.1.4 since you mentioned it, or perhaps 7.1.4 since Logic Pro is using that one and MirPro3D already has some preset output formats in 5.1.4, 7.1.2 and 7.1.4.   I guess if I ultimately export a Dolby Atmos creation to play back on my home Sonos system, it's probably 5.1.2 or 7.1.2 that would best represent what is actually going to happen at playback time through a typical home system, which has a Sonos bar and sub in front and two surround speakers in the back.  The Sonos bar has a few speakers angled up, I believe, to mimic having front speakers up high... something like that.  But in the rear, most people, including myself, if anything just have a couple of rear speakers, oftentimes in the ceiling.  I do think that more and more people in the future are going to listen to tunes on Spotify using Apple AirPods, which will be able to translate Atmos into the AirPods' spatial thing.  I think using Logic Pro's 7.1.4 or 7.1.2 format probably makes a lot of sense, regardless of the fact that I will have to monitor at home using binaural headphones while producing it.

    Another question: what is the reason for the stereo and planar downmix output formats?


  • last edited

    @Dewdman42 said:

    [...] Another question: what is the reason for the stereo and planar downmix output formats?

    I'm not sure that I really get the question, sorry ...?


    /Dietz - Vienna Symphonic Library
  • Why do those downmix output formats exist, and when should we use them?


  • I just read up a bit about what Cubase 12 can do.  Holy crap, that must be the 3D king of DAWs... its support for 3D audio is very extensive, and it's going to take me quite a long time to figure out exactly what it can do.  But among other things, besides supporting all the mixing formats we have talked about so far like 5.1.4, 7.1.2, 7.1.4, etc., it also supports mixing directly in ambisonics!!!  It will take me quite a while to figure out how it all works before I can say any more, but I did already try to at least insert the MirPro3D plugin onto a 3rd order ambisonic track in Cubase.  The plugin showed 16 meters.  Fine so far!  I opened up MirPro3D and there is an output format called RAW 3rd order ambisonics.  I selected it and the mic array became some kind of non-mic sphere... it skips any mic virtualization, I guess, and just passes the raw ambisonics from any direction directly back out to Cubase!

    Cubase also provides numerous ways to monitor including built in binaural encoding.

    Alright, conceptually that looks interesting, but like I said, I have a lot more to figure out before it's going to be working right.  Still, it looks to me like I could literally use Cubase's binaural encoding to basically skip any virtual speaker fold-downs and hear exactly the ambisonic representation of MirPro3D's rooms in binaural headphones, as best as they are capable of translating.  This may or may not cause my CPUs to explode, so we shall see...

    But anyway, what I can say is that between Cubase, Logic Pro and DP, Cubase has WAY more stuff related to 3D audio, Dolby Atmos, binaural monitoring and even ambisonic mixing.  You can even mix directly in Dolby Atmos, dealing with Atmos objects instead of conceptual virtual loudspeakers.  I think I would have to take a class to understand how all that works, though... so we'll see if I get anywhere with this, but I just wanted to report what I found.


  • last edited

    @Dewdman42 said:

    Why do those downmix output formats exist, and when should we use them?

    I'm still not sure I really understand the question, but I will try to answer it to the best of my abilities:

    I found out that even though full "spherical" decoding from Higher Order Ambisonics is extremely realistic in 3D, it can also be seen as some kind of glorified "multi-mic" recording. The ruthless sound engineer in my head thought, "Well, we do great-sounding downmixes from real surround and 3D recordings, why not from spherical decodings, too?" ... and after a few seriously failed attempts 8-) I developed several approaches that make a full 7.(1*).4 sphere sound great in simple 5.1 or even stereo, too. The trick is to use the simplistic, built-in Matrix Mixer in MIR 3D's Output Format Editor.
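    To make the "Matrix Mixer" idea concrete: such a downmix is just a gain matrix applied to the multichannel signal, one weighted sum per output channel. The gain values below are invented for illustration and are not MIR 3D's actual settings:

```python
import numpy as np

# A matrix-mixer downmix in miniature: rows are output channels (stereo L/R),
# columns are a simplified 7-channel input (L, R, C, Ls, Rs, TopL, TopR).
# All gain values are made up for illustration -- not MIR 3D's coefficients.
downmix_matrix = np.array([
    [1.0, 0.0, 0.707, 0.707, 0.0,   0.5, 0.0],  # stereo left
    [0.0, 1.0, 0.707, 0.0,   0.707, 0.0, 0.5],  # stereo right
])

def apply_matrix(matrix, multichannel):
    """multichannel: (input_channels, samples) -> (output_channels, samples)."""
    return matrix @ multichannel
```

    With these example numbers, a center-only signal lands equally in both outputs at 0.707 (about -3 dB), the classic stereo downmix convention.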

    ... these settings will give you very different results than our "old-school" capsule-based decodings. The most obvious difference is that you can make good use of stage positions on the hard left and right sides of the Main Mic without sounding strange (even positions in the back will work surprisingly well, sometimes). Their downside is that the frontal center position seems to lack a bit of "grip" in these setups; that's why I often mixed in an additional capsule array for the dry signals as support.

    EDIT: Funny sidenote: Our ingenious Ambisonics development partners from IEM at the University of Graz were pretty sure that no one in their academic circles had done this "downmixing" before - this concept lacks any kind of scientific purism. But you know - if it sounds right, it is right. 😄

    Does this answer your questions ...?

    *) PS: The brackets around the LFE channel in formats like 7.(1).2 stand for the simple fact that the format takes care of the routing of the LFE without actually using it. This makes the integration in the DAW a bit easier than the seemingly more "logical" 7.0.2 in some cases.


    /Dietz - Vienna Symphonic Library
  • Wow, cool! Yeah, I think the word “downmix” was causing me to think it referred to a situation where *I* would need to downmix something; that’s why I was asking. But I see now you were simply naming them that way to point out that you figured out a cool mixing technique which just happens to sound really good. If I am understanding correctly, their function and purpose are the same as the ones without “downmix” in the name, just with a different sonic result, using your techniques to achieve stereo or surround, etc. I will explore that later when I have an actual way to hear 3D monitoring somehow.

  • Something else I want to say that I was pondering last night. The prospect of being able to monitor the full spherical beauty of 3rd order ambisonics through binaural headphones is intriguing to me. It might be the closest we can get to replicating the experience of actually being there in the room, standing at the mic position, listening.

    However, that is often not the sound we want to hear or are accustomed to hearing on a recording, for films or otherwise. We are accustomed to hearing orchestras that were recorded in a room with some kind of mic array, and not usually any kind of binaural mic setup either, but rather with mic arrays that enhance stereo, or surround, or this or that thing, to create a different sound than what would be heard if you were standing there. Similar, but different.

    I think it’s intriguing to consider using the ambisonic output format to try to hear what may be the closest experience to actually being there; but that is probably unlikely to be the best desired sound for a recording. The various mic array options in MirPro let us get those different impressions of the room that translate well for recordings and for how we are accustomed to hearing them. The downmix versions just add a bit more sound engineering than the mic array alone, to enhance the final sonic image through mixing techniques, compliments of Dietz and his years of engineering experience, translating the mic arrays even further through the matrix. I totally get it. I will have to explore all of these later and read carefully the notes Dietz added to each one.

  • last edited

    @Another User said:

    I will explore that later when I have an actual way to hear 3D monitoring somehow

    Just to avoid possible misunderstandings: You don't need anything fancy for a stereo downmix of a spherical decoding from HOA. It's just that: stereo. 😊

    Enjoy!


    /Dietz - Vienna Symphonic Library
  • last edited

    @Dewdman42 said:

    [...] I totally get it. I will have to explore all of these later and read carefully the notes Dietz added to each one.

    Thanks for the friendly words! You're definitely on the right track. 

    Just to keep things in perspective: I may know a thing or two about audio engineering and music mixing, that's true. But please keep in mind that HOA and its intricacies are themselves an area where even I don't have "many years" of experience, just a little over two. MIR 3D as a production-ready application is even younger than that. So we're all here really just scratching the surface so far.

    The best is yet to come, I'm sure. 😊


    /Dietz - Vienna Symphonic Library
  • last edited

    @Dietz said:

    You don't need anything fancy for a stereo downmix of a spherical decoding from HOA. It's just that: stereo. 😊

    One strange question occurred to me this morning.  So basically you're saying that the "sphere" of ambisonic information is there in MirPro3D, and we are pretty much always folding audio down to some number of outputs... could be stereo or surround or even 7.1.2 immersive...  But it's always being folded down; it's just HOW it's folded down that differs in the virtual mic array techniques employed.

    But here's the new question...  While MirPro3D has encoded a 360-degree sphere of information for any XYZ point in space where the mic array is placed... when we eventually play it back on some system that supports, say, 5.1.2, we don't have any speakers to represent the bottom half of the sphere.  We have ear level, and we have above-the-head level... but unless I'm missing something, we don't have anything to play back sounds below us.

    How does MirPro3D translate sounds from below the mic array to playback systems, which really are only half a sphere if you think about it?  If I'm understanding that right, which I very well might not be.

    I guess one could assume that it gets folded into the ear-level speakers in some way, but it makes an interesting case for not putting the mic array too high in the air, as that would squash a large amount of sonic information into the ear level rather than below the mics as it actually is.  Am I making sense?


  • Very good question! This is where the so-called "coefficients" come into play, i.e. the data we refer to and load in the lower half of the Output Format Editor. Call them "virtual speakers", if you like, which can, but don't have to, relate to physical speakers.

    To avoid even more complexity in an application that might be overwhelming already, we decided to leave at least _that_ part of the equation to the specialists among our academic development partners at IEM Graz. They offer a fantastic, Ambisonics-focused software suite (freeware!) that contains the tool we use to create our sets of coefficients.

    ... explaining the whole concept is too much for a little forum posting. Please continue reading here:

    -> https://plugins.iem.at/docs/allradecoder/

    All of this is ongoing, recent development, so we might see/hear even better "coefficients" in the not-so-distant future. :-)
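    For readers who just want the gist before diving into the linked documentation: a set of "coefficients" can be thought of as a decoding matrix D with one row per (virtual) loudspeaker and one column per Ambisonics channel; the speaker feeds are simply D applied to the HOA signal. A toy sketch with random placeholder numbers, purely to show the shapes involved (real coefficient sets are carefully designed, which is exactly what the IEM tools do):

```python
import numpy as np

# Toy Ambisonics decoding: speaker_feeds = D @ B, where D is the coefficient
# (decoder) matrix and B holds the HOA channels over time. The numbers are
# random placeholders -- real decoders design D carefully per speaker layout.
order = 3
n_ambi = (order + 1) ** 2     # 16 channels for 3rd order
n_speakers = 9                # e.g. a 5.0.4 layout of virtual speakers

rng = np.random.default_rng(42)
D = rng.standard_normal((n_speakers, n_ambi))   # placeholder coefficients
B = rng.standard_normal((n_ambi, 64))           # 64 samples of HOA audio

speaker_feeds = D @ B
print(speaker_feeds.shape)    # one feed per virtual loudspeaker
```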

    HTH,


    /Dietz - Vienna Symphonic Library
  • Thanks so much, Dietz & Dewdman, for this very instructive conversation. The fog has cleared. You have addressed the struggles I was having (in a previous thread) until I realized the Dolby/Apple Logic side of the equation was causing all the confusion in converting legacy MIR projects to the new 3D. Like Dewdman, I was anxious to “hear” the new MIR itself, without having to assign near, mid & far in the Atmos plugin… I have no clue what that would do to the MIR audio. The coming tutorial videos will no doubt make things even clearer. But for now, I have opted for the Dear VR Monitor solution, which seems to be able to render MIR Pro 3D’s environment as intended. Thanks again for your thoroughness.

     

    PS. And Dewdman, I have appreciated & made good use of your VSL AU3 Logic templates. Many thanks.