Vienna Symphonic Library Forum

  • "Never mix with headphones" What about 3D audio headphones?

    Anyone have any experience with mixing on 3D audio headphones?  Or at least checking their 9.1 mix for their VR gaming score?

    What's the scoop?  Worth a couple hundred bucks or is it just better to only check open ear?


  • (Sorry, personally I can't add any meaningful information, as the only "surround"-headphones I know are the ones my youngest son uses for computer games, and from an audio engineer's POV they are simply ridiculous - even though they were supposed to be top-of-the-line. Can't remember the brand, which might be a good thing. ;-) ...)


    /Dietz - Vienna Symphonic Library
  •

    @Dietz said:

     and from an audio engineer's POV they are simply ridiculous - even though they were supposed to be top-of-the-line. 

    Can you expand on this a little?  You mean the components aren't good, the sound is really colored and lumpy?  Or that you'd never mix an orchestra with the violas placed underneath your feet so it's not practical? haha


  • I think what Dietz wanted to tell you is that specific (and probably expensive) headphones for a surround mix were "a nice try" that did not succeed in the market. What seems to be becoming a much bigger topic is the headphone 3D mix on normal stereo headphones, in combination with specific software that makes such a mix possible.

    80% of all people in the world are listening to music on normal stereo headphones! Therefore it is more logical to find ways to make 3D surround accessible to those 80% without forcing them to buy specific surround headphones. One example is the software "Spatial Audio Designer", which lets you combine all kinds of formats from stereo to 22.2! It works DAW-independently, which means you could even use it with non-surround DAWs like Studio One or Ableton Live. (There are of course competitor products available ... but this is one of the best solutions.) http://www.newaudiotechnology.com/en/products/

    "Never mix with headphones" is old thinking. The SPL Phonitor has for years been a good example of a hardware monitoring solution for stereo headphone mixing that works for many people out there.

    Greetings Lars
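
    To make the idea concrete, here is a minimal Python sketch of what such binauralizing software does in principle. This is not how Spatial Audio Designer (or any other product) is actually implemented; the file names, the HRIR set and the 5.1 channel order are assumptions purely for illustration.

    # Minimal sketch: fold a 5.1 mix down to binaural stereo by convolving each
    # channel with a left/right head-related impulse response (HRIR) for its
    # virtual speaker position, then summing. File names and HRIRs are placeholders.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    CHANNELS = ["L", "R", "C", "LFE", "Ls", "Rs"]          # assumed 5.1 order

    def binaural_downmix(mix, hrirs):
        """mix: (samples, 6) array; hrirs: dict channel -> (ir_len, 2) array."""
        out_len = mix.shape[0] + max(h.shape[0] for h in hrirs.values()) - 1
        out = np.zeros((out_len, 2))
        for i, ch in enumerate(CHANNELS):
            for ear in (0, 1):                              # 0 = left, 1 = right
                y = fftconvolve(mix[:, i], hrirs[ch][:, ear])
                out[:len(y), ear] += y
        return out / np.max(np.abs(out))                    # crude peak normalize

    mix, sr = sf.read("surround_mix_5_1.wav")               # hypothetical files
    hrirs = {ch: sf.read(f"hrir_{ch}.wav")[0] for ch in CHANNELS}
    sf.write("headphone_check.wav", binaural_downmix(mix, hrirs), sr)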

  •

    @LAJ said:


    80% of all people in the world are listening to music on normal stereo headphones! Therefore it is more logical to find ways to make 3D surround accessible to those 80% without forcing them to buy specific surround headphones.


    "Never mix with headphones" is old thinking.
    Greetings Lars

    Yeah, and most of the other 20% are listening through their Echos, Alexas, or just their phone speakers. Of course you still have car stereos and smart TVs, but content going to TVs is usually mixed for surround sound.

    I deal with a lot of music libraries, and I've noticed that more of my music passes their quality-control standards when I mix the tracks with headphones. Many of my tracks mixed on monitors are rejected for "sound quality", which usually means they didn't like the mix.

    Since the quality-control personnel screen tracks using headphones, it all makes perfect sense.


  • If a mix sounds good on a properly designed studio monitoring system, it will sound fine on cans or buds, too. That's not necessarily true the other way round, though.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • My 2 cents:

    I recently moved to a small flat where I couldn't make any use of my beautiful Yamaha HS8s, as they need a certain distance from each other and from the walls.

    I did some research and ended up buying AKG 702s; the next step would have been some kind of room simulation for mixing with headphones.

    Result: I can compose/arrange with headphones, and I can sometimes EQ/compress single instruments, but whenever I have to mix and do stereo positioning, ear fatigue and headache eventually kick in.

    As I am not allowed loud monitoring levels in my place, but I am allowed "some" volume, I bought a pair of Genelec 8010s: they won't reproduce anything below 64 Hz, but they fit on a common desk and they are quite accurate. I can still use the headphones to cross-check the bottom end at any time. Personally, I wouldn't invest in headphone-related technology.

    Sincerely

    Francesco


  • I've got quite a few ideas on the topic due to my academic research interests. So here are a few of the best options ... 

    The best solution for immersive sound reproduction on headphones is recording your personal binaural impulse responses (PRIRs) of different loudspeaker positions by inserting miniature microphones inside your ears (e.g. DPA or soundprofessionals). These PRIRs can later be applied to any audio signal and thus allow the simulation of an unlimited number of speakers (depending on how many positions you recorded). I've developed a semi-automated system that does this, and it sounds 100% like listening without headphones - I also use these for academic experiments, BTW. Subjects usually can't tell the difference between speakers and headphones.
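
    For illustration, here is a rough sketch of one common way such an in-ear measurement can be turned into an impulse response: play an exponential sine sweep from each speaker, record it at the eardrums, and deconvolve with the sweep's inverse filter. The sweep method and its parameters are my assumption about the workflow; the post above doesn't say which test signal is used.

    # Rough sketch: derive a binaural impulse response from an in-ear recording
    # of an exponential sine sweep (ESS) played back from one loudspeaker.
    import numpy as np
    from scipy.signal import fftconvolve

    def ess(f1, f2, duration, sr):
        """Exponential sine sweep plus the inverse filter used for deconvolution."""
        t = np.arange(int(duration * sr)) / sr
        r = np.log(f2 / f1)
        sweep = np.sin(2 * np.pi * f1 * duration / r * (np.exp(t * r / duration) - 1))
        # time-reverse and apply an exponential envelope so the result comes out flat
        inverse = sweep[::-1] * np.exp(-t * r / duration)
        return sweep, inverse

    def deconvolve(in_ear_recording, inverse):
        """in_ear_recording: (samples, 2) capture of the sweep at both ears."""
        left = fftconvolve(in_ear_recording[:, 0], inverse)
        right = fftconvolve(in_ear_recording[:, 1], inverse)
        return np.stack([left, right], axis=1)   # the IR appears after the sweep tail

    sweep, inverse = ess(20, 20000, 10.0, 48000)
    # play 'sweep' from one speaker, record at both ears, then:
    # prir = deconvolve(recording, inverse)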

    A commercial product exploiting this paradigm (which was originally developed by the Italian researcher Angelo Farina) is the Smyth Realiser (either the A8 or the forthcoming A16). It does the same thing as described above, but will also allow you to personalize BRIRs of other people's ears to your own HRTF characteristics (A16 only).

    Further upcoming products make use of paradigms involving synthesized HRTFs. The Finnish developer IDA Audio (http://idaaudio.com) recently announced a partnership with Genelec. They use 3D scans of your pinna and torso (it also works with video captured on a smartphone, which is later subjected to photogrammetry) and run simulations that mimic BRIR recordings done in an anechoic environment. The advantage is that you can virtualize as many speakers as you wish - and place them in any kind of virtual room (this requires additional software, e.g. IRCAM Spat). The downside is that accuracy depends on the quality of the 3D scan and their simulation algorithm. Others, such as 3Dsoundlabs, go a similar route by utilizing photogrammetry with photos. Rather than simulating an HRTF, these solutions devise statistical models that rely on real-life measurements taken on a few hundred subjects. The larger the database, the better the model can accommodate your individual ear anatomy.

    THX recently announced an immersive audio platform (http://www.thx.com/blog/thx-announces-end-to-end-positional-audio-solution/), which appears to include HRTF-personalization functionality. The same goes for Yamaha with their ViReal platform - it's part of a wider Yamaha strategy that will also involve music production platforms, which is why Steinberg will announce VR audio functionality for Nuendo/Cubase at GDC 2018.

    Creative is also working on a consumer-oriented product simulating HRTFs, though it looks like their main target audience is gamers. As of yet little is known about that product, but it may be similar to 3Dsoundlabs' approach (and I would certainly favor the latter for professional applications).

    There are a few others in the pipeline, but the above-mentioned are already available (or will be in the very near future). Of course, the ultimate solution is to record PRIRs - whether in a studio room or in an anechoic room. The latter offers the potential for further customization, since it represents your raw HRTF - though only in theory, due to issues with reflections from the sound sources (speakers) and other flat surfaces that can affect a dry signal path. But rest assured - standard PRIRs (i.e. 32 or 64 channels) are good enough.

    Hope this was of some help!

    Best,

    Hans-Peter


  • A little off-topic:

    Oh - and if any of you are interested in recording immersive 3rd-order Ambisonics, I strongly recommend the Zylia ZM-1 microphone (http://www.zylia.co). I've used it in two opera recordings as the main mic (center of a Decca tree), and apart from a great sense of space, you gain the ability to alter the microphone's directivity and polar pattern during post-production. This allowed me, for example, to focus on the singers and remove the orchestra from the mix (with some post-processing in ADX Trax) - and it compared favorably to the closer spot microphones used in the second setup (16 channels, using some high-quality mics such as the U89, CMC6 and KM184).

    The reason I mention this: apart from recording live events and orchestras, the microphone is perfect for capturing impulse responses of a room, which can later be simulated on headphones by filtering the signal through your individual HRTF. Sounds great!
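
    As a small illustration of what "altering the microphone in post" means, here is a first-order sketch of steering a virtual capsule from B-format components. The ZM-1 itself delivers third order, and the function name and normalization are simplifications for illustration, not Zylia's actual processing (FuMa/SN3D gain conventions are glossed over).

    # Sketch: derive a virtual microphone from first-order B-format (W, X, Y, Z).
    import numpy as np

    def virtual_mic(W, X, Y, Z, azimuth, elevation, pattern=0.5):
        """Aim a virtual capsule at (azimuth, elevation) in radians.
        pattern: 0.0 = omni, 0.5 = cardioid, 1.0 = figure-of-eight."""
        dx = np.cos(elevation) * np.cos(azimuth)
        dy = np.cos(elevation) * np.sin(azimuth)
        dz = np.sin(elevation)
        return (1.0 - pattern) * W + pattern * (dx * X + dy * Y + dz * Z)

    # e.g. a cardioid aimed 20 degrees left and slightly upward, at the singers:
    # vocal_focus = virtual_mic(W, X, Y, Z, np.radians(20), np.radians(5))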


  •

    @Musicmaster said:

    [...] the microphone is perfect for capturing impulse responses of a room [...]

    ... which is also the reason why MIR has been based entirely on Ambisonics recordings and processing since the very beginning of its development. 😊


    /Dietz - Vienna Symphonic Library
  •

    @Dietz said:

    ... which is also the reason why MIR has been based entirely on Ambisonics recordings and processing since the very beginning of its development. :-)

    I know and love it - though it's a shame that MIR offers only traditional channel-based formats rather than the raw Ambisonics render. Perhaps a future version of MIR will come with SOFA support? If I may ask: What order were the IRs of MIR recorded in?

  •

    @Dietz said:

    ... which is also the reason why MIR has been based entirely on Ambisonics recordings and processing since the very beginning of its development. :-)

    I know and love it - though it's a shame that MIR offers only traditional channel-based formats rather than the raw Ambisonics render. Perhaps a future version of MIR will come with SOFA support? If I may ask: What order were the IRs of MIR recorded in?

    Some nerdy ramblings about this topic...

    (1) I would have guessed that Ambisonics is only used to represent multi-directional sound *sources*. Otherwise I don't see how the impulse responses would accurately capture the characteristics of a microphone's positioning and polar pattern. I would also guess that room reflections and spatial precision would get smeared if lower-order Ambisonics were used to represent microphones. On the other hand, it is possible to rotate the microphone positions in the software (see the small rotation sketch after this list)...

    (2) Also there is a "quadro" microphone option in MIR.  Is there any chance that may be oriented properly in 3-D so that the 4 channels can be converted into first-order ambisonics?  But the icon looks asymmetric and maybe suggests it's not.

    (3) One thing I haven't heard people mention is that headphones can be very misleading about the perception of dynamics - which in turn really distorts the perception of space and depth. I found that mixes I did on headphones tended to translate oddly on monitors - flat-sounding and too dry. Monitoring with speakers doesn't have the same issue translating to headphones. Perhaps it's something about the way a room adds subtle early reflections and diffusion that helps us perceive dynamics. I bet this problem would still exist on most 3D headphones, unless you use a very high-order Ambisonic format with personalized binaural rendering.
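
    For point (1), here is a tiny sketch of what rotating a first-order B-format scene looks like mathematically. It is purely illustrative and not MIR's implementation.

    # Rotate a first-order B-format scene around the vertical axis - the operation
    # behind "rotating the microphone positions" in software. W (omni) and Z
    # (height) are untouched by a purely horizontal rotation; flip the sign of the
    # angle to rotate the listener instead of the scene.
    import numpy as np

    def rotate_bformat_z(W, X, Y, Z, angle_rad):
        Xr = np.cos(angle_rad) * X - np.sin(angle_rad) * Y
        Yr = np.sin(angle_rad) * X + np.cos(angle_rad) * Y
        return W, Xr, Yr, Z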

    Cheers


  •

    @Another User said:

    If I may ask: What order were the IRs of MIR recorded in?

    MIR's recordings are based on tetrahedral microphones and get converted to B-format before implementation.


    /Dietz - Vienna Symphonic Library
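
    For readers curious what that conversion looks like: the exact processing VSL uses isn't public, but the textbook A-format to B-format matrix for a SoundField-style tetrahedral array is shown below. The capsule layout and scaling are standard assumptions, not VSL's actual implementation, which would also include capsule-matching filters.

    # Textbook A-format -> B-format conversion for a tetrahedral array with
    # capsules FLU (front-left-up), FRD (front-right-down), BLD (back-left-down)
    # and BRU (back-right-up). Capsule-matching / diffuse-field filters omitted.
    import numpy as np

    def a_to_b(flu, frd, bld, bru):
        W = flu + frd + bld + bru      # omnidirectional pressure
        X = flu + frd - bld - bru      # front/back figure-of-eight
        Y = flu - frd + bld - bru      # left/right figure-of-eight
        Z = flu - frd - bld + bru      # up/down figure-of-eight
        return 0.5 * np.stack([W, X, Y, Z])   # 0.5 is one common scaling choice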
  •

    @suon said:

    [...] (2) Also there is a "quadro" microphone option in MIR.  Is there any chance that may be oriented properly in 3-D so that the 4 channels can be converted into first-order ambisonics?  But the icon looks asymmetric and maybe suggests it's not.[...]

    I'm not sure that I understand the question ...? The Quadro array presented in MIR Pro's Output Format presets is a fully virtual one, and you can change its symmetry in every way you want to, of course.

    Re: "3D": It's true that the z-axis (i.e. capsule tilting) is not offered by MIR Pro's Ambisonics decoder, simply because 3D audio hasn't been _that_ popular amongst users until just recently. 😉 In principle it would be easy to change this, because the information is there anyway; we just lack the UI elements and storage rules to make "immersive audio" a MIR Pro feature.

    ... but as I wrote in my previous message: there's no development manpower available within VSL for updates like that right now ... 😕

    Kind regards,


    /Dietz - Vienna Symphonic Library
  •  

    Thanks Dietz, your terminology is much more precise than my wording =) When I wrote "any chance that may be oriented properly in 3-D so that the 4 channels can be converted into first-order ambisonics", it was my inarticulate way of asking whether the Quadro position might already be a tetrahedral mic arrangement that could then be converted to B-format in someone's DAW or post-processing.

    But you already answered my question in your replies. I didn't realize microphones were represented with Ambisonics too; I checked the manual again and realized that I had misread it before. Looking forward to what MIR may be able to do in future updates.