I've been following this thread with keen interest since it lit up. Great discussion! It intersects with some of my own interests. Macker, if it's not too complex a request, I wonder if you might share some more detail on how you manage the relationship between vertical sonorities in just intonation whilst embroidering melodies in Pythagorean tuning, and how that works in practice. With a bit more detail it might be something I could code into a plug-in-style MIDI processor, usable outside the Logic environment with other virtual instruments such as our VSL ensembles.
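To make concrete what such a MIDI processor might do, here is a minimal sketch. It is not Macker's method (which is exactly what I'm asking about), just one plausible reading: chord tones get just-intonation cent offsets, melodic notes get Pythagorean ones, and each offset is rendered as a per-note 14-bit pitch-bend value (which in practice would require one channel per note, MPE-style). The ratio tables, the ±2-semitone bend range, and the chord/melody flag are all my assumptions.

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents."""
    return 1200.0 * math.log2(ratio)

# Major-scale ratios relative to the tonic (assumed tables).
JI_RATIOS   = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]          # just intonation
PYTH_RATIOS = [1/1, 9/8, 81/64, 4/3, 3/2, 27/16, 243/128]   # Pythagorean
TET_CENTS   = [0, 200, 400, 500, 700, 900, 1100]            # 12-TET reference

def deviation(ratios, degree):
    """Cent deviation of a scale degree from its 12-TET position."""
    return cents(ratios[degree]) - TET_CENTS[degree]

def bend_value(cents_dev, bend_range=200.0):
    """14-bit pitch-bend word (centre 8192), assuming the synth's
    bend range is set to +/-2 semitones (200 cents)."""
    return 8192 + round(cents_dev * 8192 / bend_range)

def retune(degree, is_melodic):
    """Pythagorean offsets for melodic notes, JI for chord tones."""
    table = PYTH_RATIOS if is_melodic else JI_RATIOS
    return bend_value(deviation(table, degree))
```

The numbers show why the two tunings pull in opposite directions: the just major third (5/4) sits about 13.7 cents flat of equal temperament, while the Pythagorean third (81/64) sits about 7.8 cents sharp, so a processor really does need to know whether a note is "vertical" or "horizontal" before bending it.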
And on a light-hearted note... who cares if a performance controller that provides the "levers" allowing a more nuanced approach to tuning is not widely adopted? I for one have spent most of my creative life designing and building tools to support my practice that would likely not be of much interest to the creative community at large. But... and this is a profound "but"... they work for me and I can do the things I want and need to do with them. Unless, of course, the goal is to invent the next big thing in performance interfaces and go into business to promote it. I haven't the time for that kind of career shift anymore! ;-)

I think William might have been one of those who suggested I turn my performance-style system for selecting articulations—an articulation engine—into a marketable tool for others to use. I always respond the same way: ask someone to pay for the thing you make, and they'll ask you to support their use of it. This is even more of an issue with things made of computational stuff. Just look at our heroic team of developers here at VSL and the constant stream of requests (to put it mildly) to fix what's broken or wonky. It's a full-time job. I'm content with my home-brew computational lutherie, where it just has to work for me, leaving me most of my time for the creative work.
VSL is heading in the right direction with the model presented by the Synchron Player. The next step would be to add some intelligence based on performance rules for articulation, dynamics, tempo, etc., mapped to custom articulation sets—for example, moving through the Synchron Player's dimension tree according to those rules.
I've learned much by studying and adapting the work of Anders Friberg and others in developing my performance system.
https://pdfs.semanticscholar.org/a42d/f8c2cbb3fa304b79a5c36b510c412eaa8dbc.pdf
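For anyone curious what a rule in that tradition looks like computationally, here is a sketch in the spirit of the KTH system's "duration contrast" idea—exaggerating the difference between long and short notes. The scaling factor and the 10% step are illustrative placeholders of mine, not the published rule parameters, which are in the paper linked above.

```python
def duration_contrast(durations_ms, k=1.0):
    """Push each note's duration away from the phrase mean by
    k * 10% of its deviation, exaggerating long/short contrast.
    (Illustrative scaling only; see Friberg et al. for the real rule.)"""
    mean = sum(durations_ms) / len(durations_ms)
    return [d + k * 0.1 * (d - mean) for d in durations_ms]
```

The appeal of this rule-as-function formulation is that each rule stays independent, with a single "quantity" knob `k`, so rules can be stacked and weighted per piece—much as I'd want articulation and tuning rules to stack in a plug-in.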
I'd love to be able to incorporate a more sophisticated approach to tuning. Hermode looks doable, although I too find the examples I've heard to be a lot more subtle than I'd hoped. Which brings us full circle to Macker's approach.