1. Vienna Imperial 6/3/2009 10:56:14 AM
cm wrote:

this is a principle for sample player engines - since only the sample headers are loaded into memory the allocated space is locked - at least the engine is telling that to the os ;-) this memory space is then used for buffering data (as soon as you start to play a certain sample this buffer gets filled up from the harddisk with consecutive data)

At least it should be a principle, but sadly I have seen otherwise (Ivory). The effect is that you need much more RAM than strictly necessary to keep the machine from paging, or you have to turn off the pagefile.

cm wrote:

to which recommended machine do you refer now? here are the system requirements

IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase

These requirements are quite modest, so I might have referred to them ;-)

64 samples is acceptable; 128 is a bit too much for my taste, as it starts to feel sluggish. It is good to know that you have seen setups running at 32 samples, even if that is not the common case.

2. Vienna Imperial 6/3/2009 10:05:20 AM
cm wrote:
ps: someone asked if the Imperial can be run on a 2GB machine ... not easily ... a very downsized XP32 2GB RAM did load the default preset immediately after boot, otherwise not enough free memory is available

This sounds like you are locking down the pages in memory, which is a good thing: once the data is loaded, it will never get paged out. So I can assume that if it loads, it will run.
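Just to illustrate what I mean by locking, a minimal POSIX sketch (on Windows the equivalent call would be VirtualLock; the 512 MB figure is only an example):

#include <sys/mman.h>   // mlock / munlock
#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main() {
    const std::size_t preloadBytes = 512UL * 1024 * 1024;  // e.g. 512 MB of sample headers
    void* preload = std::malloc(preloadBytes);
    if (!preload) return 1;

    // Pin the pages: after this succeeds they can never be paged out,
    // so if the preset loads at all, it will also play without paging.
    if (mlock(preload, preloadBytes) != 0) {
        std::perror("mlock");   // e.g. memlock limit too low or not enough RAM
        std::free(preload);
        return 1;
    }

    // ... fill 'preload' with the first chunk of every sample ...

    munlock(preload, preloadBytes);
    std::free(preload);
    return 0;
}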

May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?

3. Vienna Imperial 6/2/2009 8:04:20 PM
ct1961 wrote:
Saying that, Julian, with the relatively low cost of a 500GB hard drive these days (got the latest Barracuda for under £50), there can't be any good reason for VSL to compromise on quality, surely?

Low cost for hard disk space, yes. But the cost of distributing such a large amount of data on DVDs might be substantial, and juggling that many DVDs is horrible. One could ship HDDs instead or offer downloads (see above...)

4. Vienna Imperial 6/2/2009 12:44:46 PM
JSAntares wrote:
What is the ideal midi keyboard for Vienna Imperial? I'm not sure if mine is good enough for expressive purposes.

CEUS?  :-P

5. Vienna Imperial 5/1/2009 3:55:49 PM

Regarding the demo: really nice dynamics :) Is there a soft pedal involved?

I have some questions now that the Imperial is being finalized:

  • What latency can I expect with a decent computer? What is the lowest latency you have observed without clicks and pops? My soundcard can go as low as 48 samples @ 96kHz, and Ivory handles it flawlessly. Can I expect the same? (See the quick calculation after this list.)
  • Will the damper pedal be continuous, i.e. is there something like 'half pedal' or 'quarter pedal'?
  • And one more philosophical question: is MIDI velocity really sufficient to define the sound? Is there really no way to control the tonal quality through the way this specific velocity is achieved? Now that you have the technical means to capture these nuances, it would be really interesting to bring this to digital pianos. Of course MIDI would no longer be able to transport this, and a new generation of keyboards would be necessary. I'm really curious what the future will bring :-)
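For reference, a throwaway calculation of what those buffer sizes mean as one-way buffer latency (buffer time only, not the full round trip; the 44.1 kHz rate for the 32/64/128 figures from the other posts is just my assumption):

#include <cstdio>

int main() {
    const struct { int samples; double rateHz; } cases[] = {
        {48, 96000.0},    // my soundcard's minimum
        {32, 44100.0},    // the rare best case mentioned in the other thread
        {64, 44100.0},
        {128, 44100.0},   // the default on the recommended machine
    };
    for (const auto& c : cases)
        std::printf("%3d samples @ %6.0f Hz -> %.2f ms\n",
                    c.samples, c.rateHz, 1000.0 * c.samples / c.rateHz);
    return 0;   // 48 @ 96 kHz = 0.50 ms, 64 @ 44.1 kHz = 1.45 ms, 128 @ 44.1 kHz = 2.90 ms
}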
6. Vienna Imperial 5/1/2009 3:24:19 PM
Guy wrote:

So if this is feasible to market, how about one special string lib such as leg violins with 100 velocities? Geeked

The reason the Vienna Imperial is possible is, IIRC, that there exists a piano that can be actuated by a computer with very high precision; otherwise it would not be possible to record 100 distinct layers. To do this with a violin, you would have to find a way to achieve something similar, and I have never heard of a computer-actuated violin...

7. Vienna Imperial 4/25/2009 3:10:31 PM

Are you offering me a preview for the Imperial? Go for it :D

8. Vienna Imperial 4/24/2009 8:18:09 AM
cm wrote:

 no - you wouldn't want to download ~25 GB ;-)

Why not? I wouldn't mind downloading any size if that made it a bit cheaper. E.g. my Kabel Deutschland modem does 32 MBit/s, which is about the speed at which my DVD drive can copy data. It would even save me the torture of DJ-ing all those DVDs!

So, even if it's 250GB, a download version could be an option.

9. Flash-Disks? 2/25/2008 8:26:03 AM
PolarBear wrote:
If you actually really consider this idea I'd make my userbase a lot if not indefinitely larger by providing SSD benefits for all possible applications - you'd need to buffer/copy the first portion of every file to SSD to overcome HDD seektime and have a little RAID-like controller manage your files and managing read and write operations, e.g. in an encapsulated external bay.

Actually the drawback here is that it wouldn't really work with monolithic files ;)

 

This is a nice idea, but hard to implement at the device level. Normally the device doesn't know anything about files; it only knows about sectors. Of course you can teach your device how FAT32, NTFS, UFS or ZFS work, but you can imagine how error-prone that is. Also, if the user builds a RAID out of these devices, each device only sees a partial filesystem.

A different approach would be to add a 'learn' button. As long as this button is held down, every read access is mirrored to flash. Otherwise the content is untouched.

This would lead to the following usage scenario: the user installs his libraries to our drive. Then he starts his VST host and loads one instrument after the other, always pressing the button while the application reads the sample headers. This gives the drive exactly the information it needs, regardless of whether the data is organized in individual or monolithic files.
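A rough sketch of what the drive-side logic could look like (sector level, so monolithic files are no problem; all names are invented and the flash is just a map here):

#include <cstdint>
#include <unordered_map>
#include <vector>

using Sector = std::vector<uint8_t>;

class HybridDrive {
public:
    void setLearnMode(bool on) { learn_ = on; }      // wired to the physical button

    Sector readSector(uint64_t lba) {
        // If this sector was captured during an earlier learn pass, serve it
        // from flash and skip the HDD seek entirely.
        if (auto it = flash_.find(lba); it != flash_.end())
            return it->second;
        Sector data = readFromHdd(lba);
        if (learn_)
            flash_[lba] = data;                      // mirror the access while the button is held
        return data;
    }

private:
    Sector readFromHdd(uint64_t /*lba*/) {           // stand-in for the real platter access
        return Sector(512, 0);
    }
    bool learn_ = false;
    std::unordered_map<uint64_t, Sector> flash_;     // stand-in for the flash region
};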

Cheers,

Arne

10. Flash-Disks? 2/25/2008 7:20:42 AM
cm wrote:
arne wrote:
The only drawback I can see is that you have to wait 300ms after hitting 'play' before playback starts.

this would be unacceptable for a large portion of our userbase, at least for those who need to add tracks using a keyboard ...

 

Of course this wouldn't be a global option but a per-instrument one, so the user would still be able to add tracks in the usual low-latency way.

11. Flash-Disks? 2/24/2008 8:44:25 PM
cm wrote:
arne wrote:
- you preload 64k for every possible stream. This gives you 350ms for the first HDD access

not really ... because of the nature of buffers harddisk access had to begin after 175ms ... at the latest ... if it would only start at 350 ms the buffer would already be empty. also dividing the buffer in two portions only was just an analogy to soundcard buffers (the one for output of wave data) to simplify the math in my example. the same would apply for any buffer filled by SSD, so it's getting already rather complex when and how to switch the source without starting to think about the behaviour of threads handling all this.

 

What I meant was: ...350ms to finish the first HDD access...

cm wrote:

our developers already optimized the engine in such a detailed way (including the monolithic data format as source) that only one plain principle will lead to significantly more performance: much helps much - in this case speed.

christian

You should ask your developers if they have thought about this. If I didn't already have a good job, I'd build a business case around these two ideas...

12. Flash-Disks? 2/24/2008 4:25:19 PM

I'd like to go back to the motivation that led to my initial proposal. My goal was to save Steffen from selling his car to buy RAM, because without a car he cannot give me piano lessons anymore.

If I understand him correctly he edits his arrangement in Cubase and plays it back to review the results. He does not play any instrument live, so latency does not matter to him. To save RAM you currently have two options:

a) bounce some tracks to disk so they don't have to be rendered on every playback

b) only preload the notes/velocities he really needs.

Both approaches are a bit cumbersome and time-consuming.

Why not add a 'high latency mode' where nothing is preloaded? One could delay playback by about 300ms and use that time to load the samples needed. This way you wouldn't have to choose which instruments/notes/velocities you need in advance; you'd just have your complete library at your disposal, without the need for a single GB of RAM.

AFAIK Cubase is able to compensate for the latency, so it should even be possible to play some instruments live in the conventional way. The only drawback I can see is that you have to wait 300ms after hitting 'play' before playback starts.
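To make the idea a bit more concrete, a sketch of how such a mode could work inside the player, assuming the host hands over each event roughly 300ms before it has to sound (the loader and all names are made up):

#include <future>
#include <vector>

// Stand-in for reading a complete sample from disk; nothing is preloaded.
static std::vector<float> loadWholeSample(int note, int velocity) {
    return std::vector<float>(44100, 0.0f);
}

struct DeferredNote {
    double dueTimeSec;                       // when the note must start sounding
    std::future<std::vector<float>> audio;   // load kicked off ~300 ms earlier
};

// Called as soon as the (delay-compensated) host announces the event.
DeferredNote scheduleNote(double dueTimeSec, int note, int velocity) {
    return {dueTimeSec,
            std::async(std::launch::async, loadWholeSample, note, velocity)};
}

// In the audio callback, once dueTimeSec has been reached:
//     std::vector<float> data = deferred.audio.get();
// The 300 ms head start should make get() return immediately, so the note
// starts on time without a single preloaded byte in RAM.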

Just a thought.

Arne 

13. Flash-Disks? 2/24/2008 1:36:31 PM
I see you're still not completely enthusiastic, so please give me one last shot ;-)
I made the mistake of naming some numbers without knowing the real ones. So please take my '1, 10, 100ms' only as an example and replace them with the numbers you really use. My intent was not to increase the IO/s.
I'll try again. Currently:
 - you preload 64k for every possible stream. This gives you 350ms for the first HDD access
 - the HDD is accessed in portions of 32k every 175ms
Change it to the following:
 - you preload 8k for every possible stream. This gives you 45ms for the first _SSD_ access. You agreed that reducing the preload buffer to 1/8 should be possible in the SSD case
 - read 2x32k from the SSD to fill your buffer. This gives you 300ms (given 40ms access time for the first 32k and 10ms for the second) for the first HDD access
 - beginning with the third access, read from the HDD in 32k portions every 175ms as usual
So I don't want to add any accesses, just direct the first two to the SSD instead of the HDD. There will be no additional I/O or CPU load.
Also, you don't need much bandwidth to the SSD, because you only need it for the first 64k of each sample. Given the bandwidth of 50MB/s stated in an earlier post, this would give you 780 events/samples per second, independent of the polyphony. For massive polyphony you only need fast hard drives. Experimenting with the buffers might show that it is sufficient to read only 48k instead of 64k, yielding 1000 samples/s.
As a further optimization, as mentioned in my last post, you can start the HDD access right away and not only after the data from the SSD has arrived.
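A quick sanity check of those figures, with my assumptions spelled out (roughly 176 kB/s per stream, i.e. something like 44.1 kHz / 16-bit / stereo, 'k' taken as 1000 bytes, and the 50MB/s SSD bandwidth quoted earlier):

#include <cstdio>

int main() {
    const double bytesPerSecPerStream = 44100.0 * 2 * 2;   // assumed stream format
    const double ssdBandwidth = 50e6;                       // bytes/s, figure quoted earlier

    std::printf("64k preload lasts %.0f ms\n", 1000.0 * 64e3 / bytesPerSecPerStream);  // ~363 ms ("350 ms")
    std::printf("32k refill lasts  %.0f ms\n", 1000.0 * 32e3 / bytesPerSecPerStream);  // ~181 ms ("175 ms")
    std::printf(" 8k preload lasts %.0f ms\n", 1000.0 *  8e3 / bytesPerSecPerStream);  // ~45 ms
    std::printf("note starts/s at 64k each: %.0f\n", ssdBandwidth / 64e3);             // ~780
    std::printf("note starts/s at 48k each: %.0f\n", ssdBandwidth / 48e3);             // ~1040
    return 0;
}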
Cheers,

Arne

14. Flash-Disks? 2/24/2008 8:09:40 AM

ok, let me try to explain :-)

I have to admit that I haven't used VSL yet; I only have experience with Synthogy's Ivory, but I guess it basically works the same way.

My understanding of how it currently works is as follows: when the engine starts, it reads the first part of each sample (of each sample set the user has selected) into RAM, let's say the first 10ms. This amounts to quite a few GBs. Additionally, the engine allocates a buffer for each voice of polyphony. These buffers can hold a much longer period, let's say 100ms. They are cheap, because you only need very few of them compared to the number of buffers you need for the first 10ms of each sample.

The moment a (MIDI) event arrives, the engine can start playing the sample right away, because it has 10ms buffered in RAM. It starts playing from this buffer. Simultaneously it allocates one of the 100ms buffers and directs the HDD driver to fill it with the sample data from 10ms to 110ms. After the first 10ms have been played, hopefully enough data has arrived from the HDD to continue from the larger buffer. This buffer is constantly refilled from the HDD until the sample ends or playback is terminated; afterwards the 100ms buffer is released. If the sample is played again, the data starting from 10ms has to be fetched from the HDD again. The 10ms are chosen so that the HDD has enough time to respond.

Now the same in my 3 layer model:

At installation time, we copy the first 10ms of each installed sample to the SSD. This data stays there until the sample set is uninstalled.

At engine startup, the engine reads the first 1ms of each sample into RAM. Additionally it allocates the 100ms buffers as above.

The moment a (MIDI) event arrives, the engine can start playing the sample right away, because it has 1ms buffered in RAM. It starts playing from this buffer. Simultaneously it allocates one of the 100ms buffers, directs the SSD driver to fill the first 9ms of the buffer with the sample data from 1ms to 10ms, and directs the HDD driver to fill the rest with the sample data from 10ms to 101ms. After the first 1ms has been played, hopefully enough data has arrived from the SSD to continue from the larger buffer, and after 10ms the data from the HDD will have arrived, so there is no disruption in playback. From then on this buffer is constantly refilled from the HDD in the same way as above until the sample ends or playback is terminated; afterwards the 100ms buffer is released. If the sample is played again, the data has to be fetched from the SSD and HDD again. The 1ms is chosen so that the SSD has enough time to respond, the 1+9=10ms so that the HDD has enough time to respond.

The model described here is based on how I would implement sample streaming on a first impulse, because I have never actually done it. Please correct me if some (or all ;) of my basic assumptions are wrong.
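To make the three tiers explicit, here is how I picture the voice start in very rough code. Everything here is invented for illustration (sizes, paths, the naive readRange() helper, std::async instead of real I/O threads), and it only shows the switching between tiers, not the continuous HDD refill:

#include <cstddef>
#include <cstdint>
#include <fstream>
#include <future>
#include <string>
#include <vector>

// Naive synchronous read used as a stand-in for the real disk drivers.
static std::vector<int16_t> readRange(std::string path, std::size_t offsetFrames,
                                      std::size_t numFrames) {
    std::vector<int16_t> out(numFrames, 0);
    std::ifstream f(path, std::ios::binary);
    f.seekg(static_cast<std::streamoff>(offsetFrames * sizeof(int16_t)));
    f.read(reinterpret_cast<char*>(out.data()),
           static_cast<std::streamsize>(numFrames * sizeof(int16_t)));
    return out;
}

struct Sample {
    std::vector<int16_t> ramHead;   // first ~1 ms, preloaded at engine start
    std::string ssdPath;            // copy of the first ~10 ms, written at install time
    std::string hddPath;            // the complete sample
};

struct Voice {
    const Sample* s = nullptr;
    std::size_t pos = 0;            // frames already rendered
    std::size_t ssdFrames = 0, hddFrames = 0;
    std::future<std::vector<int16_t>> ssdTail, hddTail;
    std::vector<int16_t> ssdData, hddData;

    void noteOn(const Sample& smp, std::size_t ssdLen, std::size_t hddLen) {
        s = &smp; pos = 0; ssdFrames = ssdLen; hddFrames = hddLen;
        const std::size_t head = smp.ramHead.size();
        // Both reads start immediately: the RAM head hides the SSD latency,
        // and RAM head + SSD portion together hide the HDD latency.
        ssdTail = std::async(std::launch::async, readRange, smp.ssdPath, head, ssdFrames);
        hddTail = std::async(std::launch::async, readRange, smp.hddPath,
                             head + ssdFrames, hddFrames);
    }

    // Fill 'frames' output samples, switching tiers as the play cursor advances.
    void render(int16_t* out, std::size_t frames) {
        const std::size_t head = s->ramHead.size();
        for (std::size_t i = 0; i < frames; ++i, ++pos) {
            if (pos < head) {
                out[i] = s->ramHead[pos];                      // tier 1: RAM head (~1 ms)
            } else if (pos < head + ssdFrames) {
                if (ssdTail.valid()) ssdData = ssdTail.get();  // blocks only if the SSD was too slow
                out[i] = ssdData[pos - head];                  // tier 2: SSD (up to ~10 ms)
            } else if (pos < head + ssdFrames + hddFrames) {
                if (hddTail.valid()) hddData = hddTail.get();  // blocks only if the HDD was too slow
                out[i] = hddData[pos - head - ssdFrames];      // tier 3: HDD stream
            } else {
                out[i] = 0;  // a real engine keeps refilling the HDD buffer here
            }
        }
    }
};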

cm wrote:
 

ps: directory management would become very complicated if we would spread sample data across flash and harddrive, since flash would be also accessed as drive/volume


 

let's say "interesting" ;-)

cm wrote:
 

pps: it doesn't matter if it is PPC, intel, sparc, alpha, windows, OS X, solaris, irix, BSD, linux, whatever .... sample streaming has its rules everywhere ...

Sample streaming does, but latency does not. Clearly there are well-behaved systems and systems that are not. In my experience, Windows is a nightmare regarding latency.

15. Flash-Disks? 2/23/2008 7:58:40 PM

Hi,

cm wrote:

one of the most *expensive* processes on a computer is accessing, reading and writing data (using kernel time and adding load to the chipset), so such a 3-tier model would double this load (harddisk to flash, flash to RAM, RAM to processor, processor to audio device) and add another buffer (the flash disk), which usually also adds latency.

 

I feel you have misunderstood my proposal a bit: I do not want to write the data from the hard disk *through* the flash. The idea is to organize the data so that, say, the first 64k of each sample resides on the flash disk and the rest solely on the HDD. There won't be any writes to the flash disk.

Also, because you only read a small amount of each sample from flash, you don't need that much bandwidth; the overwhelming majority of the data still comes directly from the HDD.

There are no additional accesses and no additional buffers. At the moment you have two sources for the data: RAM (very low latency) and HDD (high latency). Just add a third source: SSD (low latency).
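The install step I have in mind would be nothing more than this (sketch only; directory names and the 64k figure are examples, and a real installer would of course handle this per sample rather than blindly per file):

#include <cstddef>
#include <filesystem>
#include <fstream>
#include <vector>

namespace fs = std::filesystem;

// Copy the first 'headBytes' of every file in the HDD library onto the SSD,
// mirroring the directory layout. The full files on the HDD stay untouched;
// the SSD is never written to during playback.
void copySampleHeadsToSsd(const fs::path& hddLibDir, const fs::path& ssdCacheDir,
                          std::size_t headBytes = 64 * 1024) {
    for (const auto& entry : fs::recursive_directory_iterator(hddLibDir)) {
        if (!entry.is_regular_file()) continue;

        std::ifstream in(entry.path(), std::ios::binary);
        std::vector<char> head(headBytes);
        in.read(head.data(), static_cast<std::streamsize>(head.size()));

        const fs::path dst = ssdCacheDir / fs::relative(entry.path(), hddLibDir);
        fs::create_directories(dst.parent_path());
        std::ofstream out(dst, std::ios::binary);
        out.write(head.data(), in.gcount());   // the engine later reads this head from
    }                                          // the SSD and everything beyond from the HDD
}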

Regarding latency: I cannot imagine that the access time of an SSD is > 0.1ms on a well-tuned system as long as the drive is not overloaded, but I don't have experience with Windows there; I'm a Sun/Solaris guy ;)

I don't think an experiment with _all_ data on SSD proves anything with regard to my proposal, because in that case you quickly run into bandwidth limitations, which you won't if you follow the procedure outlined above.

cheers,

Arne 
