On a more serious note, I had discussions with the product manager of VSL about this topic, and new improvements can only be made if some creative person sits down and finds clever ways of recreating a piano. For example, it is common to sample each note by itself and then build everything else out of those individual samples. That makes sense in principle: letters form a word, atoms form a molecule, and so on. The problem, of course, is that playing five individually sampled notes of a C major chord and actually recording that chord on a real piano give different results.

Ideally, one would have an infinite number of recordings that capture every possible situation that can occur. For example, if I were to play Chopin Op. 10/1 on a VST, it would recognize that I start with a C major chord and output a sample of exactly that chord, and so on. Of course that's impossible, but I find it interesting that apparently nobody has ever tried to at least sample, say, intervals: if I play a C octave, I get a recording of that octave and not two individual notes stacked on top of each other.

Admittedly, this already dramatically increases the number of samples required. In theory there are 88C2 = 88*87/2 = 3828, so roughly 4000, intervals one could play. Even reduced to the limited set that would actually be used most of the time, that is still a couple of hundred intervals, and we are not even talking about velocity layers at this stage, let alone the sustain pedal and una corda. But if in 20 years there is a way to sample a piano incredibly fast, then I believe this direction would improve the sound a lot. The next step is of course chords, where the amount of data and recording time grows exponentially. Scales are another problem: how does the VST recognize that I am playing a C major scale, for example? And the output has to come without latency, because I am playing live.