QuasiUnaFantasia Does this mean that MIDI level 1 is played at 48% of the maximum amplitude found in the WAV file, with MIDI level 39 played at 87% of the maximum amplitude, and intermediate MIDI levels interpolated between these two?
If so, why are the WAV files not normalized (to the maximum dynamic extent) prior to use, in which case the percentages would be relative to the maximum possible amplitude?
I might sound like an expert, but I am not.... I usually have to verify these numbers with a quick experiment. But I think you are correct.
That's an excellent question, and it's been suggested that I do exactly that - normalize all samples and then re-map the relative volumes in the velocity layers. You could definitely argue that I SHOULD do that, and maybe I've just kept up old habits. The SoundFont editor Polyphone, I believe, does this.
Anyway, my workflow is like this - I first make the instrument in Kontakt, which has all of the controls of SFZ, but is much easier to tweak with GUI tools. Then, I use those parameters to create the SFZ file after I feel it's finished. So, the lovel=1 hivel=39 numbers come directly from what I've done in the graphical editor in Kontakt. I admit that I've gotten lazier in subsequent libraries and the SFZ files are pretty much the same between Yamaha and Steinway. I ran a test last night, and I could definitely smooth out the velocities more. You know, 90% more work for 10% better performance.
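To make the lovel/hivel discussion concrete, here's a minimal SFZ sketch of one key with velocity layers. The sample names and the upper layer splits are made up for illustration - only the lovel=1 hivel=39 range comes from my actual files:

```
// Velocity layers for a single key (middle C).
// lovel/hivel pick which sample a given MIDI velocity triggers;
// amp_veltrack=100 (the SFZ default) scales amplitude with velocity
// inside each layer, which is the "interpolation" being asked about.
<group> lokey=60 hikey=60 pitch_keycenter=60 amp_veltrack=100

// softest recorded layer - this range matches my files
<region> sample=C4_pp.wav lovel=1 hivel=39

// middle and loud layers - splits are illustrative
<region> sample=C4_mf.wav lovel=40 hivel=87
<region> sample=C4_ff.wav lovel=88 hivel=127
```

So the percentages aren't explicit numbers in the file - they fall out of how loud each layer was recorded, combined with the velocity tracking within the layer.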
I record all of the velocities/volumes without changing microphone gains so I can capture a real dynamic range. I then use those signals without really making any changes, to preserve that dynamic range. I suppose I could clone those files, normalize them, and then re-adjust the levels to match the original files. This could make for a more consistent instrument. However, I don't think the commercial libraries do this: I've experimented with recording the same velocity on a number of notes with Garritan and Noire, and the note volumes are all over the place at one velocity.
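If I did go the normalize-and-compensate route, the idea would be to peak-normalize each WAV and then put each layer's original relative level back as a per-region attenuation in the SFZ, using the volume opcode (in dB). The attenuation values below are made up for illustration, not measured from my recordings:

```
// Samples peak-normalized to full scale; volume= (dB) restores each
// layer's original level relative to the loudest sample.
// The -14.0 / -6.5 / 0 values are illustrative only.
<group> lokey=60 hikey=60 pitch_keycenter=60
<region> sample=C4_pp_norm.wav lovel=1 hivel=39 volume=-14.0
<region> sample=C4_mf_norm.wav lovel=40 hivel=87 volume=-6.5
<region> sample=C4_ff_norm.wav lovel=88 hivel=127 volume=0
```

If the volume offsets exactly match how far each original peak sat below full scale, the audible result should be identical - the gain is consistency when editing and comparing layers, not a different sound.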
Sorry for the long answer. Maybe I'm not up to reprocessing all of the files to compare both methods, or maybe I'm just stubborn. Wait - there is a way for me to try the difference, and you can try too if you like. The reason I mentioned Polyphone earlier is that its site hosts "remixed" versions of my pianos that have been processed the way I've been describing. As I recall, I don't particularly like these SoundFont versions - not because they aren't good, but because they don't feel like the real instrument. Since I'm the only one who has actually played the real instruments, I'm the only one who cares - so don't put too much weight on my opinion. However, you can compare the two "normalization" philosophies yourself if you like.