fuseburn wrote:
Listen for yourself. You have to hear the artifacts and then decide if this is acceptable for you. It's been like this for years and many people have made great mixes with those "erroneously" resampled programs. If you don't hear a difference, be happy and don't waste another thought on it.

This pretty much misses the point.
The reason I'm buying Nebula and its libraries in the first place is because I don't have steady (or any) access to the broad variety of hardware that is being sampled. I'll repeat: part of the value of the libraries to me is that there is some kind of direct, reliable correlation between the sound of the sampled hardware and its emulation in Nebula.
Giancarlo has now said that Nebula's SRC is not optimal for minimizing artifacts, and he and others have confirmed that the sound differences are not subtle when the project and library sample rates are mismatched. That tells me this is a real, substantial issue with the software's performance. Saying "don't worry, be happy!" doesn't quite cut it. Again, if I wanted that, I'd just go back to using algorithmic plugs.
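For anyone wondering why sub-optimal SRC matters, here's a small illustration (my own sketch, nothing to do with Nebula's actual converter): if you resample 96k material down to 44.1k with plain linear interpolation and no anti-alias filtering, content above the new 22.05 kHz Nyquist doesn't disappear, it folds back into the audible band. A 30 kHz tone at 96k comes out as a ~14.1 kHz alias at 44.1k:

```python
import numpy as np

fs_in, fs_out = 96_000, 44_100
dur = 0.1  # seconds

# A 30 kHz tone at 96k -- legal there, but above the 22.05 kHz Nyquist of 44.1k.
t_in = np.arange(0, dur, 1 / fs_in)
tone = np.sin(2 * np.pi * 30_000 * t_in)

# Naive SRC: linear interpolation, no anti-alias (band-limiting) filter.
t_out = np.arange(0, dur, 1 / fs_out)
naive = np.interp(t_out, t_in, tone)

# The tone can't exist at 44.1k, so it folds down: |44100 - 30000| = 14100 Hz.
spec = np.abs(np.fft.rfft(naive * np.hanning(len(naive))))
freqs = np.fft.rfftfreq(len(naive), 1 / fs_out)
alias_freq = freqs[np.argmax(spec)]  # strongest component is the alias near 14.1 kHz
```

A proper converter would low-pass the signal below 22.05 kHz before decimating, so that tone would simply vanish instead of landing in the middle of the audible range. That's the kind of artifact at stake when the SRC quality is "not optimal."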
This issue is solvable in several ways, so it would be better for the platform if it were solved. I would still like to hear from the major library developers on this.
fuseburn wrote:
Sounds like a good solution, 44.1 and 96k. If you work at > 44.1, just upsample your material to 96k - powerful systems are cheap these days (seriously!).

I have a 2010 8-core Mac Pro, and for the kind of mixes I do (elaborate ones), 96k would still be an uncomfortable stretch in many cases. I'm not alone in that.