Changing XML file for high quality sound and rendering?

Tips & tricks, working results, technical support
superhero81
User Level III
Posts: 30
Joined: Tue Aug 27, 2013 11:53 pm

Re: Changing XML file for high quality sound and rendering?

Post by superhero81 » Thu Aug 29, 2013 12:54 am

Cool, I think I'll just stick with setting the timed Kernels to Clean and Even... For now...when I have more time I'll experiment more.

RJHollins
Expert
Posts: 3714
Joined: Sun Mar 28, 2010 5:53 pm

Re: Changing XML file for high quality sound and rendering?

Post by RJHollins » Thu Aug 29, 2013 1:44 am

From what I recall, the possible 'bug' was indicated when using both ODD and EVEN in TIMED mode. I can't say I could identify that issue, as switching to full TIMED mode quickly produced this pathetic 'whimpering' from my computer :shock: :roll:

I settle for Odd or Even ... not both :|
i7-5820k, MSI X99A Plus, 16 GIG Ram, Noctua NH-D14, Win-7 Pro [64-bit], Reaper-64

NVC [Nebula Virtual Controllers]

superhero81
User Level III
Posts: 30
Joined: Tue Aug 27, 2013 11:53 pm

Re: Changing XML file for high quality sound and rendering?

Post by superhero81 » Thu Aug 29, 2013 2:33 am

I think I'll stick with CLEAN and EVEN.
Seems to be the general consensus. Haha.

Cupwise
Expert
Posts: 982
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Post by Cupwise » Thu Aug 29, 2013 4:27 am

ngarjuna wrote:Before you fall down the rabbit hole that is TIMED mode, here's my suggestion: ask someone who swears by TIMED mode being "higher quality" or "better sounding" to compare 2 audio samples, one in normal FREQD and one in TIMED. I've asked in several TIMED threads and in all the years I've seen this tweak nobody has ever delivered. Makes me wonder about this "miraculous" Nebula tweak that nobody has ever tried to demonstrate in their zeal to promote it on the forums. Personally I wouldn't suggest using Nebula libraries in ways other than their developers hand tweaked them (there are a few developers who produce TIMED programs; that is a different story, obviously, as you're running the program as its creator intended) unless you've heard (and prefer) the difference.

Further: if TIMED is so far superior to FREQD, why have hardly any of the developers started using it for "High Quality" or "render quality" variants of their programs (I only know of 2 or 3 devs making TIMED programs)?
my take on the whole timed thing is, funnily enough, that i'm not so super sure that timed mode itself does sound so much better than freqd. i haven't made tons of comparisons just between the two modes. i say 'funnily enough' because i'm one of the devs releasing programs that i call 'shq' that use timed mode.

so what am i saying? well, here's the thing. in that transient thread you can see plenty of confusion, and a lot of it, in my opinion, came from people comparing the two modes without making sure to ONLY compare the difference between the two modes. i pointed out a few times that you need to make sure there are no other variables changing that could also affect the results. otherwise you aren't comparing just timed vs freqd, you are comparing multiple things being changed at once. that's totally unscientific, and of course it's not going to show anyone what differences may exist between the two modes.

the main example is that if you switch all or some kerns to timed mode, it can change the prog rate. this DOES have an effect on how nebula works, and that's just a simple fact. the prog rate dictates how often nebula can switch between samples based on the incoming level, for dynamic effects. any program that interpolates between samples for any reason, not just dynamic ones, is also affected. a 20ms program rate means nebula can only dictate changes between samples every 20ms.
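A rough sketch of that block-quantized behavior (my illustration only; the 1 ms measurement grid and the function name are assumptions, not Nebula internals):

```python
# Hypothetical model: a dynamic engine that can only re-select its
# sample once per "program rate" block.
def pick_samples(levels_ms, prog_rate_ms):
    """levels_ms: input level, one reading per millisecond.
    Returns the level the engine is acting on at each millisecond."""
    acted_on = []
    current = levels_ms[0]
    for t, level in enumerate(levels_ms):
        if t % prog_rate_ms == 0:   # decisions happen only at block boundaries
            current = level
        acted_on.append(current)
    return acted_on

# A 1 ms transient just after a boundary is never acted on at all:
signal = [0.1] * 50
signal[1] = 1.0                     # fast transient at t = 1 ms
reaction = pick_samples(signal, 20)
print(1.0 in reaction)              # False: the 20 ms grid skipped right over it
```

A shorter block shrinks the worst-case delay proportionally; real detectors also average over the whole block rather than sampling one instant, which this sketch deliberately ignores.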

so if you switch to timed mode and the program rate changes, you are NOT just hearing differences between timed and freqd. you aren't. if you REALLY want to hear any differences between the two, you have to keep the program rate the same after the change. go into edit and glob, turn the 'rate s' parameter on, and then adjust 'rate' to match what it was in freqd. you have to do it that way because freqd can't have faster rates like timed can, so if a change happens when you switch to timed, it will be that the prog rate gets faster, and to really compare the two modes ONLY you will have to set the rate back to what it was before you switched.

you would have to render SEVERAL different clips in both modes, making sure to adjust the prog rate as i described after switching to timed, and then compare all those clips, to really get a good idea of a difference. here's why: ever notice how reverbs sound different every time you apply the same program's reverb to the same drum loop? if you have an identical drum loop repeating multiple times, the same snare may trigger the reverb differently each time it's hit. this has been a known thing and people have asked about it. know what that is? program rate.

for reverbs it's even higher. at 44.1khz it's all the way up to 180ms; at 96khz i think it's 80ms or so. reverbs are also dynamic. so what this means is that nebula is measuring the incoming audio's volume level to decide which dynamic reverb impulse to play, but it's averaging the measurements from blocks of 180 or 80ms (depending on the program rate), and then using that value to pick the dynamic sample. i'm simplifying a bit here, but this is basically what happens.

the problem with that is that those 180ms chunks are not in any way synchronized with your drum loop. this means that when the snare hits one time, it could be right at the front of a 180ms block, which means neb won't react to it for almost 180ms. so the appropriate dynamic step for that snare's level comes late. the next time, it may come right in the middle of a block, so neb's reaction will be sooner. that's why you get different results, and that's why it's more noticeable with reverbs. the longer program rate means you get a seemingly random reaction to each drum hit depending on where it falls in the program rate blocks, which are not synchronized to your project or anything.
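The "same snare, different reverb" effect above comes down to simple modular arithmetic. A sketch (the loop length and hit position are made-up numbers; only the 180 ms figure comes from the post):

```python
# Identical snare hits land at different offsets inside the unsynchronized
# 180 ms measurement blocks, so the engine's reaction differs on every loop.
prog_rate = 180     # ms, reverb program rate at 44.1 kHz (from the post)
loop_len = 500      # ms, hypothetical drum loop length
snare_at = 250      # ms into each loop

offsets = []
for repeat in range(4):
    hit_time = repeat * loop_len + snare_at
    offset = hit_time % prog_rate       # where the hit lands in its block
    offsets.append(offset)
    print(f"loop {repeat}: snare falls {offset} ms into a 180 ms block")

print(offsets)      # [70, 30, 170, 130] -> a different reaction delay each time
```

Because 500 is not a multiple of 180, the hit drifts through the block grid on every repeat, which is exactly the "seemingly random reaction" described above.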

SO, back to what i said earlier about comparing the two modes. i said if you want to compare whether timed mode itself sounds better than freqd, you'd have to keep the program rate the same. but there's another side to this. the fact that timed mode allows faster program rates (in fact you can adjust it to anything, no matter what the kern length is, unlike freqd where the program rate is tethered to the length) is, to me, its biggest benefit.

you can't argue that faster program rates make little to no difference. if you do, you're just plain wrong. it's measurable, and it's visible by looking at waveforms rendered with slower vs faster program rates. it's going to be more important with some types of programs than with others. compressors act more consistently on incoming transients with a faster program rate. this is why the default compressor template/setup always has freqd kerns at 10ms, which allows a faster prog rate (~2ms) than preamp programs, which load at 50ms and have a ~20ms program rate.
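To put numbers on that consistency claim, here's a sketch comparing worst-case reaction delays at the two rates mentioned (~2 ms for compressor kerns vs ~20 ms for preamps); the transient times are arbitrary:

```python
# Delay from each transient until the next program-rate block boundary,
# i.e. the soonest moment a block-based engine could react to it.
def reaction_delays(hit_times_ms, prog_rate_ms):
    return [(prog_rate_ms - t % prog_rate_ms) % prog_rate_ms
            for t in hit_times_ms]

hits = [103, 251, 477, 838]         # arbitrary transient times in ms
print(reaction_delays(hits, 2))     # [1, 1, 1, 0] -> at most 1 ms late
print(reaction_delays(hits, 20))    # [17, 9, 3, 2] -> up to 17 ms late, and uneven
```

The 20 ms case is not just slower on average, it's inconsistent: each hit gets a different delay, which is the "less consistent transient handling" being described.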

so with comps, faster program rates are needed to catch transients and play the appropriate dynamic samples which is where the compression comes from. with a typical preamp program (especially a solid state one), it's not going to make AS much of an important difference, because high end preamps are designed to give you very similar results at lower inputs as they do at higher ones, until you start clipping. tube stuff may change a bit more as your input gets hotter, and that may or may not be something you'd notice with a faster program rate compared to a slower one. the faster program rate would allow a more consistent catching of the transients to be colored by the higher level dynamic samples, but it still won't be as noticeable as with a compressor, unless there is a really heavy non-linear behavior in that tube pre and in the samples taken from it. i still think it can matter with preamps, but i'm just saying it's usually not AS noticeable as with compressors where it makes the difference between the comp compressing the transients more consistently.

i already talked about reverbs and there it would definitely be great to have faster program rates so that there would be a more consistent response to the input.

lastly, with an eq it barely matters at all, because the vast majority of EQ programs out there are not dynamic. they are totally static. so nothing is changing as audio goes into them. the only change happens when you adjust a band, and the program rate would come into play there, but only WHILE you are adjusting it, and it only affects how smoothly the sound changes, which isn't really so important for normal eq purposes. if you are automating a control change, then program rate matters. but if you are just setting an eq and leaving it, program rate doesn't matter, as far as my understanding goes. there are no dynamic samples, so program rate has nothing to dictate there.

this is one thing that kind of bugs me, seeing people talk about transient response with regard to eqs. it doesn't make sense, because there are no dynamic samples (again, there are a FEW exceptions, but as far as i know it's VERY few), so there is no envelope follower measuring the input level going into the eq program. i've even seen people talk about adjusting the detection type from RMS to EVF or whatever and getting better results with EQs. which can't be, because 99% of EQs do not USE the envelope follower/detector. the transients and everything else are just going into the eq and getting a static convolution. a low level signal gets the same exact effect as a higher level one.

sorry to say this, but none of the way the actual hardware eq affected transients is actually translated into nebula. how could it be? the only way that happens is with dynamics. all a static program has is the frequency response, phase response, and harmonics. but the transient of a drum will get the same freq/phase/harmonics as the body of that drum, with static convolution. the transients are not handled any differently from anything else in the audio. if what i'm saying is akin to throwing down a gauntlet here, or going against things others have said, so be it.
i'm not wrong about this.
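The static-convolution argument can be shown in a few lines: convolution is linear, so a quiet signal and a loud one get identical relative treatment; no level detection is involved. (Sketch only; the toy impulse response is made up, not taken from any program.)

```python
# Direct convolution of a signal with a fixed impulse response.
def convolve(signal, ir):
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

ir = [1.0, 0.5, 0.25]                     # toy static "EQ" impulse response
quiet = convolve([0.1, 0.0, 0.0], ir)     # low-level transient
loud = convolve([1.0, 0.0, 0.0], ir)      # 20 dB hotter transient
# The loud output is exactly a scaled copy of the quiet one:
print(all(abs(l - 10 * q) < 1e-12 for q, l in zip(quiet, loud)))  # True
```

Whatever level goes in, the shaping is identical, so a static program has no mechanism for treating a transient differently from the body of the sound.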

but back to program rate. for me, it's the MAIN reason to use timed mode. it's why i do it. i could show you waveforms from a comp program with a slower vs faster prog rate, and you could see how the transients are caught much less consistently with a slower rate. or you could do it yourself (which i'd prefer). the thing about it, though, is that i do this as a DEV. nothing i've said here is meant to be like 'hey guys, check out this tweak you can make to your programs to make them better'. no. it's what *I* do to *MY* programs, because i think it's a benefit for people willing to spend the extra time/cpu on a render to get a more consistent result. and *possibly* a small benefit from timed mode itself sounding better than freqd (again, i don't know about that so much, but it's less important to me than the prog rate change).

the thing is, messing with this stuff gets into dangerous areas. you can cause lots of artifacts to happen. there are ways of minimizing them, though. different specific program rates are like 'sweet spots' where these artifacts are much lower, and by the way, they are always there, even with freqd. i've even seen cases where all kerns in timed mode resulted in FEWER artifacts than freqd. you really just have to experiment with lots of different factors to find a good spot where the sound is great, you have a good fast prog rate, and acceptable artifact levels.

basically what i'm saying is, all this stuff i've said about prog rate, in my opinion it's not something for end users to concern themselves over unless they have all kinds of time to experiment with it. as a dev, i have that time.

and one more thing: i understand that there are other devs out there making their own stuff, and there's always this statement of 'well, they optimized it to work like it's supposed to'. so if i'm talking about ways i think things can be improved, how else will that come across? but here's the thing: i can't worry about that. i look for ways to get better results, and i think there is still room for improvement for ALL program types. so in my opinion, nobody's programs are perfectly optimized to be as close to the hardware as nebula allows. sorry. true fact. i think tons of programs out there still sound great, though.

but everything i said about program rate can be measured and proven. the only thing that calls the advantage of faster prog rates into question, the ONLY thing, is the danger of artifacts. i'll contend that my releases with timed mode programs have acceptable artifact levels that will be imperceptible even with multiple processes, and not perceptibly worse than if the programs used freqd mode. but the benefit will likely be perceptible. with my comps that have 'shq' versions, anyone should be able to hear a more consistent handling of the transients compared to the non-shq versions, which is due to the program rate.

i really don't want any of this to come off like i'm trying to butt heads with anyone. but on the other hand, i feel like i should be able to discuss things i've done that i think are advances. and i feel the need to point this out: with regard to that tired old statement of 'the devs set it to how it's supposed to work', that's not ALWAYS the case. i'm not going to say anything too specific here, but let me just point this out. i have plenty of libraries, and i can look at plenty of the preamp style programs i have, for example, and tell that they are using the default template for preamp programs. now, that's fine. it works. but it becomes a simple matter of fact that if that default template (which was designed and provided by acustica) was used for that program, then that dev didn't do anything custom to that program. that's just how it works. either they did or they didn't. if they didn't, then phrases like 'they set it to how it should be to reflect the hardware' mean nothing, because if the default template was used, they did nothing to that program to that end.

so when i see people telling others that it's like sacrilege to adjust any program because the dev fine tuned it to be how it's supposed to be, i have to laugh, because that isn't always the case, at all. from what i've seen, it usually isn't. acustica made the default templates. just using those doesn't constitute fine tuning a program to behave like hardware. it still gives a good result, which really is a testament to acustica and their forethought with this stuff. like i said, the compressor templates were set to only have 10ms lengths because giancarlo/acustica knew what they were doing and knew compressors needed quicker program rates than preamp programs. that was acustica designing a template to allow compressors in general to be emulated better than the preamp template would allow.

so anyone using that template can't take credit for that, unless they actually were involved in the development of that template with acustica. all i'm saying here is that if i can look at two different programs from two different devs, glance at the kern page, prog rate, smooth type, FUN page, evf page, etc., see the same exact stuff there, and recognize it as default template stuff, neither of those programs can be said to have been 'set to work like the hardware' by those devs. sorry. true fact. the default compressor template may be better for compressors in general than the preamp template would be, but each hardware comp is different. the default template is generic. it's not geared towards a specific piece of equipment. two different compressors may behave totally differently, so how could use of the default template for both of them provide the most accurate results for both? if the default template is used (and often it is), then there is no customization on the part of the person using it. that's why it's called a template. it's really just that simple.

all i'm saying is that the sentiment some people express that you are treading on sacred ground if you adjust parameters in some of these programs is just bogus. i would ONLY even think of agreeing with that if i could see that that program ACTUALLY was customized in some way, and not using a default template provided by acustica. and i would be willing to bet that most programs out there are using those default templates. it's easy to check.

on the other hand, i don't think users should go prying around, messing with stuff, unless they know what they are doing or are willing to put LOTS of time into figuring it out. and unless they understand that they shouldn't save over the original versions unless they make backups first, and that if they get funky/bad results after modifying stuff, that's their own fault and they can't cry to the dev about it. and if they are putting that much time into experimenting with these things, they might as well be a dev, because it takes a lot of time, mucking around, to figure out what works and what doesn't.

ngarjuna
Expert
Posts: 779
Joined: Tue Mar 30, 2010 5:04 pm
Location: Miami

Re: Changing XML file for high quality sound and rendering?

Post by ngarjuna » Thu Aug 29, 2013 11:44 am

Well that's an interesting post with a lot of thought provoking information, Cupwise.

The only thing I'd like to say is: my whole point was not a matter of sacred ground but rather a matter of giving new users bad advice. I don't jump into TIMED threads where experienced users or developers are discussing their tweaking and testing, because RJ Hollins, for example, doesn't need any of my advice about how to best use Nebula. But when someone comes to the forums and clearly doesn't even know how to use Nebula yet (all I'm saying is that there is a bit of a learning curve) and starts asking about TIMED, I think that's an appropriate time to issue warnings that this isn't a no-compromise, automatic improvement; and, more importantly, that whether or not you consider it an improvement, you might in fact be decreasing fidelity to the hardware. Which is exactly what you said in one of those paragraphs there: that this is not something you would advise for new users (for users at all, I believe, was your point) and that FREQD, in some cases, would be the higher fidelity choice, if I understand you correctly.

If this was a simpler subject that would be one thing; but it's clear from your rather detailed understanding of the issues that there are tradeoffs either way. So I return to my original suggestion that the program developer has a better view of the various tradeoffs and issues their programs are bound by than end users (unless, as you said, that user just has all kinds of time for proper testing and comparison; personally, I have way too much work due every day to even consider that possibility); not to mention the actual hardware unit in question (not just the make/model) with which to compare. So on that basis: how does a guy with a VST analyzer compare to a developer with all the info they have access to in terms of fidelity to the original hardware? (And unlike many VSTs, where the old adage 'it doesn't matter how much of an LA5A it sounds like, it matters how good it does its job' applies, imho in Nebula it actually does matter to a larger degree how much something sounds like what it's being sold as.)

Like I said, if people want to tweak and test and compare, have at it; there's a whole big beautiful engine in there with lots of exposed parts. But when those same people return to the forum because they made a bunch of tweaks to the engine and now some latest, greatest reverb library won't play back correctly, what then? To the people handing out advice about how to switch from FREQD to TIMED: what's your guaranteed level of support when this advice affects their ability to use some other program or library?

brp
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Post by brp » Thu Aug 29, 2013 3:48 pm

hey guys

i'll explain it to you!

freqd works with the FAST Fourier transform (FFT) and timed works with the plain discrete Fourier transform (DFT). so freqd logically has the typical windowing disease, like all those cheap spectrum analyzers where you can actually see what's wrong with it ;-)

the only reason for choosing freqd is its cpu-friendliness!!! that's always true, for every program!!!

BUT:

the compromise with freqd can be VERY different (logically) depending on kernel size and prog rate etc.
you can imagine the kernel (or a part of it) as the fft window, which needs window functions, which are nothing else than a fade-in and fade-out at the start and end of the window.

maybe g can give a hint here on how nebula gets its window sizes :roll:
because i don't know this. it could also depend on the sample rate and nebula's buffer size (which i believe it doesn't, because then it would sound different at small buffers).

as you can now imagine, fft will work better for big windows than small ones, and the same is true for the prog rate.

transient loss: if there is a transient at the beginning of a window, it gets faded in, and faded out at the end...

so use freqd for reverbs and timed for compressors. try lowering the prog rate when you lose transients on a static eq program, and if that doesn't satisfy, switch to timed and hope your cpu still likes you afterwards!! ;-)
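The windowing point above can be illustrated with a standard Hann window (a guess at the window shape; Nebula's actual window function isn't public): a transient landing near the edge of an analysis block is almost entirely faded out, while one in the middle passes at nearly full gain.

```python
import math

# Hann window gain at position n of a size-sample analysis window.
def hann(n, size):
    return 0.5 * (1 - math.cos(2 * math.pi * n / (size - 1)))

size = 64
edge_gain = hann(1, size)            # transient right after the window start
center_gain = hann(size // 2, size)  # transient mid-window

print(round(edge_gain, 4))           # ~0.0025: nearly silenced
print(round(center_gain, 4))         # ~0.9994: passes almost untouched
```

This is the "transient loss" described above: the fade-in/fade-out needed to suppress FFT edge artifacts also attenuates whatever audio happens to sit at the block boundary.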


all the best
pascal

Tim Petherick
Expert
Posts: 1731
Joined: Sat Apr 17, 2010 4:07 pm
Location: Bath , Uk

Re: Changing XML file for high quality sound and rendering?

Post by Tim Petherick » Thu Aug 29, 2013 4:31 pm

brp wrote:
maybe g can give here a hint how nebula gets its windowsizes :roll:
because i dont know this. it can also be dependant from samplerate and nebulas buffersize (which i beleve is not cause it would sound different on small buffers).


Stated this myself about buffer sizes.....
Especially when I tried to explain to people about using freqd on compressors!

All my main releases on compressors have used TIMED for this reason. I've also wanted to use it on some of my dynamic eq's, but it's just too heavy. I've mentioned before that buffer size affects quality because of program rate changes in freqd, but it was totally missed.

That's why I stated that it would be good to get down to buffer sizes lower than 128

http://www.acustica-audio.com/phpBB3/vi ... 4&start=20

http://www.acustica-audio.com/phpBB3/vi ... +64#p13501

I'm thinking one thing that may change this whole problem is Nebula H, because I think we are going to get lower latencies.....

Let's hope that Nebula H will happen.




Tim

Cupwise
Expert
Posts: 982
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Post by Cupwise » Fri Aug 30, 2013 3:29 am

ngarjuna wrote: The only thing I'd like to say is: my whole point was not a matter of sacred ground but rather a matter of giving new users bad advice.
i get that.
ngarjuna wrote: I think that's an appropriate time to issue warnings that this isn't a no compromise, automatic improvement;
the thing is that if you know what you are doing and do it right, the only compromise is that cpu use goes up.
ngarjuna wrote: and, more importantly, that whether or not you consider it an improvement
like i said, the gains you can get from a faster program rate are measurable, and i don't see how anyone could not consider it an improvement, if you can get it with minimal (imperceptible) to no increase in artifacts after switching from freqd to timed. in the analog world, things react instantly. nebula's prog rate isn't reflective of that; it's only making changes (whether dynamic or other cases of sample interpolation) once per block of prog rate time. the analog world doesn't act like that. so it's an indisputable fact that a faster program rate would ALWAYS be better, if it could be obtained without any other ill effects. which again comes down to artifacts. so again, if the faster program rate can be had without a perceptible increase of artifacts, i'd consider that an indisputable improvement, with the only tradeoff then being CPU.

i explained the effect that the much slower program rates have on reverbs. it's why they sound like they are behaving differently each time a drum hits. a real plate or spring or other reverb won't sound like that, because they don't have a 180ms program rate restricting their ability to react. nobody can argue that a faster program rate wouldn't benefit reverbs, if it were possible. the same logic applies to everything else too, really. reverbs just highlight the issue more, because their program rates are so long that you can actually hear the results a lot more clearly. but consider that all of your preamp programs are reacting up to 20ms (the prog rate for preamps) late on some transients. how can they be said to be treating transients accurately to the hardware in that case? a 20ms prog rate means up to a 20ms late reaction.

sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regard to transients. 20ms can miss a transient altogether. again, this is why compressors have the smaller length, to allow ~2ms program rates. and even then, even THEN, the transients are still handled inconsistently. you can hear it. faster program rates would always be better, ALWAYS, barring any bad side effect.
ngarjuna wrote:
you might in fact be decreasing fidelity to the hardware.
sorry but to me that statement doesn't really mean anything. 'decreasing fidelity to the hardware'? the hardware doesn't have a 20ms program rate that only allows it to react dynamically every 20ms. it reacts instantly. 20ms isn't instant.
ngarjuna wrote: Which is exactly what you said in one of those paragraphs there,....and that FREQD, in some cases, would be the higher fidelity choice if I understand you correctly.
no. i wasn't saying that. i never said anything about fidelity or that freqd would ever be better. i said faster program rates were better. but i said that they may increase artifacts. but the point is, that you may be able to fine tune the prog rate to a specific rate where the artifacts aren't significantly increased over freqd. this takes lots of testing and stuff that i don't think typical users are going to have time to do or figure out.

i'm basically defending my use of timed as a dev. but see, here's the problem. if i start using timed in my stuff, it's obviously because i believe there is a benefit (faster prog rates being the main one). if i think faster program rates can benefit my programs, why would that only apply to MY programs? it wouldn't. it's a fundamental part of how nebula works. a preamp with a 20ms program rate cannot be said to handle transients 'like the hardware', especially if on top of that slow program rate it's also using RMS or something similar as its detector, which averages. to me, THAT takes from the 'fidelity of the hardware', whatever that means.
ngarjuna wrote: If this was a simpler subject that would be one thing; but it's clear from your rather detailed understanding of the issues that there are tradeoffs either way.
again, the only tradeoff is cpu use, if you do it in a way that avoids a significant increase of artifacts. otherwise there are no tradeoffs. it's a measurable, provable improvement.
ngarjuna wrote:
So I return to my original suggestion that the program developer has a better view of the various tradeoffs and issues their programs are bound by than end users
i kind of don't agree with this sentiment, in general. it may or may not be true. the facts of the matter are that anyone can sample something with NAT and get a great sounding result. have an expensive tube preamp? hook it up, use a default preamp template, and you have a good sounding program. you don't have to know anything about how it works. acustica made that possible. they provided the templates, which i talked about. those templates carry most of the water. anyone can use NAT to sample an expensive preamp and just use the preamp template, both of which were designed by acustica. in that case, that person would have done nothing to that program. all they really did was hook a preamp up to their computer, maybe calibrate levels and maybe run some tests on the hardware end, etc., but nothing was done on their part to the actual program side. i gave a list of things you can look for to see if a program has been customized in any way by a dev. anyone can look at those things and see if a program has been customized beyond the acustica provided templates. now, what i don't agree with is this part of your wording:
"program developer has a better view of the various tradeoffs and issues their programs"

i only agree with that if the developer has actually put effort into customizing their stuff, or testing/seeing if there are any possible gains. if they are relying on templates that were provided by acustica, how can you really say that they understand anything about the program itself? i'm not calling out any specific devs, or even saying i never just used the acustica provided templates myself. but there is this mysticism surrounding sampling, and a lot of it is bunk.

if someone uses a template as it was provided by acustica and makes no further adjustments to that program, then, really, it was ACUSTICA who made that program. (people use the word 'program' generically to describe a nebula effect. i'm differentiating between the actual program and the vector. in the scenario i just gave, the person made the vector, which is what contains the samples. acustica made the program. the program dictates how those samples are used, so it's at least as important as the vector, if not more.) all the 'dev' would have done in that case is the sampling, NOT the programming. anyone could do that, with minimal time and effort. that's a testament to acustica, nebula, and NAT. in that scenario, the person should get very little credit for simply hooking a preamp to their A/D D/A and sampling. it's NAT that did all the work and made something out of it.

you have two different aspects: the sampling, and the program. if i use the default template from acustica and never customize or change it beyond that, then i did nothing to the program. i have proven nothing about my understanding of that program, and i can't be said to have fine tuned or tweaked it in any way. this is just true fact. so these general statements that all devs always understand the tradeoffs or settings of their programs better than anyone else, i don't agree with that, because there are things i myself don't understand. i don't think anyone understands ALL the tradeoffs or all the things there are to understand about those settings. nobody.

but when probably MOST of the programs out there are using default templates, which means most of those programs were actually designed by ACUSTICA, well, that really flies squarely in the face of the whole 'dev has it set the best way' statement. if it is set the best way, it'd be because acustica made the template for that program (essentially, acustica made the program).

i've seen plenty of programs where the only thing done to them was to change the padin/padout settings (including the first things i made). that's hardly a customization. and again, a 20ms prog rate means preamps are missing transients entirely, so if you consider that the 'best way', well.. i disagree. and again, i'm not saying every program HAS to be customized, or that a program using the templates won't still sound good, because they do. i'm just saying that the templates work well, but not necessarily in the BEST possible way. getting to the BEST possible way requires fine tuning. it takes experimentation. it takes work. it takes time and effort. either it's done or it isn't.
ngarjuna wrote: imho in Nebula it actually does matter to a larger degree how much something sounds like what it's being sold as).
ok, well if that matters to you go find me a preamp that reacts dynamically based on 20ms blocks which it averages before using the next dynamic result (amount of 'saturation', harmonics, etc). you won't. personally i don't think i ever said in any of my marketing that any of my stuff with a program rate of over 10ms still treated transients like the hardware. if i did say that, i would be patently wrong. especially if it was an equalizer that doesn't even have dynamic samples which cannot possibly ever hope to recreate the 'way hardware handles transients'. but even a preamp... you need a faster program rate to even hope to have it act like the hardware. 20ms and RMS detectors completely miss transients. a hardware preamp doesn't. this is scientific fact. if you think i have my facts mixed up, explain to me how a system that works on windows or blocks of time around 20ms long (equal to the program rate) can catch a transient that happens almost instantly. it can't.
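to put some numbers on that, here's a quick python sketch (my own toy illustration, not nebula's code; the 48 kHz rate, the 1 ms burst length and the -8 dBFS level are just assumptions i picked) showing how much level an RMS average over a 20 ms block reports for a short transient, compared to a 2 ms block:

```python
# Toy demo: block-averaged RMS under-reports a short transient.
import math

SR = 48000                       # sample rate (assumption)
signal = [0.0] * SR              # 1 second of silence
peak_db = -8.0
peak = 10 ** (peak_db / 20)
# a ~1 ms burst: 48 samples at -8 dBFS
for n in range(24000, 24048):
    signal[n] = peak

def block_rms_db(sig, start, length):
    """RMS level (dBFS) of one analysis block."""
    block = sig[start:start + length]
    rms = math.sqrt(sum(s * s for s in block) / len(block))
    return 20 * math.log10(rms) if rms > 0 else -120.0

block_20ms = int(0.020 * SR)     # 960 samples
block_2ms = int(0.002 * SR)      # 96 samples

# measure the block that contains the burst, at each detector speed
slow = block_rms_db(signal, 24000, block_20ms)
fast = block_rms_db(signal, 24000, block_2ms)

print(f"true peak:            {peak_db:.1f} dBFS")
print(f"20 ms RMS block sees: {slow:.1f} dBFS")
print(f" 2 ms RMS block sees: {fast:.1f} dBFS")
```

the 20 ms average reads the burst roughly 13 dB below its true peak, while the 2 ms average is only about 3 dB low. that's the 'averaging misses transients' point in numbers.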
ngarjuna wrote: Like I said, if people want to tweak and test and compare, have at it, there's a whole big beautiful engine in there with lots of exposed parts. But when those same people who return to the forum because they made a bunch of tweaks to the engine and now some latest, greatest reverb library won't play back correctly, what then?.
that'd be on them. hopefully they know better than to blame the dev they bought it off of, and hopefully they saved the original library before doing it.
ngarjuna wrote: To the people handing out advice about how to switch from FREQD to TIMED: what's your guaranteed level of support when this advice affects their ability to use some other program or library?
it's only going to affect the programs they make the switch with..

ngarjuna
Expert
Expert
Posts: 779
Joined: Tue Mar 30, 2010 5:04 pm
Location: Miami

Re: Changing XML file for high quality sound and rendering?

Post by ngarjuna » Fri Aug 30, 2013 4:57 am

Maybe it's just me, but I really feel like you're going back and forth to argue with me. Which is fine, I guess, have at it…but I'm failing to see your point, honestly.
i'm basically defending my use of timed as a dev.
Yeah, I get that. But I don't get why. Maybe you missed the part where I said that it's entirely appropriate for devs to test and tweak their products (which is so obvious it probably doesn't even need saying). Or even that when experienced non-devs delve into tweaking, that's a different matter and I've done my best to just stay out of those threads (I think those threads are like land mines for noobies and are exactly how we ended up here today, which is the criticism about TIMED that I have voiced in the past). If you're under the impression that I've questioned or challenged your use of TIMED in YOUR libraries, then that's mistaken; I even said as much, I believe. But you've twisted that into some kind of developer-worship where the unwashed masses should just do as they're told. Nothing could be further from what I was saying.
the thing is that if you know what you are doing and do it right, the only compromise is going to be cpu use goes up.
Well that's a pretty significant caveat though; you yourself were the one who said the amount of time it would take to do this effectively would make a person a de facto developer. So in reality there are two huge compromises, the time spent and/or the artifacts that result from a less than perfect approach to the tweaking.
. so it's an indisputable fact that a faster program rate would ALWAYS be better,
There's a lot I don't know, so maybe I'm barking up the wrong tree; but I was taught that there is always a tradeoff when you're doing frequency/time transformations. Increasing the accuracy of the time domain would normally have the automatic effect of decreasing the accuracy of the frequency domain. Is this not the case with Nebula's transformation algos? I'm asking that as a question not trying to suggest an answer, btw.
sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.
Come on, dude, that's just pedantic. You know exactly what I'm saying when I suggest that Nebula programs rely mostly on their ability to recreate as much of the sound of the hardware from which they're sampled as possible. That's what made the classic Nebula programs classics, that's what makes the best selling Nebula programs best selling. This whole "there are inherently going to be differences" schtick files under 'yeah, of course', but you're really not seeing the forest for the trees.
sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.
A program doesn't have to null with the hardware to "be like" it; in fact, that's the precise function of a simile in language: that while there are some differences there are also noteworthy similarities being compared. The very meaning of the phrase "be like the actual hardware" means something distinctly different than "is identical to the actual hardware".
if i think faster program rates can benefit my programs, why would that only apply to MY programs? it wouldn't. its a fundamental part of how nebula works.
Wait a second…who ever said or suggested anything of the sort? I suggested that a person who sampled the hardware (has access for valid listening comparisons), a person who had (and will continue to) spend hours and hours on the project of sampling and convoluting, a person who has creative impetus of what this program set should be in the first place is the person who should be tweaking the engine for a particular library. If a user falls into that category (and it's quite possible that many users could if they wished to), then it applies to them too; if another dev falls into that category, then it applies to them too.
a preamp with a 20ms program rate cannot be said to handle transients 'like the hardware', especially if on top of that slow program rate it's also using RMS or something similar as its detector, which uses averaging. to me, THAT takes from the 'fidelity of the hardware', whatever that means.
I guess there must be more to audio processing than transients because people who prefer Nebula don't seem to be as hung up on this 20ms program rate as you are. I'm guessing there are a lot of VSTs out there that respond quite a bit faster than Nebula and yet…here we all are.
if i use the default template from acustica and never customize or change it beyond that, then i did nothing to the program. i have proven nothing about my understanding of that program, and i can't be said to have fine-tuned or tweaked it in any way. this is just a fact. so these general statements that all devs always understand the tradeoffs or settings of their programs better than anyone else will, i don't agree with that. because there are things i myself don't understand. i don't think anyone understands ALL the tradeoffs, or all the things there are to understand about those settings. nobody. but when probably MOST of the programs out there are using default templates, which means most of those programs were actually designed by ACUSTICA, well, that really flies squarely in the face of the whole 'dev has it set the best way' statement. if it is set the best way, it'd be because acustica made the template for that program (essentially, acustica made the program).
That's perfectly reasonable. I can't speak for your processes but having conversed with many of the third party developers selling libraries I don't know of many developers selling libraries using just stock templates. I'm not saying it never happened or even that I'd know the difference; for all I know my very favorite Nebula programs were set-it-and-forget-it sampling sessions. But I do know that when I speak to developers they volunteer (not that I've ever asked) about all the hand tweaking that had to be done to their libraries to get them shiny; and from the forums you certainly give that impression as well with all of your testing and tweaking and updates (I say that as a compliment, by the way, I think you've made some really great Nebula libraries, some classics even).

You said earlier:
basically what i'm saying is, all this stuff i've said about prog rate, in my opinion it's not something for end users to concern themselves over unless they have all kinds of time to experiment with the stuff

on the other hand, i don't think users should go prying around, messing with stuff, unless they know what they are doing or are willing to put LOTS of time into figuring it out.
Which was pretty much exactly my point. So why are you taking me to task over a point you explicitly agree with?

Cupwise
Expert
Expert
Posts: 982
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Post by Cupwise » Fri Aug 30, 2013 6:25 am

ngarjuna wrote:Maybe it's just me, but I really feel like you're going back and forth to argue with me. Which is fine, I guess, have at it…but I'm failing to see your point, honestly.
i made plenty of points. i'm not trying to argue. i'm just expressing places where i disagree.
the thing is that if you know what you are doing and do it right, the only compromise is going to be cpu use goes up.
Well that's a pretty significant caveat though; you yourself were the one who said the amount of time it would take to do this effectively would make a person a de facto developer. So in reality there are two huge compromises, the time spent and/or the artifacts that result from a less than perfect approach to the tweaking.
ok, but the difference is that now you are talking about tradeoffs in terms of the person's time spent. i was talking solely about tradeoffs in the quality that nebula puts out. and when you said nebula should be more like hardware, i took that as you thinking that quality is important.
There's a lot I don't know, so maybe I'm barking up the wrong tree; but I was taught that there is always a tradeoff when you're doing frequency/time transformations. Increasing the accuracy of the time domain would normally have the automatic effect of decreasing the accuracy of the frequency domain. Is this not the case with Nebula's transformation algos? I'm asking that as a question not trying to suggest an answer, btw.
i haven't seen a decrease of frequency accuracy when using timed mode with any of my tests. in fact, with timed mode you can have a faster program rate while having LONGER kern lengths which allows you to have a MORE accurate low end.
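for anyone wanting a rough picture of why longer kernels help the low end, here's a back-of-envelope sketch (my own framing, not anything from acustica's docs; the 44.1 kHz rate is an assumption, and it treats the kernel as a plain impulse response whose length sets the frequency resolution):

```python
# Rough sketch: a longer kernel/impulse means finer frequency
# resolution, which is what matters for resolving the low end.
SR = 44100  # sample rate (assumption)

for kernel_ms in (20, 100, 500):
    n = int(SR * kernel_ms / 1000)      # kernel length in samples
    resolution_hz = SR / n              # spacing of resolvable frequencies
    print(f"{kernel_ms:4d} ms kernel -> ~{resolution_hz:6.1f} Hz resolution")
```

a 20 ms kernel can't resolve features closer together than about 50 Hz, so low-end detail gets smeared; at 500 ms you're down to roughly 2 Hz spacing. that's consistent with the point that timed mode letting you keep longer kern lengths buys low-end accuracy.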
sorry, but no matter what you think, that program cannot be said to 'be like the actual hardware' with regards to transients.
Come on, dude, that's just pedantic. You know exactly what I'm saying when I suggest that Nebula programs rely mostly on their ability to recreate as much of the sound of the hardware from which they're sampled as possible. That's what made the classic Nebula programs classics, that's what makes the best selling Nebula programs best selling.
yeah i know what you mean. accuracy means accuracy. only problem is that transient response is one element that goes into making something accurate. and if a program is set to have a 20ms program rate, the transient response isn't there. it's that simple. you can say vague things about quality, and yeah, i'll agree that there are lots of great sounding programs out there which use 20ms program rates. but the fact of the matter is that they can't be said to accurately handle transients the same way as the hardware. why am i even bothering saying this? because lots of claims have been made that they are handling those transients accurately. it's false. it can be demonstrated. i'm not saying those programs suck. i'm saying they can't possibly handle transients like the hardware. 20ms is much too slow of a program rate for that. fact. and that's all i was saying there. it's not pedantic. it's me pointing out something which may be an inconvenient truth or something people don't want to admit. but that's just me being honest.
A program doesn't have to null with the hardware to "be like" it; in fact, that's the precise function of a simile in language: that while there are some differences there are also noteworthy similarities being compared.
nobody ever said anything about nulling. a tube preamp works in such a way that if you send a transient with a loud peak into it, that loud peak is saturated instantly. it gets the level of harmonics that it gets from that preamp, as dictated by the level of that peak and how the preamp handles sound at that level. a program made from that preamp, which uses a 20ms program rate, will not do that. it will MISS that transient. that transient peak will use a dynamic sample taken at a lower level, not at the higher level. it WILL NOT get the appropriate level of saturation as it would in the hardware. and not only that but most preamps are using EVF17 detection or RMS which even further delays their reaction to the level coming in. preamps don't do that. they react instantly. i'm laying this out in basic plain english words and being specific. this is how it works. i'm just trying to demystify this a bit. people can say this or that program reacts to transients like the hardware all they want, but if it has a slower program rate, they are wrong. period.
I guess there must be more to audio processing than transients because people who prefer Nebula don't seem to be as hung up on this 20ms program rate as you are. I'm guessing there are a lot of VSTs out there that respond quite a bit faster than Nebula and yet…here we all are.
that's not the point at all. the point is that people have talked about transients and how accurately nebula handles them. this all started because you made a post calling into question whether timed mode actually had any benefits. all i've done is give examples of them. and i've made an effort to demystify some things that i think have been overcomplicated. people have talked about how this or that program handles transients. i'm just saying that, if it has a slow program rate, it doesn't. and it definitely doesn't like the hardware. i've already said (i think in every one of my replies to you) that that's not the only factor though, so there's no need to act like i haven't acknowledged that this is only one factor of what goes into this stuff.

that said, you said you thought accuracy to hardware was more important to nebula, and this IS a factor that goes into that. now you're acting like you don't think it's important. transient response may be only one thing, but it's one thing that hasn't been there, and now with the more powerful cpus out, i think it can be. so to me, as a dev, that's cool.

I don't know of many developers selling libraries using just stock templates. I'm not saying it never happened or even that I'd know the difference;
well, the thing is that if you don't know what to look for, then you can't know. i told you what to look for. let me just make a general statement: there is hype all across the audio industry. why should we just assume that it doesn't exist at all in the nebula world? why has this community always kind of taken that truth to be 'self evident'? all i'm doing is raising some questions. and yeah, maybe making a few waves, but it's in my nature. i've always been careful to avoid ever saying things like that my programs handle transients perfectly. i've never made those claims, and honestly in the past i didn't even care about that specific issue. you're asking me why i care about that so much, but again, i've never gone out of my way to make that claim. but now it kind of creates a funny situation, when people have already talked about how programs that are definitely using 20ms prog rates just like the default templates (which were designed what, like 5 years ago?) are somehow still handling transients accurately, when only recently i think CPUs have improved to where timed mode and faster prog rates may allow for more accurate transient response. and yet, it's no big deal, because supposedly we've already had that. only, we didn't. but because it's something that has already been claimed of this or that library, it's like the fact that now we can actually have it is diminished.


i mean either you care about accuracy or you don't. if you do, then you can recognize that timed mode can get us closer. and if so, stick around because i'm uploading a program that demonstrates exactly what i'm saying about transients and how they are handled by the default template (which most preamp programs are probably using) which uses a 20ms prog rate. by the way i've never done this before but i knew exactly what was going to happen and it's exactly what i expected would happen.

brp
User Level IX
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Post by brp » Fri Aug 30, 2013 6:36 am

hi tim

are you saying nebula reverb sounds different at different buffer sizes, or just the non-reverb nebula? as far as i know, here lies the main difference between the two versions. the non-reverb nebula will do some truncation at small buffers to avoid hammering the cpu. but the reverb version should sound the same unless the buffer gets smaller than the fft window (nebula would shrink the window or truncate it in some way).

in your case with the dynamic eq i'd test the mixed mode using the reverb version! i'd try to avoid small or truncated fft windows, because the fades of the window function become big in relation to the whole window, meaning the sound gets kind of blurry!! when there is no dynamic program change, the bug can lie somewhere else. i think you were fooled by hearing some artefact introduced by some truncating (but not 100% sure). check first if everything is set right for dynamics (rawfun1, rawfun2...) and experiment with smooth, prograte, liquidity (it should kind of blend between the kernels), kernel length, etc...

i definitely hope that nebula H will be better at using multicore cpus, so that timed kernels can be used all the time ;-)

brp
User Level IX
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Post by brp » Fri Aug 30, 2013 7:11 am

to all those people who don't see through the whole tweaking subject:

nebula programs are a bit like cars. when they leave the factory, they are usually optimised for fuel economy (vs. cpu economy). but if you buy such a car and want to win a race, you'd have to tune it and not give a sh*t about fuel economy. it's that simple!!

Cupwise
Expert
Expert
Posts: 982
Joined: Tue Nov 23, 2010 2:03 am

Re: Changing XML file for high quality sound and rendering?

Post by Cupwise » Fri Aug 30, 2013 7:15 am

ok i've just uploaded a test program that anyone can use to see what i'm talking about here. and, for maximum transparency, i made it available in a way that some people probably aren't familiar with. instead of an .n2p (program) and an .n2v (vector), what i have is still essentially the same. but instead the program is in the form of an .xml file (actually there are two of them), and the vector is a folder with several .wavs (the actual impulses) and another xml.

this is what programs look like before they are encrypted into n2p/n2v files.

what i've done is opened NAT, loaded the 7k offline preamp session, then reduced the number of kernels down to 1k. i didn't change anything else, and just hit 'generate', which spits out a big .wav with all the tone sweeps at various dynamic levels, which is what you run through the preamp or whatever you are sampling. only, i didn't touch it or run it through anything. i just hit 'deconvolve', which makes a program out of it. with the NAT available to devs, you get the n2p/n2v AND this xml/folder generated.

to be able to load this like you do a normal program, you just put the two xmls (not the one in the folder with the .wavs) into your 'programs' folder. then put the folder with the .wav impulses and the other xml into your vectors folder.

what's the point? well, after NAT made those impulses, i went in and opened the lowest-level one in an editor. then i dropped its volume level by around 90dB. each of the impulses has a filename according to what dynamic level it was taken/sampled at. the lowest one is named -43.50dB. so there are 30 files ranging from 0dB to -43.50dB, at 1.5 dB intervals. now it might be confusing, but originally, before i lowered the level of that one, all of the actual impulses were pretty much identical and had the same actual level, peaking at 0dB. the actual tone sweeps they were made from aren't like that; they are actually at the levels the impulse file names say. so the impulse at -43.50dB was made from a tone sweep that had its max amplitude at -43.50 dbfs, for example. anyway, as you can see, all of those impulses are pretty much the same (which is because they weren't actually run through any hardware, so they are pretty much digitally exact), except for the lowest one, which i've reduced in level drastically.


now, here's how nebula works with its handling of dynamics. it 'starts' from the lowest level samples that a program has, always. if a sound comes into nebula, the envelope follower detects it, and starts trying to play the impulse matching the level of that signal (actually it interpolates between all of the different impulses but you get the idea). but the main thing is that it starts from the lowest sample. in this case, the one at -43.50dB. so basically, if a drum or something loud happens really fast, the envelope follower has to quickly yet smoothly transition from that lowest sample to one at the level of that transient, to play the appropriate impulse for it.

let's say there is silence going into the program, then suddenly a transient reaches -8db. then nebula has to smoothly transition from the sample at -43.5dB up to -8db, and it has to go through (interpolating) all the ones in between for the transition to be smooth.

so what's the point of all this? place the xmls and folder as i said and load the program (they are in the PRE category) called '20ms prog rate freqd' and run a simple drum loop through it, or even just a few isolated drum hits with notable attacks that actually peak a fair amount above the rest of the drum. load an oscilloscope type vst after nebula to see what's happening, or you can just render to wav and look at it in an editor. now bypass the program and see what happens. turn it back on. the program makes the transients disappear. now load the other program called '2ms prog rate timed' and see what it does. i shortened the length of the kerns so a decent computer should still be able to run the program. anyway, you can see that with this one, the transients are still there. they don't get dropped out like they do with the other one that uses 20ms prog rate.

what does this prove? it proves that any program using a 20ms prog rate is playing the lowest dynamic sample/impulse in that program for the transients, because it can't react quickly enough to get up to the louder-level ones, which it would have to do to be called 'accurate' in how it responds to transients. does it make sense that a transient at -8db should be processed with a sample that was taken from the preamp at -43.5dB? no, it doesn't. and yet that's exactly what happens, unless you have a faster program rate that allows it to get to those higher levels faster.
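the timing side of this can be sketched in a few lines of python. this is a deliberately crude toy model (mine, not nebula's engine; the -43.5dB floor matches the test program, while the 3 ms burst length and tick placement are my assumptions): a follower that only re-evaluates the input level once per program-rate block can miss a short burst entirely:

```python
# Toy model: a level follower that only looks at the input once per
# "program rate" tick. A 3 ms burst between 20 ms ticks goes unseen.
FLOOR_DB = -43.5                 # lowest dynamic sample in the program

def max_level_seen(prog_rate_ms, burst_start_ms=25.0, burst_len_ms=3.0,
                   burst_db=-8.0, total_ms=100.0):
    """Highest level (dB) the follower ever reacts to."""
    seen = FLOOR_DB
    t = 0.0
    while t < total_ms:
        # the follower samples the instantaneous level at each tick
        inside = burst_start_ms <= t < burst_start_ms + burst_len_ms
        level = burst_db if inside else FLOOR_DB
        seen = max(seen, level)
        t += prog_rate_ms
    return seen

print("20 ms rate sees:", max_level_seen(20.0), "dB")   # misses the burst
print(" 2 ms rate sees:", max_level_seen(2.0), "dB")    # catches it
```

in the real engine the follower also has to glide smoothly through all the intermediate dynamic samples on the way up, so the 20 ms case is even further behind than this toy makes it look.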

i have some pngs in the zip that show my own results doing this test with a kick and snare alternating, using smexoscope.

anyway, here is the zip with the programs and pics. oh, and if you look and think 'well, it does miss the first bit of the transient, but not TOO much', consider that i only dropped the level of the very lowest sample. as soon as the envelope follower moves away from that lowest sample, it's playing samples that are at full level, but are still 'sampled' from very low levels. let's say that again you have a transient come out of silence, and the transient peaks at -8db. all i've really shown is how long nebula lingers on the impulse sampled at -43.5 db before moving up to the next one, which is sampled at -42db. but what if we could see how long it takes to actually reach the ones at around -8db? it'd look worse. in fact, it doesn't reach them in time. the transient is already gone and over by the time the envelope follower gets the higher impulses playing. so you never get impulses played to match that transient at all. it's totally missed. not only that, but it's probably still not playing the appropriate-level impulses for what comes immediately after that transient/drum attack.

and again, with a high-end clean preamp that hardly matters, because those things were designed to provide similar results at all levels up to the point of clipping. but with something that has lots of non-linear stuff going on, it can and will matter, in terms of accuracy. a piece of tube equipment may have a slight, very subtle compression effect that increases as the level going in rises. transients will instantly get that subtle compression. but not with a 20ms program made from that piece of equipment... the same goes for the increased level of harmonics that should occur at that higher level. and i'm not trying to say things that use 20ms program rates suck, just that faster program rates with timed mode can improve things, if it's done properly (with lots of testing and looking out for artifacts).

RJHollins
Expert
Expert
Posts: 3714
Joined: Sun Mar 28, 2010 5:53 pm

Re: Changing XML file for high quality sound and rendering?

Post by RJHollins » Fri Aug 30, 2013 8:03 am

This is a very interesting AND important topic.

Maybe I ask a favor of ALL participants ... please.

First ... this is not a new subject ... but it has not seen enough participation, particularly from Developers.

I sincerely hope ... in the true spirit of all Nebulites ... to remember some things. We ALL are using NEBULA because of one thing ... NOT its convenience, the small footprint, the ease on our CPU's, or real-time capabilities.

Each one of us are here because of the sonics.

There have been many questions, and speculations along the way. We have all read the 'newcomer' seeking to jump to the fringe. [This is all fine, as not everyone playing with plugins and DAW's is solely in it for a Professional living. There are some here wanting to experiment, play, have curiosity ... etc.]

If 'We' can place each question into the realm of 'General Consensus', as opposed to personal redirect ... I think we can ALL benefit beyond Our current experience.

Let's not solo out one set of comments, because, the points raised have NOT come from a single source.

As a pure User [one who has NOT delved into the land of NAT ... in fact, my version of NAT crashes, and I hope gets fixed!] :evil:

I stated many moons ago that only the Dev has access to the actual hardware being sampled, and it would be helpful to the community to share insights from these direct experiences. To this point, thanks to 'Cup' [and a few others] who have spoken out [sincerely] on this.

We are just hearing of some of the 'finer' details of the Nebula engine and indications of its functioning. Those that have deeper DSP understanding can surmise this to a deeper level. For those of us of the Audio Engineer persuasion, we know one thing ... the 'Stock' Nebula libraries sound really good ... at least good enough for us to put up with all the effort to use them.

For those, either looking to push the envelope ... or maybe just to get a better understanding of things Nebula ... let's maintain the proper perspective in the conversation.

As said, everything here, from 'words of caution' to the confusion as to how audio gets Nebulitized, reflects points that many Members here have raised or could raise.

I'd be quite certain that <G> could teach everyone here [including the high end Devs] some deeper level of understanding if he so chose.

Let us also keep in mind the computers we are trying to work with in order to do our work. We do need some 'common' base level to design for [practicality]. As we explore the outer fringe areas, we may have to keep it at 'educational conversation' due to limitations in our computational power. However, that should not stunt the conversation.

Let's not forget that some aspects of 'pushing to the edge' can influence Users' attitude or perspective ... not that the 'push' is bad ... but practicality, or usability, are essential. The stock Nebula library can already force us easily into a cumbersome work flow, let alone a further 'push'.

As Nebula Users, we have to accept this for the end result. Staying out of the 'personal', and framing the science and mathematics, will help pave the way to insights and an improved understanding for those interested.

I hope my comments are taken in the proper Nebula spirit.
8-)
i7-5820k, MSI X99A Plus, 16 GIG Ram, Noctua NH-D14, Win-7 Pro [64-bit], Reaper-64

NVC [Nebula Virtual Controllers]

brp
User Level IX
User Level IX
Posts: 99
Joined: Tue Mar 30, 2010 1:02 pm

Re: Changing XML file for high quality sound and rendering?

Post by brp » Fri Aug 30, 2013 9:53 pm

thank you cupwise for sharing these examples!

it shows exactly what i was talking about:

transient happens, the envelope follower detects it, and at the next program-rate tick nebula plays the signal convolved with the respective impulse. even with very high program rates, where kernels would switch fast enough to handle the transient correctly, at the beginning of the fft window you'll get a fade from the window function. and where is our transient? right, at the beginning of the window!! now everyone should see the problem with freqd vs. transients! what you can try is to set lookahead to half the program rate or buffer time or kernel length or whatever the length of the fft window would be, so that at least some of the transients will get a chance to be somewhere in the middle of the window and therefore be processed (almost) correctly...
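the windowing point can be illustrated generically (a plain hann window in python; this is not nebula's actual window, just the standard textbook shape, and the 960-sample length is an assumed 20 ms at 48 kHz):

```python
# Generic illustration: a transient at the very start of an analysis
# window is crushed by the window's fade-in; the same transient
# centered in the window (as lookahead would arrange) passes at
# nearly full level.
import math

N = 960                                   # 20 ms window @ 48 kHz (assumption)
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

at_start = hann[5]          # gain applied to a transient near sample 0
centered = hann[N // 2]     # gain with the transient mid-window

print(f"gain near window start: {at_start:.4f}")
print(f"gain at window centre : {centered:.4f}")
```

a lookahead of half the window length effectively shifts a transient away from the fade-in toward the centre of the window, which is why the suggestion above helps.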

with a little bit of brainpower you'll now see why g(enius) has introduced the split mode ;-)

@rjhollins

i think you see it perfectly right! in the analog days, engineers got into electronics to mod and improve their gear. now we have kind of a similar situation in the digital world, at least with nebula. it is kind of a plugin where you can screw off the housing and solder here and there and experiment with it. of course that implies that stupid people will think they have to solder something inside this box just because everyone does... this is just a normal human phenomenon: look at all those teens tuning their cars by making them worse ;-) humans really are some of the funniest animals on this planet. we can just accept this as a fact and try to be better ourselves in case we don't feel comfortable with that. i often find myself to be exactly one of those funny, strange animals, although i try my best not to be *smile*
