reference: https://people.umass.edu/phys286/Propagating_uncertainty.pdf
disclaimer: it will be a relatively small effect for just two resistors
aleph's comment is also correct. The bounds they quote are a "worst-case" bound, which is good enough for real-world applications. Typically, you won't be connecting enough resistors in series for this technicality to be worth the extra work it takes.
You could take a 33k Ohm resistor with 5% tolerance and measure it at 33,100 +/- 200 Ohm. At that point, the tolerance provides no further value to you.
In reality, some manufacturers measure components and bin them: the ones within 1% get labeled as 1%. So when you're buying 5% components, it may be that all of them are at least 1% off, and the math goes out the window since it isn't a normal distribution.
I expect that in this case the uncertainty would decrease.
E.g. one resistor slightly above the desired value, with a much higher value in parallel to fine-tune the combination. Or ~210% and ~190% of the desired value in parallel.
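A quick sketch of that arithmetic (the 100 Ohm target here is a made-up example value):

```python
# Parallel combination of two resistors: 1/R_total = 1/a + 1/b
def parallel(a, b):
    return 1 / (1/a + 1/b)

R = 100.0  # hypothetical desired value

# One resistor 1% high, trimmed by a much larger one in parallel:
# 101 || 10100 works out to exactly 100
trimmed = parallel(1.01 * R, 101 * R)

# Symmetric split: ~210% and ~190% of the desired value in parallel
# gives 99.75, within 0.25% of the target
split = parallel(2.1 * R, 1.9 * R)

print(trimmed, split)
```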
That said: it's been a long time since I used a 10% tolerance resistor. Or where a 1% tolerance part didn't suffice. And 1% tolerance SMT resistors cost almost nothing these days.
f(x) = 3/(1/x + 1/110 + 1/90)
g(x) = 1/(1/(3x) + 1/(3*110) + 1/(3*90))
Iterating either seems to show a stable attractor near 100 (the exact fixed point is 99, the harmonic mean of 110 and 90).
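The iteration is easy to check numerically (a small sketch, using f exactly as written above):

```python
def f(x):
    # parallel combination of x, 110, and 90, scaled by 3
    return 3 / (1/x + 1/110 + 1/90)

x = 150.0
for _ in range(60):
    x = f(x)

# Converges to the fixed point x = 2/(1/110 + 1/90) = 99.0, the harmonic
# mean of 110 and 90 -- close to, but not exactly, the nominal 100.
# (|f'| = 1/3 at the fixed point, so convergence is fast.)
print(x)
```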
So I will postulate, without much evidence, that if you link N^2 resistors with average resistance h in a way that would theoretically give you a resistor with resistance h, you get an error that is O(1/N).
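A Monte Carlo sketch of that postulate (my own construction, not from the thread): wire N parallel branches of N series resistors each, so the nominal value stays h, and watch the spread shrink roughly like 1/N.

```python
import random

def grid_resistance(n, h=100.0, tol=0.05):
    """n parallel branches, each a chain of n series resistors drawn
    uniformly from h*(1 - tol) to h*(1 + tol); nominal total is h."""
    conductance = 0.0
    for _ in range(n):
        chain = sum(h * (1 + random.uniform(-tol, tol)) for _ in range(n))
        conductance += 1 / chain
    return 1 / conductance

random.seed(0)
for n in (2, 4, 8, 16):
    samples = [grid_resistance(n) for _ in range(2000)]
    mean = sum(samples) / len(samples)
    rms = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"N={n:2d}  rms error ~ {rms:.3f}")  # roughly halves as N doubles
```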
Complete nonsense. The tolerance doesn't go down, it's now +/- 2x, because component tolerance is the allowed variability, by definition, worst case, not some distribution you have to rely on luck for.
Why do they use allowed variability? Because determinism is the whole point of engineering, and no EE will rely on luck for their design to work. They understand that, over a production run, they will see combinations of worst-case values, and they make sure the design tolerates them regardless.
Statistically you're correct, but statistics don't come into play for individual devices, which need to work, or they cost more to debug than produce.
For example, say you're adding two 10k resistors in series to get 20k, and both are in fact 5% over, so 10,500 each. The sum is then 21000, which is 5% over 20k.
The Central Limit Theorem (which says that if we add up a bunch of independent random numbers, the sum converges on a bell curve) only guarantees the shape of the distribution: normal. It doesn't say where the mean of that distribution will be.
Correct me if I'm wrong, but if your resistor factory has a constant skew making all the resistances higher than their nominal value, a bunch of 6.8K + 6.8K series pairs will not on average approximate a 13.6K resistor. The average will converge on something noticeably higher than that.
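That intuition is easy to check with a simulation (the 2% factory bias and 1% spread here are invented numbers):

```python
import random

random.seed(1)
NOMINAL = 6800.0

def biased_part(bias=0.02, spread=0.01):
    # hypothetical factory that runs 2% high on average, +/- 1% spread
    return NOMINAL * (1 + bias + random.uniform(-spread, spread))

pairs = [biased_part() + biased_part() for _ in range(10_000)]
average = sum(pairs) / len(pairs)
print(average)  # well above the nominal 13600 -- the bias adds, it never cancels
```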
Tolerances don't guarantee any properties of the statistical distribution of parts. As others have said, oftentimes it can even be a bimodal distribution because of product binning; one production line can be made to make different tolerances of resistors. An exactly 6.8K resistor gets sold as 1% tolerance while a 7K gets sold as 5%.
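A sketch of that binning effect (purely illustrative numbers): draw parts from a wide uniform spread, pull everything within 1% into the 1% bin, and look at what's left for the 5% bin.

```python
import random

random.seed(2)
NOMINAL = 6800.0

raw = [NOMINAL * (1 + random.uniform(-0.05, 0.05)) for _ in range(100_000)]
one_pct_bin = [r for r in raw if abs(r / NOMINAL - 1) <= 0.01]
five_pct_bin = [r for r in raw if abs(r / NOMINAL - 1) > 0.01]

# The 5% bin is bimodal: by construction, nothing in it is
# closer than 1% to nominal, so the distribution has a hole
# in the middle and averaging assumptions break down.
closest = min(abs(r / NOMINAL - 1) for r in five_pct_bin)
print(closest > 0.01)  # True
```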
That's incorrect. They, by definition, guarantee the maximum deviation from nominal. That is a property of the distribution. Zero "good" parts will be outside of the tolerance.
> It will start converging on something much higher than that.
Yes, and that's why tolerance is used and manufacturer distributions are ignored. Nobody designs circuits around a distribution; that requires luck. You guarantee functionality by a tolerance, worst case, not by a part distribution.
That's kind of overstating and understating the issue at the same time. If the distribution is skewed enough, the central limit theorem may converge too slowly to be of any practical use.