This isn't accounting for the combinatorics. If each sample is negative with probability 85%, then the probability that all 10 samples in a pool are negative is 0.85^10, or about 20%. The other 80% of the time, the pooled test comes back positive and all 10 samples have to be retested individually. Instead of 1 test per sample, you're now running 0.2 * 0.1 + 0.8 * 1.1, or about 0.9 tests per sample. You've cut testing by about 10%, but not nearly as much as was hoped. Note that these numbers get more favorable as the negative rate rises, though.
The general formula is (r^x) * (1/x) + (1 - r^x) * (1 + 1/x) expected tests per sample, where r is the probability a sample is negative and x is the pool size. The algebra collapses nicely: it simplifies to 1 + 1/x - r^x.
With an 85% negative rate, it's best to mix 3 samples at a time, for about 0.72 tests per sample (2 samples gives 0.78 tests, 4 gives 0.73). As the negative rate rises it becomes advantageous to mix more, but mixing 10 tests at a time (as opposed to 9) doesn't win until roughly a 98.8% negative rate, at which point you're running about 0.22 tests per sample.
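Finding the best pool size for a given negative rate is just a brute-force sweep of the simplified formula 1 + 1/x - r^x (a sketch, with a function name of my own choosing):

```python
def best_pool_size(r, max_x=100):
    """Pool size x >= 2 minimizing expected tests per sample,
    1 + 1/x - r**x, found by brute force. If even the best pool
    exceeds 1.0 tests per sample, pooling doesn't pay and you
    should just test individually."""
    return min(range(2, max_x + 1), key=lambda x: 1 + 1 / x - r**x)

print(best_pool_size(0.85))  # -> 3
```

(At a 96% negative rate this lands on pools of 6, not 10; the pool size grows slowly as the negative rate climbs.)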
Edit: The binary search algorithm mentioned elsewhere would probably be more optimal, but I'm gonna do my day job instead of figuring out the dynamics of that one.
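For the curious, the naive version of that scheme (test the pool; if positive, split it in half and retest both halves, recursively) can be tallied without simulation: a half-pool gets tested exactly when its parent pool contains a positive. A sketch under that assumption, names my own:

```python
def expected_tests_binary(n, r):
    """Expected total tests for n samples under naive binary
    splitting, where each sample is independently negative with
    probability r. Sums, over every node of the splitting tree,
    the probability that node's test actually runs."""
    total = 1.0            # the initial pooled test always runs
    stack = [n]
    while stack:
        m = stack.pop()
        if m == 1:
            continue       # a single sample isn't split further
        # Both halves get tested exactly when this pool of m
        # is positive, which happens with probability 1 - r**m.
        total += 2 * (1 - r**m)
        a = m // 2
        stack.extend([a, m - a])
    return total

print(round(expected_tests_binary(8, 0.85) / 8, 2))  # -> 0.82
```

Interestingly, this naive version comes out around 0.82 tests per sample at an 85% negative rate for pools of 8, i.e. worse than a flat 3-pool; the adaptive scheme only pulls ahead with refinements such as skipping the second half's pooled test when the first half comes back negative (given the parent was positive, the second half must then contain the positive).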