I probably would have written it with a single loop, using the `enumerate` iterator adapter. But in Python, two loops are almost certainly more efficient, since the single-loop version has to index into the list with `A[value - 1]` instead of iterating it directly.
Having only one loop gives you a better memory access pattern, because there are two XOR operations between each memory access. Two loops execute the same number of instructions, but they spend one loop ignoring memory and then another loop doing rapid-fire memory accesses. In Python there's enough interpreter overhead that it's unlikely to matter. But in a faster language running on a busy machine, that could make a real difference.
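For concreteness, here's a sketch of the single-loop `enumerate` version; the function name and the assumption that `numbers` holds 1..n with exactly one value missing are mine:

```python
def find_missing(numbers):
    # Assumes numbers holds 1..n with exactly one value missing.
    n = len(numbers) + 1
    result = n  # n itself is never paired with an index in the loop below
    for i, value in enumerate(numbers):
        # XOR in the expected value (i + 1) and the actual value;
        # every value that is present cancels, leaving the missing one.
        result ^= (i + 1) ^ value
    return result
```

For example, `find_missing([1, 2, 4, 5])` returns 3.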
That is pretty much a standard for loop. Between it and

    for n in numbers:

you can do pretty much the same things as in a more conventional language.
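A quick sketch of the equivalence (the variable names are mine): the index-based form mirrors a conventional C-style loop, while direct iteration is the idiomatic Python form.

```python
numbers = [10, 20, 30]

# Index-based iteration, closest to a C-style for loop.
indexed = []
for i in range(len(numbers)):
    indexed.append((i, numbers[i]))

# Direct iteration, the idiomatic Python form.
direct = []
for n in numbers:
    direct.append(n)
```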
You could also solve it pretty simply like this:
    n = len(numbers) + 1  # numbers holds 1..n with one value missing
    expected_sum = n * (n + 1) // 2  # integer division keeps the result an int
    missing_num = expected_sum - sum(numbers)
This only iterates the list once. This would probably be my solution if I were given this task in an interview.
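Wrapped up as a function for completeness (the function name and the 1..n assumption are mine):

```python
def missing_number(numbers):
    # Gauss: the sum of 1..n is n * (n + 1) // 2; subtracting the
    # actual sum of the list leaves exactly the missing value.
    n = len(numbers) + 1
    return n * (n + 1) // 2 - sum(numbers)
```

For example, `missing_number([1, 2, 4, 5])` returns 3.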
You seem to believe that "O(2n)"

    for value in range(1, n + 1):
        result ^= value
    for value in A:
        result ^= value

is slower than "O(n)"

    for value in range(1, n + 1):
        result ^= value
        result ^= A[value - 1]

simply because the latter has one "for loop" fewer. Am I misunderstanding you, or if not, why would this matter for speed?

Whether it's a win in Python to use one or two loops isn't so clear, as a lot is hidden behind complex opcodes and opaque iterator implementations. Empirical testing might help, but a new interpreter version could change your results.
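If you do want to test it empirically, a throwaway `timeit` comparison is enough. Treat the numbers as relative only; the list size and repeat count below are arbitrary, and results shift between interpreter versions and machines.

```python
import timeit

setup = "A = list(range(1, 100_001)); n = len(A)"

two_loops = """
result = 0
for value in range(1, n + 1):
    result ^= value
for value in A:
    result ^= value
"""

one_loop = """
result = 0
for value in range(1, n + 1):
    result ^= value
    result ^= A[value - 1]
"""

t_two = timeit.timeit(two_loops, setup, number=10)
t_one = timeit.timeit(one_loop, setup, number=10)
print(f"two loops: {t_two:.3f}s  one loop: {t_one:.3f}s")
```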
In any case, if we want to nitpick over performance we should be insisting on a parallel implementation to take advantage of the gobs of cores CPUs now have, but now we're on a micro-optimisation crusade and are ignoring the whole point of the article.
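For what it's worth, a parallel version is easy to sketch as a chunked XOR reduction (the function names are mine). Note that in CPython the GIL means a thread pool won't actually speed up CPU-bound work like this, which rather supports the point that it's a micro-optimisation dead end.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import xor

def xor_chunk(chunk):
    # XOR together everything in one chunk.
    return reduce(xor, chunk, 0)

def parallel_xor(values, workers=4):
    # Split into roughly equal chunks, XOR each in its own worker,
    # then XOR the partial results; this is valid because XOR is
    # associative and commutative.
    size = max(1, len(values) // workers)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return reduce(xor, pool.map(xor_chunk, chunks), 0)
```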