I would argue that a loop constructs an answer, while recursion deconstructs input.
There's a duality, so you can port algorithms between the two models, but the distinction is real.
For one thing, any loop can be trivially rewritten as a tail-recursive function, which most languages will just as trivially transform back into a loop. Clearly these transforms have no bearing on the nature of the computation.
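To show the shape of the transform I mean (a Python sketch, purely illustrative; note CPython itself doesn't eliminate tail calls, so this only illustrates the structure):

```python
def sum_loop(xs):
    # Loop form: constructs an answer step by step in an accumulator.
    total = 0
    for x in xs:
        total += x
    return total

def sum_tail(xs, acc=0):
    # Tail-recursive form: deconstructs the input; the accumulator
    # plays the loop variable's role. A compiler with tail-call
    # elimination turns this back into the loop above.
    if not xs:
        return acc
    return sum_tail(xs[1:], acc + xs[0])
```

Same answer either way; the accumulator is exactly the loop state threaded through the calls.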
If asked such a question, I’d probably just use goto. (Bonus points, BTW, if a candidate uses a goto on one part of one question I ask, since it saves a minute or two of whiteboard cut-and-paste.)
That there's a transform between them doesn't make them the same thing; it makes each usable in place of the other when programming. You even seem aware that there's still a distinction -- you're just not sure why it matters.
> Clearly these transforms have no bearing on the nature of the computation.
Depending on the nature of the job, not understanding why you can transform between loops and recursion is big points off. (You can go back and forth because while not the same thing, they're duals, which means you'll be able to find similar structures in both for certain classes of problems.)
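To make the duality concrete (a Python sketch; "fold" and "unfold" are the standard names for these dual shapes, the code itself is just mine): a fold consumes, i.e. deconstructs, a finite input to produce an answer, while an unfold constructs output from a seed, possibly forever:

```python
from functools import reduce
from itertools import islice

def product(xs):
    # Fold (inductive): deconstructs a finite input down to a result.
    return reduce(lambda acc, x: acc * x, xs, 1)

def iterate(f, seed):
    # Unfold (coinductive): constructs a potentially infinite stream
    # from a seed; consumers decide how much of it to observe.
    while True:
        yield seed
        seed = f(seed)

# Observe the first five powers of two from the infinite stream.
powers_of_two = list(islice(iterate(lambda n: 2 * n, 1), 5))
```

The fold terminates because its input is finite; the unfold is productive rather than terminating, which is exactly the inductive/coinductive split.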
That's why you need to understand what induction vs co-induction is -- so you can prove things about the transforms between them, such as that it doesn't impact the result to change the algorithm in a particular manner.
Similarly, you're only talking about really simple inductive or recursive behaviors -- what about complex cases? Are all inductive structures transformable to co-inductive (or the other direction)? What does it do to computational complexity when you make the transform?
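One concrete case of the complexity question (Python, illustrative): naive tree recursion for Fibonacci is exponential, and the linear loop version is not the mechanical tail-call transform of it -- you have to invent new state (a pair of accumulators) and prove it maintains the right invariant:

```python
def fib_rec(n):
    # Tree recursion: two self-calls, exponential time.
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_loop(n):
    # Loop form: same result in linear time, but this is not the
    # mechanical loop/tail-recursion transform -- it required finding
    # a new invariant (carrying two consecutive values).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The two agree on every input, but establishing that (and the complexity change) is exactly the kind of reasoning about the transform that isn't trivial.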
That there's an uninteresting kernel in the transform doesn't make the entire transform trivial -- it just means you're only used to working in the "well-behaved" portion of it. (And that there is such a kernel is itself an interesting fact about computation!)
> Clearly these transforms have no bearing on the nature of the computation.
Just to reiterate how off-base I find this comment: there are entire large collaborations in mathematics and academic computer science exploring the nature of those transforms, because understanding how they can be applied is essential for moving forward with things like mechanically/formally verified software.
I get that not everyone is going to know that distinction and its impact -- but the distinction absolutely matters. As you astutely point out, it's used by compilers routinely -- and they certainly need to be sure the transform is well-behaved in the cases they apply it!
I’m not sure what you’re trying to get at...