On RSA: You're right, but this is only the tip of the iceberg. There's a framework called "Impagliazzo's five worlds" which lays out in fine detail how the various P != NP and P = NP scenarios impact cryptography. Basically, whether P equals NP may be an easier question than another open problem: whether one-way functions exist at all. That question has direct practical implications for cryptography.
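To make the one-way-function intuition concrete, here's a toy sketch (the primes and the naive factoring routine are just illustrative, not how anyone would attack real RSA): multiplying two primes takes one operation, while the best this brute-force inverse can do is try divisors up to the square root.

```python
# Toy illustration of the conjectured one-way function behind RSA:
# multiplying two primes is trivial; recovering them is believed hard.

def trial_division_factor(n):
    """Naive factoring of an odd semiprime: O(sqrt(n)) divisions,
    i.e. exponential in the bit length of n."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

p, q = 1_000_003, 1_000_033   # easy direction: one multiplication
n = p * q
print(trial_division_factor(n))  # hard direction: ~500,000 trial divisions
```

Scale the primes up to 1024 bits each and the easy direction is still instant, while trial division (and, as far as we know, every other factoring algorithm) becomes infeasible.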
On multiplication: You are correct that DTIME(n) ⊆ DTIME(n²). However, P = DTIME(n) ∪ DTIME(n²) ∪ DTIME(n³) ∪ …
If your stance is that only some small slices of P are tractable in practice, that's understandable. As I've mentioned before, there's an entire subfield of computer science (fast matrix multiplication) dedicated to pushing matrix multiplication from O(n³) time down toward O(n²); the best known exponent currently sits around 2.37.
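The first step in that subfield was Strassen's algorithm, which replaces the 8 recursive block multiplications of the naive method with 7, giving O(n^log2(7)) ≈ O(n^2.807). Here's a minimal sketch (assumes NumPy is available and n is a power of two; the cutoff value is an arbitrary tuning choice):

```python
# Strassen's algorithm: 7 recursive multiplications instead of 8.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                 # below cutoff, plain O(n^3) is faster
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # The seven products (Strassen's identities).
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the result.
    C = np.empty((n, n))
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```

Later algorithms (Coppersmith–Winograd and its descendants) push the exponent further down, though they're galactic: the constants make them impractical at real-world sizes.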
On approximation: Integer programming is simply harder than approximation. That's all. The fact that the approximations are tractable doesn't slake our thirst for the precise answers. Moreover, we don't yet know how BPP and NP relate, so it could be that there are ways to solve NP problems exactly with randomized algorithms. We've found a few hints at this already, but derandomization is a hard process. (We don't even know whether BPP = P, although P ⊆ BPP and BPP is self-low. The holy grail of derandomization would be to derandomize BPP all the way down to P.)
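For a taste of what randomness buys you, here's Freivalds' algorithm, a classic randomized check with one-sided error: it verifies a claimed matrix product A·B = C in O(n²) time per trial, whereas every known deterministic verification essentially has to redo the multiplication. (The matrices and trial count below are just illustrative.)

```python
# Freivalds' algorithm: randomized verification of A @ B == C.
# Each trial costs O(n^2) (three matrix-vector products) and a wrong C
# survives a trial with probability at most 1/2.
import random

def freivalds(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]     # random 0/1 vector
        Br  = [sum(B[i][j] * r[j]  for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j]  for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False       # certainly A @ B != C
    return True                # correct with probability >= 1 - 2**(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]       # the true product
print(freivalds(A, B, C))      # True
```

A "False" answer is always correct; a "True" answer is wrong with probability at most 2^-20 here. That asymmetric guarantee is exactly the flavor of class that derandomization tries to collapse back into P.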
Finally, I really encourage you to take a bit of time to read the Aaronson paper I previously linked, because it explains all this in much more detail. You are free to continue to think that the problem is not interesting, but I encourage you to at least get a sense of why it is hard.