I'm not so sure about that. If it's not pushing to request record 334, why is it pushing to request record 335?
But I digress. Normally, making standard web requests is analogized to looking without touching: you have explicit authorization to go through the front door, and anything 'bad' you did inside was limited to what you looked at.
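To make the record-number point concrete, here's a sketch of what requesting sequential records looks like (the URL and IDs are hypothetical; nothing here resembles the real service):

```python
import urllib.request

# Hypothetical endpoint; the record ID is the only thing that changes
# between requests -- no credential or lock is bypassed, just a counter.
BASE = "https://example.com/records/{}"

def fetch_record(n):
    # A plain GET, indistinguishable at the protocol level from
    # following a link someone published.
    with urllib.request.urlopen(BASE.format(n)) as resp:
        return resp.read()

# Requesting record 334 and then 335 is just incrementing the number:
#   fetch_record(334)
#   fetch_record(335)
```

That's the crux: at the HTTP level there is no marker distinguishing the request you were "supposed" to make from the next one over.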
>I believe this use/break distinction exists, but the distinction isn't something that's determined by the code or the vaguer "design of the code"; it's determined by the purpose of the service.
But then you get into the realm of having TOS be legally binding, no matter how inane they are. That seems a far worse alternative.
>To drive that home, the library's hapless database admin who foolishly decides to update the list of books using her own SQL injection bug is not hacking, because she is authorized to fiddle with the database, even though, in your terms, it's bypassing the design of the code.
That's why I only said they lose the presumption of authorization. If all you know is that someone used SQL injection, you have to resort to other means to figure out whether it was authorized. For example, if they already have equivalent access through non-code-bug means, and they simply prefer SQL injection, then there is no problem. But if they were doing it to avoid audit logs, there might be a huge problem.
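For readers unfamiliar with the mechanics: the bug being discussed is when user input can rewrite the query itself. A minimal sketch, using an in-memory SQLite catalog with made-up table and book names:

```python
import sqlite3

# Hypothetical library catalog; the schema and data are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE books (id INTEGER, title TEXT)")
db.execute("INSERT INTO books VALUES (1, 'Dune'), (2, 'Emma')")

def search_vulnerable(term):
    # String interpolation lets input alter the query itself --
    # the classic SQL injection bug.
    return db.execute(
        f"SELECT title FROM books WHERE title = '{term}'"
    ).fetchall()

def search_safe(term):
    # Parameterized query: input is treated strictly as data.
    return db.execute(
        "SELECT title FROM books WHERE title = ?", (term,)
    ).fetchall()

payload = "x' OR '1'='1"
print(search_vulnerable(payload))  # returns every row in the table
print(search_safe(payload))        # returns no rows
```

The point of the use/break debate is that the *same* injected query can be run by the DBA (authorized) or an outsider dodging audit logs (not), and the code can't tell the difference.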
>In other words, authorization is not the same as the technical artifacts involved in authorization. More generally, I don't think being bad at making software justifies people accessing it when they know it's not meant for them.
When it comes purely to accessing non-HIPAA/etc. data, I don't think very much justification is needed.
And I don't see 'has no password' as a technical artifact. Details of web servers don't need to be involved here. The design is wrong on a fundamental, user-understandable level.