Further, the list succumbs to the cardinal sin of software security advice: "validate input so you don't have X, Y, and Z vulnerabilities". Simply describing X, Y, and Z vulnerabilities provides the same level of advice for developers (that is to say: not much). What developers really need is advice about how to structure their programs to foreclose on the possibility of having those bugs. For instance: rather than sprinkling authentication checks on every endpoint, have the handlers of all endpoints inherit from a base class that performs the check automatically. Stuff like that.
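A minimal sketch of that pattern in Python (the class and method names here are hypothetical, not from any particular framework): every handler inherits a dispatch step that enforces authentication before the endpoint's own code runs, so an individual endpoint author can't forget the check.

```python
# Hypothetical sketch: AuthenticatedHandler, Request, and VALID_TOKENS
# are illustrative names, not a real framework's API.

class AuthError(Exception):
    pass

class Request:
    def __init__(self, headers):
        self.headers = headers
        self.user = None

VALID_TOKENS = {"s3cret-token": "alice"}  # stand-in for a real session store

class AuthenticatedHandler:
    """Base class: every endpoint inherits the auth check automatically."""
    def dispatch(self, request):
        token = request.headers.get("Authorization")
        user = VALID_TOKENS.get(token)
        if user is None:
            raise AuthError("missing or invalid token")
        request.user = user
        return self.handle(request)

    def handle(self, request):
        raise NotImplementedError

class OrdersHandler(AuthenticatedHandler):
    # The endpoint author never writes an auth check; the base class did it.
    def handle(self, request):
        return f"orders for {request.user}"
```

The point is structural: a missing check becomes impossible to express, rather than something reviewers have to spot.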
Finally: don't use JWT. JWT terrifies me, and it terrifies all the crypto engineers I know. As a security standard, it is a series of own-goals foreseeable even 10 years ago based on the history of crypto standard vulnerabilities. Almost every application I've seen that uses JWT would be better off with simple bearer tokens.
JWT might be the one case in all of practical computing where you might be better off rolling your own crypto token standard than adopting the existing standard.
Having read a bit into the topic, I'd +1 avoiding JWT. Getting authentication "right" is difficult. I think most applications should default to using stateful authentication. By the time you actually need stateless authentication "to scale", you'll hopefully have enough experts on-board to help you understand the tradeoffs.
If you don't set up centralized auth checks and instead prescribe !!!CONSTANT VIGILANCE!!!, you're just setting yourself up for an auth bug in a hastily submitted pull request at 4 pm on a Friday, when someone is lethargic and ready to head out for the weekend. The code is going to get committed, then pushed to production after three people write a quick "LGTM!" Three months later a bug bounty submission is going to come in with a snazzy report for you (hopefully).
The better thing to do is 1) abstract all authorization checks to a central source of authority and 2) require the presence of this inheritance for tests to pass before deployment.
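One way to enforce point 2 is a build-time check that walks the route table and fails if any handler skips the base class. The `ROUTES` registry and class names below are assumptions for illustration, not a real framework's API:

```python
# Hypothetical sketch: a CI-time check that fails the build if any
# registered endpoint handler does not inherit from the authenticated
# base class. ROUTES stands in for whatever route registry you have.

class AuthenticatedHandler:
    pass

class OrdersHandler(AuthenticatedHandler):
    pass

class ProfileHandler(AuthenticatedHandler):
    pass

ROUTES = {"/orders": OrdersHandler, "/profile": ProfileHandler}

def check_all_routes_authenticated(routes):
    offenders = [path for path, cls in routes.items()
                 if not issubclass(cls, AuthenticatedHandler)]
    if offenders:
        raise AssertionError(f"unauthenticated endpoints: {offenders}")

check_all_routes_authenticated(ROUTES)  # passes; a bare class would fail it
```

Run this as a test before deployment and the "forgot the check" failure mode becomes a red build instead of a production incident.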
I disagree. Much better to have a single endpoint which does nothing except validate opaque requests and passes them upstream. No good ever comes from having crypto code mixed up with non-crypto code.
This is probably the first I've heard from someone I know is more than just some random HN commenter that JWT is not recommended.
https://news.ycombinator.com/item?id=14292223
I really ought to just suck it up and write a blog post.
This is really surprising to me. I use the Play! framework, and the whole Play! community suggests using JWT for authentication, since Play! doesn't support sessions out of the box. Is it just that JWT itself is bad, or is it how developers use it that's bad? Just a noob question.
This isn't the first time I've heard this claim, but I've also read that the vulnerabilities were related to libraries and implementations, not the standard itself. Is that true?
Personally, I don't see the benefit of passing meaningful information via JWT, and it might even pose a risk.
>simple bearer tokens
I guess you mean cryptographically secure random byte strings?
1. You could just generate random session IDs (UUIDs or 128-bit base64 strings) and store them in your database or in a persistent cache like Redis. Most OAuth middleware offers this functionality already. Drawback: scalability - but in most cases you don't need it.
2. Use an alternative format that doesn't provide all the features of JWT, but provides better security: Fernet or Macaroons. https://github.com/fernet/spec/blob/master/Spec.md https://github.com/rescrv/libmacaroons
You'll need to implement claim validation and expiry validation all by yourself.
Fernet is probably better for you if you don't need the killer feature of macaroons (stacking caveats). The payload can be anything, but if you really like JWT you can always stick a JSON-encoded JWT payload inside the token and use your favourite JWT library to verify it.
If you want to support use cases like delegation or claims verified by third parties, Macaroons are worth a look.
Drawbacks:
- Limited programming language support.
- No built-in mechanism to support key rotation (like JWT's `kid` header). You'll need to roll your own.
3. Roll your own crypto. This is not as bad as it sounds, since you could (and should!) use the NaCl/libsodium primitives. The only difference between NaCl secretbox and Fernet is that the latter includes a timestamp - which you can easily add on your own.
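To illustrate the timestamp point only: a Fernet-style token is essentially `timestamp || payload` under an authenticator, and checking expiry is a comparison at verify time. The sketch below uses stdlib HMAC purely to show that shape; it omits the encryption that secretbox and Fernet provide, and real code should use libsodium or Fernet itself rather than a hand-rolled construction.

```python
# Illustrative only: binds a timestamp into an authenticated token and
# checks expiry on verify. NOT a substitute for secretbox/Fernet (no
# encryption here); it just shows how easy the timestamp part is.
import hashlib
import hmac
import os
import struct
import time

KEY = os.urandom(32)  # in practice, a managed long-lived key

def mint(payload: bytes, now=None) -> bytes:
    ts = struct.pack(">Q", int(now if now is not None else time.time()))
    body = ts + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

def verify(token: bytes, max_age: int, now=None) -> bytes:
    body, tag = token[:-32], token[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    (ts,) = struct.unpack(">Q", body[:8])
    if (now if now is not None else time.time()) - ts > max_age:
        raise ValueError("expired")
    return body[8:]
```

Note the order: the MAC is checked before the timestamp is even parsed, so attacker-controlled bytes are never interpreted.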
> For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom.
That is to say: "generate a random number, give it to the client, and accept that same random number in the future as evidence of the client's authorization". A familiar form of that would be a session cookie whose content was generated by a cryptographic random number generator. The session cookie is an index into a database that indicates the properties and authorities that the particular session does or does not have.
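That scheme is small enough to sketch in a few lines of Python (the dict stands in for whatever database or cache holds session state):

```python
# The "hexify 20 bytes of urandom" bearer token: the token itself
# carries no meaning; the server-side store maps it to session
# properties. SESSIONS stands in for a database or Redis.
import secrets

SESSIONS = {}  # token -> session record

def issue_session(user_id: str) -> str:
    token = secrets.token_hex(20)  # 20 random bytes, hex-encoded
    SESSIONS[token] = {"user_id": user_id, "roles": ["user"]}
    return token

def lookup_session(token: str):
    return SESSIONS.get(token)  # None means "not authorized"
```

There is nothing for a client or library to parse, so there's essentially no parsing-layer attack surface.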
Now I guess the reason people may like JWT is that they don't have to have a database or store of tokens that they're issued and what authority each one connotes, because they can verify the signatures on the JWT and then believe the payload. And one system can issue authorizations that another system can consume without direct communication between the two. I think these believing-the-payload properties are a part of what Thomas doesn't like.
Security by blacklisting is a bad idea. You don't need to look far - it's JWT libraries that could be fooled into accepting a public key as a symmetric key [0], so even if you fix the no-op bug you are still vulnerable. That's what's wrong with JWT - you always have one more issue than you think.
[0]: https://auth0.com/blog/critical-vulnerabilities-in-json-web-...
Why not? If it's an API meant to be consumed by a server I don't see what the problem is.
It's fragile to request smuggling attacks too, because the password is not entangled with the request, just next to it.
We have lots of mechanisms that do better than both of those: client certs beat the first, and HMACing the request and key headers with a secret beats both.
It's a SO article on security for web transactions.
[1] https://stackoverflow.com/questions/549/the-definitive-guide...
You can learn and run automated tools for 6 months and end up knowing 1/3rd of what a great pentester knows.
If you want to know you can resist an attack from an adversary, you need an adversary. If you want to know that you followed best practices so as to achieve CYA when something bad happens, that's a different story.
But honestly the security picture is so depressing. Most people are saved only because they don't have an active or competent adversary. The defender must get 1,000 things right, the attacker only needs you to mess up one thing.
And then, even when the defender gets everything right, a user inside the organization clicks a bad PDF and now your API is taking fully authenticated requests from an attacker. Good luck with that.
Security, what a situation.
Which is not to say that it doesn't help.
As a pen tester, I'd much rather they tick all the boxes and save money because now I don't have to report all the low hanging fruit (which is fun the first two times you pwn an application but gets boring quickly -- I'd rather have something interesting to test).
There's no mystery to what an app security tester does, really, and getting the basics of app sec right early in the development lifecycle is probably the most important piece of having a good, solid app.
Sure, get a tester in at the end to poke it and find edge cases and weird security bugs, but for a new app, getting someone in during the early phases of development to provide security architecture advice is probably more important.
Given we're talking about APIs, we avoid many of the UX problems, but it feels like taking on a different set of problems than just using a bearer token. It does provide baked-in solutions for things like revocation and expiry, though.
Web developers in general are more familiar with other forms of authentication so unless you have a strong reason for picking TLS client certificates I would suggest picking something else.
In other words: I would be more likely to try out an API if it was based on Basic Authentication. ;-)
On the other hand some companies use them even for browser clients for passwordless authentication.
TLS client certs are nice if everyone knows what they're doing, but in a lot of orgs that just isn't the case.
There is absolutely nothing wrong with the implicit flow if the application (including in-browser ones) is requesting the token for itself (and not for some server or any third party). In the case of a standalone app, the extra step would just be meaningless.
There is a slight difference in the presence/absence of a refresh token, though, but that would make the implicit flow more secure (because, if standard-compliant, there won't be any refresh tokens at all), not less.
In the case of a browser, the token would end up in the browser's history, but given that a) if the browser itself is compromised the game is already over, b) it's not possible for other parties to access the history (beyond some guesswork that doesn't work for tokens), and c) such tokens should be short-lived, it's not a big deal.
> User own resource id should be avoided. Use /me/orders instead of /user/654321/orders
This has absolutely nothing to do with security. TBH, I don't see any issue if /me/ were a redirect or an alias for /user/654321/. That may make perfect sense if conceptual purity is desirable ("for each object there is one and only one URL - the canonical one"), with its pros and cons.
> Don't use auto increment id's use UUID instead.
Similarly, that barely has anything to do with security. One just has to understand that sequential IDs are trivially enumerable (and an obvious consequence of this fact - that API consumers would be able to enumerate all the resources or, at the very least, estimate their cardinality).
And as for security - it should've probably said UUIDv4, because if one accidentally uses e.g. UUIDv1, their IDs lose their unguessability.
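A quick demonstration of why the version matters, using Python's stdlib `uuid` module: a UUIDv1 embeds its generation timestamp and a stable node identifier (often the host's MAC address), so successive IDs are correlated and partly guessable, while a UUIDv4 is just 122 random bits.

```python
# UUIDv1 exposes structure; UUIDv4 does not.
import uuid

v1 = uuid.uuid1()
v4 = uuid.uuid4()

generated_at = v1.time   # 60-bit timestamp, recoverable by anyone who sees the ID
node = v1.node           # host identifier, stable across calls on one machine
assert uuid.uuid1().node == node

# v4 carries no such structure to recover.
assert v4.version == 4
```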
Can somebody explain this?
This goes hand in hand with abstracting all authorization checks to a single gateway/middleware layer that each call inherits, rather than a spot check per call or a group of checks for different groups of calls.
(This is in addition to what 'lvh and 'tptacek have said already.)
Preventing flexibility at the URL level rather than performing proper authentication strikes me as a poor decision.
For the initial release I built a page that uses HTML buttons and basic JavaScript to GET pages, passes a key as a parameter, and uses web.py on the backend.
It seems like it would be a lot of work to implement the suggestions here. At what point does it make sense?
For example you can sign session IDs or API tokens when you issue them. That way you can check them and refuse requests that present invalid tokens without doing any I/O.
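A stdlib sketch of that idea (a vetted library such as itsdangerous does this with more care): the signature check is pure computation, so invalid tokens are rejected before any database I/O happens.

```python
# Sign session IDs at issuance; reject forgeries without touching storage.
# Illustrative only - prefer a maintained library in real code.
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side signing key

def issue_token(session_id: str) -> str:
    sig = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def check_token(token: str):
    """Return the session ID if the signature is valid, else None."""
    session_id, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(expected, sig) else None
```

Only tokens that pass the signature check ever cost you a database lookup, which blunts the cheapest flavor of resource-exhaustion attack.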
I'm finding issues like API servers hanging/crashing due to overly long or malformed headers all the time when I work on front-end projects.
Programming in a language with automatic range and type checks does not mean that you can forego vigilance even with the most mundane overflow scenarios: lots of stuff is being handled outside of the "safe" realm or by outside libraries.