I run a 1.0a service provider and write clients against it. I'm thinking about wading through the current 2.0 draft, picking out the parts relevant to small startups with an API, and publishing a post about how to implement the sane parts of the 2.0 spec.
We would be far better off if a single company/dictator (like, shudder, facebook) came up with a simple, competently designed one-page authentication mechanism, provided some libraries in the popular languages, and we all just went with that.
I was pretty happy with this result, since we could write a simple page like https://developers.facebook.com/docs/authentication/server-s... which conformed to the spec (http://tools.ietf.org/html/draft-ietf-oauth-v2-12#section-4....) and was an easy-to-implement explanation of authenticating a user.
But the OAuth 2.0 spec we were working off of is now eighteen months old, and, as Eran said, the vast majority of those contributors have drifted away from the effort over the past year :-\
I found those fields were 90% of the problem with OAuth 1.0a implementations. Maybe there's security value I'm missing in those parts even in an SSL environment, but I doubt it, since SSL does the exact same thing.
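To make the point concrete, here is a minimal (and deliberately simplified) sketch of the OAuth 1.0a HMAC-SHA1 signing step those fields drive. All names and secrets are placeholders; real implementations must also handle duplicate parameters and sort by encoded key, which is exactly the kind of detail clients got wrong:

```python
# Sketch of OAuth 1.0a request signing -- the part that caused most client bugs.
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign(method, url, params, consumer_secret, token_secret=""):
    # 1. Percent-encode and sort every parameter. quote() must escape
    #    everything except unreserved characters (safe="") -- a classic bug.
    enc = lambda s: quote(str(s), safe="")
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. Build the signature base string: METHOD&URL&PARAMS, each encoded again.
    base = "&".join([method.upper(), enc(url), enc(normalized)])
    # 3. The key is consumer_secret&token_secret -- even an empty token secret
    #    still requires the trailing '&'.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Any mismatch in encoding, ordering, or double-encoding between client and server produces a signature that silently fails to verify, which is why SSL plus a plain secret is so much easier to get right.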
One major one is that the participants often come from different areas, with different perspectives and visions of the outcome. Since participants rarely start with a firm set of agreed-upon requirements or use cases, each member has room to craft their own understanding of the goals. I've seen way too many working groups attempt to create a 'master' spec that takes every possibility into account. Or, alternatively, clusters of people from similar problem domains form, and powerful clusters can take the work off course.
A second major problem is that there is often a lack of real user participants. Standards work is about as close to the dullest engineering work as one can get. Worse, it seems to attract a certain type of engineer who loves building specifications. Because of this, the real users usually flee immediately. That leaves a body of over-educated, overly technical people to argue at length over sometimes irrelevant details. Those people are definitely necessary for the standard to work in the end, but because the real users flee, their influence usually goes unchecked.
A third reason is that working groups rarely seem to use iterative and incremental life cycles. There's rarely any working code, often little V&V, and participants and users often can't experiment with their ideas. As we know, what's good in theory, sometimes doesn't work well in practice.
I think there are systematic reasons much standards work fails. The 'design-by-committee' outcomes arise from 1) lack of firm use cases to bound the work, 2) dissimilarity between participants, 3) lack of real user participants, 4) lack of iterative / incremental cycles.
Once upon a time, the IETF claimed to be interested in "rough consensus and running code". "Rough consensus" went out the window a long time ago; I suppose running code had to follow sometime.
Perhaps the WG could produce a set of use-case prototypes on which proposals are built.
It's fascinating.
I sometimes think that these people who push for the "second system" really have no idea of the problems they are bringing about. That is, it's innocent.
Hate me for saying so, but there are just a lot of people working in software who lack a sense of wisdom. Folks like Fred Brooks who can see the madness are few and far between. Even rarer are those who both see the stupidity and take action to stop it.
And if it's different people doing the second one, watch out.
Firstly, reducing the burden of tricky and unnecessary crypto code on the client is useful.
Secondly, some of the article's points don't even make sense, like saying tokens are necessarily unbounded, which isn't true. The issuer can easily include the client_id in the token and check for its revocation when used, as it did in OAuth 1. The same is true for self-encoding: clients don't have to issue self-encoded tokens and can instead issue unique id-style tokens with long expiry times. As for refresh, that's unfortunate but issuers could easily work around it if the OAuth 1 way was preferable.
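A minimal sketch of that point, with an invented in-memory store for illustration: nothing in OAuth 2 stops an issuer from binding each token to a client_id and checking revocation on every use, exactly as OAuth 1 providers did, so tokens are only "unbounded" if the issuer chooses to make them so:

```python
# Sketch: opaque (non-self-encoded) OAuth 2 tokens bound to a client,
# revocable at any time by the issuer. Store layout is invented.
import secrets


class TokenStore:
    def __init__(self):
        self._tokens = {}  # opaque token -> {"client_id": ..., "revoked": bool}

    def issue(self, client_id):
        # A unique id-style token: just a random string the issuer looks up,
        # not a self-encoded blob the client could hold forever.
        token = secrets.token_urlsafe(32)
        self._tokens[token] = {"client_id": client_id, "revoked": False}
        return token

    def revoke_client(self, client_id):
        # Revoking a client kills every token issued to it -- the tokens are
        # bounded by the issuer's records.
        for rec in self._tokens.values():
            if rec["client_id"] == client_id:
                rec["revoked"] = True

    def check(self, token):
        rec = self._tokens.get(token)
        return rec is not None and not rec["revoked"]
```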
In short, OAuth 2 is simpler to implement for the client in exchange for being slightly harder on the issuer, whilst also being more flexible. Yes, it relies on SSL for security. So does your bank.
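The "simpler for the client" claim is easy to see in code: with a bearer token, the entire client-side auth step is one header over TLS, with no signing at all. URL and token below are placeholders:

```python
# Sketch: the whole of client-side auth in an OAuth 2 bearer flow.
import urllib.request


def bearer_request(url, access_token):
    # Build the request; the caller passes it to urllib.request.urlopen().
    # Confidentiality and integrity are delegated entirely to TLS.
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {access_token}")
    return req
```

Compare that with the multi-step signature base string OAuth 1 required for every single request.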
My point was that OAuth 2 improved on OAuth 1 in a number of ways for clients and is at least as flexible for the issuer, so I think the author is just disturbed by the reliance on SSL for security and by the crappy, slow standardisation process, and ended up going overboard.
You're then stuck trying to find out where you went wrong with no guidance. The last time it was an incorrect content type on the POST. Another time, a coworker accidentally had the key and secret the wrong way around.
Both scenarios produce the same error, and you're often stuck groping around for a solution.
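For reference, here is the token-endpoint POST that both of those bugs break, sketched with the standard library; endpoint and credentials are placeholders. The body must be form-encoded, and swapping client_id and client_secret typically yields the same opaque invalid-client error as a wrong Content-Type:

```python
# Sketch: a correctly formed OAuth 2 authorization-code token request.
import urllib.parse
import urllib.request


def token_request(token_url, client_id, client_secret, code, redirect_uri):
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,          # swapping these two fields produces
        "client_secret": client_secret,  # the same opaque error as below
    }).encode()
    req = urllib.request.Request(token_url, data=body, method="POST")
    # Sending JSON here (a common mistake) fails with an equally opaque error:
    # the spec requires application/x-www-form-urlencoded.
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req
```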
A bunch of people in a committee spend three years trying to build the security token system to end all security token systems, still have nothing to show for it, and we're sad?
Why are people trying to do this anyway? OAuth is just an idea: "Hey here's a really good way to handle things and if you do it this way it has some really great benefits."
Why aren't these things like JavaScript frameworks, where everyone has an idea? I don't think it's practical that every SDK and framework will use one security system that was agreed upon. It's just not going to happen. Everyone has unique requirements.
I think he's just upset that more people have concerns and needs and nobody can compromise to solve all of them. Well yeah. Naturally. They wouldn't be needs if people could just overlook them for someone else's idea on how to do it. They would just be problems people are looking for someone else to solve.
"Workable code" is a tangent and really has nothing to do with the reason we avoid Microsoft protocols. More to the point, we want to avoid APIs with elements that facilitate vendor-specific implementations, i.e., lock-in. Like MS's OOXML, OAuth 2.0 has special interest written all over it.
Have you ever tried to write an interoperable authentication system using Active Directory? I'm particularly thinking of the UDP LDAP query and the multiple-byte-order (little-endian and big-endian!) response.
"Hey here's a really good way to handle things and if you do it this way it has some really great benefits."
Because it doesn't really work unless everybody does it the same way.
That doesn't disprove my point. Just because you don't like their approach doesn't mean they don't get points for having an approach. So far OAuth is vaporware, inconsistent across almost every implementation, yet still effective, because it's just an idea.
Because it doesn't really work unless everybody does it the same way.
I disagree. It's not hard to adapt to OAuth+twists for a given provider. It's not like it's some secret handshake nobody knows, so you can't get into the cult meeting. It's just signing data and exchanging tokens. We don't need a universal standard; we need a universal understanding of the problem we are trying to solve and various recommendations for how you might solve it. I think the work on OAuth is already complete.
Because OAuth is a protocol designed to connect systems developed independently, and as such it's useless without a high degree of standardization. It's like saying "why can't we all use our custom version of IP/TCP/HTTP/TLS". It simply wouldn't work.
Everyone has unique requirements.
Not really; the reality is more "Not everyone has the same requirements", which still leaves very large groups that do have the same or similar enough requirements; in fact, we've seen that with OAuth 1.0(a).
Yeah, I totally disagree. It could be like any other system: just have a .NET DLL, a Ruby gem, whatever, to facilitate the basics of the protocol. There's nothing amazing about OAuth. It's hardly a protocol in its own right. It's just an agreement on transferring some data (some signed, some not) on top of another protocol. There's no magic sauce. You don't need standardization, because anybody could build a Ruby gem to support any variation of it. Whether people choose to do that is a different question.
What is the point of a standard that cannot be implemented the same way twice? It's insane.
That said, most smaller vendors stick to the sane bits; it's the big guys like Intuit or Microsoft that over-engineer their auth and pull out every fiddly feature in the spec.
This was both a relief and slightly horrifying when I was working on an OAuth2 server in Node. It encourages a lazy "implement the parts we care about and that are required, and take shortcuts for unspecified things" approach. I thought the separation of access tokens and refresh tokens was wrong: once you're just giving the client an encrypted string to avoid certain DB lookups later, you can put whatever data you want in it (the spec doesn't care), including whatever manages refreshes, revocations, etc. I like the idea of expiring tokens, of course, but it would simplify the client significantly to just replace the currently used token with a new token issued by the server if one is returned. I recall the 'standard' flow is "request with access token, fail, request a new access token with the refresh token if you have one, maybe succeed, maybe get a new refresh token, and on success retry with the new access token". Having the access token carry the data to refresh itself is simpler. I'd also agree it's bad that token security is reduced to cookie security by default; really, the whole rant is spot-on.
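That 'standard' flow can be sketched in a few lines; here fetch() and refresh() are stand-ins for real HTTP calls, and the token dict layout is invented for illustration:

```python
# Sketch of the standard OAuth 2 client flow: try the access token, and on
# failure fall back to the refresh token before retrying once.
def call_with_refresh(fetch, refresh, tokens):
    """tokens: dict with 'access' and optional 'refresh' keys (mutated in place).
    fetch(access_token) -> (ok, response); refresh(refresh_token) -> new tokens."""
    ok, resp = fetch(tokens["access"])
    if ok:
        return resp
    if "refresh" not in tokens:
        raise PermissionError("access token rejected and no refresh token")
    # Exchange the refresh token for a new access token (and possibly a new
    # refresh token), then retry the original request once.
    new = refresh(tokens["refresh"])
    tokens["access"] = new["access"]
    tokens["refresh"] = new.get("refresh", tokens["refresh"])
    ok, resp = fetch(tokens["access"])
    if not ok:
        raise PermissionError("retry with refreshed access token failed")
    return resp
```

A self-refreshing access token, as the comment suggests, collapses the refresh branch into the server simply returning a replacement token alongside the response.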
http://webcache.googleusercontent.com/search?q=cache:lDQVFky...
(his blog seems to be defunct, hope that's not permanent)
Seems like OAuth WRAP has been officially deprecated in favor of OAuth 2.0, but given these issues...
Exactly the same thing happened to the whole Semantic Web effort at W3C. It basically got overtaken by enterprise and now it is of little interest or use to regular web developers.
The W3C semantic web efforts were run by academics, and were never of interest to enterprise OR to web developers.
That right there is what killed OAuth 2.0. From day 1 these members didn't have the specification as their highest priority. They were only thinking of how the specification could serve their own ends. This isn't unique to the enterprise world, but that mindset has more than its fair share there. The web community represented the group that put the specification as its highest priority. When the specification was perverted, they left.
I do realize that it's slightly more complicated for the client developer, but all things considered I think documenting my API in the best possible way will outweigh the perceived disadvantages.
Upon reading the spec, it seemed that OAuth2 is really just some rough guidelines. Pick and choose what you need for the particular flow you're implementing.
If the enterprise world wants a standard, let them make their own.
Maybe next time it'll work!
In short, it's a collective name for many Web Services specifications that are still heavily used in the "enterprise" world. It's a mess.
That does not make it a mess.
WS-Trust, WS-Federation etc. have already solved problems that OAuth 2.0 attempts to solve, which doesn't make it bad, as Eran states. Bad is subjective.
Whether someone chooses to use it or not depends entirely on the requirement. Choice is good.