People love to talk about how the web is about open standards and such, but it really is rather closed.
It's driven less by standards and more by de-facto implementations. Soon we can get rid of the standards committee and just talk to the implementers of webkit to define the "standard".
And I think even worse has been the wholesale discounting of plugins. I still strongly believe that being tied to JavaScript as essentially the only client-side language is a mistake. It's not a great IL, and limiting the language for such a pervasive platform is scary. A powerful plugin model would be, IMO, one of the best things for a truly powerful web.
I wish the web was more open. I wish that browsers were a truly extensible runtime that specified lower level abstractions, that allowed more innovation at the top of the stack.
It feels like we're walking into the dark age of the internet.
Let's start with the thesis statement: "The web is not open and becoming increasingly less so."
On its face, this statement is not only false, it is painfully so. It's a bit like saying the world is not round and becoming increasingly less so; pretty much anyone would reply, "the world is clearly round, and it's impossible to change that." Similarly, there is absolutely nothing preventing Ken, or anyone else, from building an entirely different "web", just like Tim Berners-Lee did back at CERN. So the definition of the word 'open' here clearly needs some additional verbiage.
The next statement helps a bit, "It's driven less by standards and more by de-facto implementations. Soon we can get rid of the standards committee and just talk to the implementers of webkit to define the 'standard'."
This deliciously captures a debate that has raged for 30 years at least. "Which came first, the standard or the code?"
Back when I was young and impressionable, the debate was between something called the "ISO seven-layer networking model" and "TCP/IP". You see, the International Organization for Standardization had decided that the world needed a global networking standard, and so they got their best standards engineers together to come up with what they gloriously christened "Open Systems Interconnection", or the OSI set of protocols. Meanwhile, a scrappy bunch of network engineers and hacker types in a loose-knit organization called the Internet Engineering Task Force were building networks that spanned countries. They wrote code, brought it to the meetings, debated what worked well and what didn't, and then everyone went back and wrote more code, etc.
The forces of evil ignored the IETF and focused on the ISO working groups, since the latter were making standards and the former were just playing around with code.
As it turned out, working code tended to trump standards. Debating changes with working code ('version A does / version B doesn't') beat debating with speculation ('it should / it might'); in the ISO process, changes got made to standards based on convincing arguments that had never been tried or experienced in practice. The result was that the OSI standards had a lot of stuff in them to avoid issues that weren't issues, and were missing responses to things that actually were issues.
A number of people found the 'code first, standard later' methodology superior for that reason, assuming the code was available and unencumbered by patent or licensing restrictions. The latter, of course, became a much bigger problem when the focus switched to the IETF and the 'big guns' started their usual games.
My first response, then, is that "open" means anyone can implement and contribute new stuff. And by that definition the web is very open. However, since the community favors an implementation model over a theoretical standards model, the 'cost' of influencing change is that you have to write code, as opposed to just making a good argument. And that disenfranchises people without good coding skills.
The second part of this screed is prefaced with this: "And I think even worse has been the wholesale discounting of plugins." Which speaks to the other side effect of "open" as in "we don't make any boundaries we don't have to."
From a mass-adoption point of view, the more variability you have in an experience, the harder it is to capture the larger audience. This is why cars and motorcycles are all driven in basically the same way, televisions have 'channels' and 'volume', and browsers have an address bar and bookmarks.
The unification of the structure allows for mass learning, and for talking in generalizations that remain true without device-specific knowledge. Can you imagine how hard it would be to write a driver's manual if every vehicle had its own customizable UI and indicators?
So as a technology matures the variability is removed and the common practices are enshrined into the structure.
What that means in a practical sense is that if you're trying to push the envelope in such a mature technology you will face higher and higher resistance. However, you are always allowed to create an entirely new way of doing things.
This isn't the 'dark age' it's the post renaissance start of the industrial revolution. Except instead of widely accessible books we've now got widely accessible information and a de-facto portal for accessing it.
It's hard to say whether the parent is correct about the dark age, but such a thing has clearly been the case in the past with regard to standards. There was a time in the not-so-distant past when browser vendors, particularly MS, did not care much at all about conforming to any sort of standards, and created a mess that things like jQuery were partially created to solve. So I think there is real ground for the parent's point. The issue is whether it is still getting worse.
One thing I think is different from previous years is that the programming community is less accepting of totally non-standard, even weird, proprietary implementations.
I seem to flip flop back and forth on this, personally. Some days the standards process seems bright and cheery, at other times, I fear Apple/Google/Microsoft are running the show.
Less than a decade ago you made sure your web sites ran well in Internet Explorer, a closed-source browser that was allowed to stagnate. IE treated the W3C's standards more as "guidelines" than as a specification.
- Today, every major web browser (except IE) uses an open source rendering engine (or the browser itself is open source).
- Every major web framework and library is open source.
- Most of the servers running the web are powered by an open source OS.
- The standards bodies are actually working faster than ever on new versions of ECMAScript and HTML.
- IE's marketshare is smaller than ever.
Years ago some guys had a crazy idea to make a browser for KDE. Today it's powering much of the desktop web and almost ALL of the mobile web. Perhaps I just like a good love story, but it seems like a pretty great achievement for "open". So it seems pretty disingenuous to say "the web is not open", and even more so to say that just because more people are working on the same open source project, it's "becoming increasingly less [open]."
Despite all of this, Netscape (a company whose business model at the time relied on selling web browsers and getting contracts with ISPs to bundle their software with subscriptions) managed to get Microsoft's hands slapped by the justice department for having the gall to give away a web browser as part of an operating system. That is something we now all take for granted: no one complains that Apple pushes Safari with OS X, nor do nearly enough people scream loudly about the fact that alternative web browsers on iOS are only possible if you use Apple's rendering engine in a crippled mode, defeating the purpose, despite Apple having near-monopoly status on the mobile web. The slap was so hard that Microsoft never quite got back the courage to keep moving forward under the new constraints. Thankfully, in the process, Netscape still died, and from its ashes arose the idea that an open-source web browser would be interesting and viable, leading to the ecosystem we have today.
Compare a plugin-driven environment with a standards-driven one. Say what you like about Flash, but it was able to guerrilla video onto the web without anyone's permission.
The version of WebKit used on iOS is actually not open source and forkable. Even WebCore, which is LGPL, Apple works around: rather than releasing source changes for iOS-specific features, they release binary .o files that users can link in.
Hell: Chrome for Android isn't even open source. People tend to totally forget that "WebKit is open source" is meaningless in the general case, as the BSD license allows specific forks to be closed, and all of the mobile ones hold stuff back.
Standards are great for things like protocols (even languages), but an entire web browser is a tad more complicated than TCP or even C++. No two browsers have ever implemented HTML/JS/CSS perfectly and they never will. If that's the case, then what's the point of a "standard" anyway?
Which means that if you want hardware capable of rendering the web it can't be low-power highly-parallel hardware; it has to be high-power-consumption fast-serial-operation hardware. Why is that bad? I guess that's a matter of perspective. I think that would be a terrible outcome, personally.
I should point out that I'm not aware of any compiler that has implemented C++ perfectly, and I doubt any ever will given that it's a moving target. So why bother having multiple compilers or a C++ standard at all? For example, why does the WebKit argument not apply to gcc? And note that in compiler-land not being able to compile some codebases is OK as long as you can compile the codebase your user cares about, while the Web equivalent (only rendering some websites but not others) is a lot more problematic, because the typical compiler user compiles fewer different codebases than they visit websites. And also because using different compilers for different codebases is a lot simpler than using different browsers for different websites.
We need (good) web standards, because we need consistent web architecture where features work together and somebody does long-term planning.
It's easy to use and misuse on a large scale. What runs the same everywhere is great for app developers and crackers alike.
Its size makes it a valid target for all kinds of dubious organizations. And not just for exploiting existing vulnerabilities, but for introducing new exploits into the source.
> It's driven less by standards and more by de-facto implementations.
That was always the case. In fact, one of the goals of the WHATWG (fathers of HTML5) was to standardize how code is rendered even when it is invalid.

The persuasiveness of your argument is harmed by this sort of melodrama.
By the way, the "Dark Ages" are named such because of a lack of written historical records from the Early Middle Ages. The negative connotation attached to the phrase by the general public is considered inaccurate by historians.
Your comment in 1995: Soon we can get rid of the standards committee and just talk to the implementers of Netscape to define the "standard".
And in 2001: Soon we can get rid of the standards committee and just talk to the implementers of IE to define the "standard".
And in 2006: Soon we can get rid of the standards committee and just talk to the implementers of Firefox to define the "standard".
...and for more comparison:
Your comment in the late 70's on computers: Soon we can get rid of the hobbyists and just talk to Apple to define the "standard".
And in 1996: Soon we can get rid of Apple and just talk to the creators of the PC to define the "standard".
And in 2010: Soon we can get rid of the PC and just talk to the creators of the iPad to define the "standard".
Just sayin'
I've said for years that a pluggable javascript engine should be something that was fundamental to browsers. Extending it further, plugins that allowed for alternate and complementary lower-level technologies (easily embed a python engine, for example) would lead to more competition and innovation.
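As a rough sketch of the idea (hypothetical names, nothing like a real browser API), the host side of such a pluggable model could be as simple as a registry keyed by script type, with JavaScript itself as just one more plugin:

```javascript
// Hypothetical sketch: a host that dispatches script blocks to
// pluggable engines instead of hardwiring a single language.
const engines = new Map();

// A plugin registers an evaluator for a MIME type.
function registerEngine(mimeType, evaluate) {
  engines.set(mimeType, evaluate);
}

// The host runs a script block with whichever engine claims its type.
function runScript(mimeType, source) {
  const evaluate = engines.get(mimeType);
  if (!evaluate) {
    throw new Error("No engine registered for " + mimeType);
  }
  return evaluate(source);
}

// The built-in JavaScript engine is just another plugin; an embedded
// Python engine could register itself for "text/python" the same way.
registerEngine("text/javascript", (src) => eval(src));
```

A `<script type="text/python">` block would then route to whatever engine claimed that type, or fail cleanly if none did.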
We just had a discussion (or, I had a rant) about this at a local web meetup last night. 10 years ago it was "IE only". We're moving into "WebKit only" these days, especially if you're targeting mobile users. In some ways it doesn't feel like we've progressed all that much.
I think it's fine to have a reference implementation, but we need a broad set of implementations (with actual users) so that the standard doesn't inherit blinders from implementation decisions baked into a de-facto standard.
You could say "Well, the standards documents are irrelevant because no one follows them anyway", but that's another issue.
So, the internet is becoming more open, not less.
Is it a bad thing that AIX and Solaris fell by the wayside in the rush to Linux? I don't think so. Neither, then, is adopting WebKit as a sort of common kernel in browsers, IMHO. But that's all it is.. MHO ;-)
Note that Solaris innovated with ZFS, which helped spur Linux to implement btrfs. Competition matters, even in OS kernels.
I know the Linux kernel is mostly monolithic; is the Solaris/OpenIndiana kernel the same?
Even if it is, it seems unlikely to me that the core kernel team had much to do with ZFS.
It's really more about competition between file systems, or so it seems to me. Maybe I'm splitting hairs.
Hence the comparison. Firefox and IE each make up a major chunk of the market.
> as a contributor to WebKit you have the complete ability to drive it in a direction you wish (often for the better)
Not really. Follow the internal WebKit politics and you see a lot of conflicts. For example, Google wanted to push multi-VM support (for Dart) and Apple blocked that.
> WebKit is already a de facto standard
On mobile. Mobile isn't everything.
Also, should we have said "IE6 is already a de facto standard" and given up?
> I think one thing is clear already: WebKit has completely and unequivocally won mobile at this point. They are nearly the only rendering engine used on the vast majority of mobile browsers, including the soon-to-switch Opera Mini/Mobile browsers too. There is no reason to worry about a slippery slope, the slope has already been slid down. In order for any other browser to remain relevant in the world of mobile (which, you must admit, is quickly becoming the only world we live in) they must keep feature parity with WebKit.
Again, this is utterly defeatist. Even if it were 99% true, should everyone give up?
> At this point it’s honestly a business/engineering decision for Mozilla and Microsoft (as it always has been).
No, Mozilla is a nonprofit and the decision would also regard whether it is good for the web, or not. I'm surprised to see John Resig not realize that - he used to work at Mozilla.
edit: And regarding the main point: jQuery worked in a space that was not standards-based. There were multiple JS libraries, and they fought for market share. No one tried to develop a standard that there would be multiple implementations for. Comparing jQuery to WebKit is odd.
> In the case of JavaScript libraries virtually everyone has standardized upon jQuery at this point.
This guy really lives on his own planet. Maybe most websites that only need to add a small piece of JS functionality are using jQuery, but I seriously doubt that "virtually everyone" writing large JS projects is using jQuery. Google Closure Tools, Sencha/ExtJS, and MooTools remain quite popular, and a host of developers are skipping compatibility layers altogether and only supporting IE9+ and other recent browser versions, particularly those targeting mobile devices.

Vague deployment statistics mean very little.
[1] http://w3techs.com/technologies/overview/javascript_library/...
Sure there is a long tail of sites that do use jQuery, but most of them don't do very much or get much traffic.
If you look at jQuery's market share by aggregate user sessions or by aggregate time on site across the entire web, it does not look nearly as important.
However, on mobile (a market which Chrome has only just entered) you could say that webkit is dominant and that we can already see problems there. To be honest though, I think that's misleading. Mobile Safari has been ridiculously dominant compared with other webkits, in mind and marketshare, and it's that monoculture causing the problems we currently see.
Serious new contenders like Chrome and Opera entering the mobile market with webkit renderers will, I think, actually help that situation to some degree by actually competing with Mobile Safari and not being half-hearted also-rans.
(There's some work being done to parallelize the rendering pipeline, and a bit on painting, but parallel CSS layout seems to be completely off the table in existing browser implementations.)
It's hard to know now what the very long-term effects will be, but John's point that the web has benefited from Chrome's creation is hard to argue with. As for whether it's benefited more or less than if Chrome had used a different rendering engine, we'll never know…
We'd have been in an interesting, and perhaps worse state if Safari and Chrome were both Gecko based instead of competing therewith.
Replace "jQuery API" with "Web API" and this actually argues against John's point. Multiple implementations are better for everyone.
I'm not arguing against multiple implementations. If Mozilla were to switch to WebKit, rewrite its DOM implementation to be 20x faster, and then release that -- that would be absolutely stupendous! Much in the same way that the Chrome team created a new JavaScript engine that was much faster than Safari's JS engine. I am arguing that the writing is on the wall for the common parts of a browser. A browser vendor's time will be used much more efficiently by collaborating with each other on the implementation instead of writing a number of separate implementations.
Rewriting WebKit's DOM implementation to be 20x faster wouldn't be possible without rewriting WebKit. The DOM implementation is one of the most central parts of any rendering engine. We're working on doing that (Servo), but not by building on top of WebKit for precisely this reason.
"Much in the same way that the Chrome team created a new JavaScript engine that was much faster than Safari's JS engine."
They did that by replacing JavaScriptCore wholesale, rather than building on top of it. This was only possible because JavaScript is a standard, not defined by JavaScriptCore's implementation. If JSC had had a monopoly and the Web started relying on JSC's quirks, then V8 might never have happened.
These days performance is heavily driven by the JavaScript runtime. While it's challenging to write a browser engine, it is much, much more challenging to write a really fast JIT'ing JavaScript runtime. It seems unlikely Opera would have been able to close the gap with WebKit on that front, much less surpass it.
At that point, any competitive advantage they hold in features is being offset in a fairly substantial performance penalty. Good move making the switch. Differentiate elsewhere.
That isn't really true. Sure, modern JITing JS engines are, arguably, the most technically advanced parts of a modern browser. However, they are relatively small and self-contained; a suitably (i.e. crazy-) talented team of engineers can get ballpark-comparable performance to V8/SpiderMonkey/etc. in a surprisingly short amount of time.
Most of the difficulty of making a browser fast is chasing the bottlenecks across multiple layers. For example it's no use having a super-fast javascript engine if your DOM implementation is so slow that real sites always bottleneck there. And there's no point in having fast DOM if your style layer is too slow to relayout when the underlying tree changes. And having a fast style layer doesn't help you much if your final compositing and painting are holding everything back. And of course different sites have radically different behaviour and what is an optimisation for one case might slow down a different case.
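To put toy numbers on that (the stage costs below are made up purely for illustration), a frame's total cost is roughly the sum of its pipeline stages, so a dramatic speedup in one stage buys little when another stage dominates:

```javascript
// Toy model of one frame's pipeline: script, style, layout, paint (ms).
function frameTime(stages) {
  return Object.values(stages).reduce((total, ms) => total + ms, 0);
}

// A layout-bound page: layout dwarfs everything else.
const before = { script: 4, style: 2, layout: 20, paint: 4 }; // 30 ms total
// Ship a 4x faster JavaScript engine...
const after = { ...before, script: 1 };                       // 27 ms total

// ...and the frame only got 10% faster, because the bottleneck
// was never the JavaScript engine in the first place.
```

The same arithmetic in reverse is why bottleneck-chasing pays: cutting the dominant layout cost would shrink the frame far more than any JS speedup.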
I was under the impression that Chrome uses WebKit for rendering and V8 for JS.
So, even if Opera switches over to WebKit, that shouldn't affect JS.
So why opera instead of Chrome? They'll differentiate themselves with the UI, mouse gestures, ad-blocking etc.
Just for scale, the latter takes about 2-3 years as recent history has shown, with a team that numbers a few dozen people at most. The former takes hundreds of developers, and several more years...
Remember those days? This is not a good thing. The HTML spec should be the standard, not WebKit's bugs.
IE was closed source and Microsoft disbanded the team that developed it (IE6). If it had a bug, there wasn't anything you could do about it. You just made your site work around the bug, possibly breaking it on other browsers.
> In the case of JavaScript libraries virtually everyone has standardized upon jQuery at this point.
In other words, as far as he cares, things other than jQuery have no right to exist. Although jQuery is very popular, whether there are better JavaScript frameworks than jQuery is arguably even more of an open question than whether there are better rendering engines than WebKit.
I believe it comes pretty close to illustrating what I believe is so wrong about his arguments.
jQuery or WebKit being dominant platforms doesn't require that innovation stop; it gives innovation the ability to explode. When you don't have to worry about nit-picky cross-platform capabilities or standardization, you get to focus on performance and on building sweet frameworks like Backbone and Angular.
I don't think he made that point at all. He is merely reflecting on the fact that jQuery has the most market share, and by a long way.
With browser engines things get a little more political - mono-culture is not a good thing. But at least Firefox holds enough of a market share, and of course IE is now a much better player in this space - so WebKit can't get away with too much silliness.
Software monoculture is not inherently a bad thing; it can be a good thing. Re-inventing the wheel isn't progress. The world is much better off with Linux than 20 different Unix variants. And the world is likely better off with WebKit than dozens of different browser variants.
The issue isn't having a software monoculture, it's who is in control of it. The situation with both Linux and WebKit is that we have all the benefits of a monoculture and very few of the downsides.
jQuery and Bootstrap, no. They are useful libraries, but developers can choose not to use them and still succeed. Few of the biggest websites use them, for instance; they mostly write their own JavaScript and templates.
But they all test against WebKit.
Only our smaller, cobbled-together pages use jQuery. If a page grows, we switch to something that fits our idiomatic style better: smaller modules in a CommonJS/AMD style.
jQuery takes over your code, and it seems like WebKit is doing the same thing. Some of our developers only include -webkit- prefixes until I complain loudly enough for them to throw in the rest: -moz-, -o-, -ms-.
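For anyone who hasn't hit this: a rule written only with the -webkit- prefix silently does nothing in other engines, so each vendor's prefix has to be spelled out, with the unprefixed form last so the standard behavior wins once browsers drop their prefixes (illustrative property; the prefixes are the ones listed above):

```css
/* WebKit-only: silently ignored by Firefox, Opera, and IE */
.fade { -webkit-transition: opacity 0.3s; }

/* Portable: every vendor prefix, unprefixed form last */
.fade {
  -webkit-transition: opacity 0.3s;
     -moz-transition: opacity 0.3s;
       -o-transition: opacity 0.3s;
      -ms-transition: opacity 0.3s;
          transition: opacity 0.3s;
}
```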
I hope FF never switches, because then we'd be even more locked in.
So I freaking hope not.
Personally, aside from a few small wrinkles, I prefer the experience of using Firefox over Chrome on Android.
For Opera it is the best move. They can now focus the majority of their development time on making the browser great instead of spending a decent chunk of it effectively replicating what WebKit does. I think in the coming year or so Opera will become a far better browser for it.
As for Mozilla and IE: you would expect Microsoft to have more than enough resources to keep working on Trident, or whatever it is called these days.
For Mozilla, is their OS tied to their own engine? I don't know how committed they are to it. After all these Firefox releases, tabs still aren't sandboxed, and phpMyAdmin often freezes the entire browser when looking at monster tables... perhaps they would benefit from spending more time improving the browser and less time working on the rendering engine.
I wonder what would have happened if Microsoft, Mozilla or Opera had open sourced their browser engine with WebKit. Perhaps we would have seen a split and more competition in this area.
Now it is about who has the $$$ to continue to develop their own proprietary engine.
Extremely, especially in this context. The reason Opera is switching (web compatibility issues if you're not the dominant implementation) is exactly the reason Mozilla would fight a switch tooth and nail - and remember that, unlike Opera, they cannot base that decision on profit.
Here's an extensive reply from a Firefox developer: http://www.quora.com/Mozilla-Firefox/Will-Firefox-ever-drop-...
> I wonder what would have happened if Microsoft, Mozilla or Opera had open sourced their browser engine with WebKit.
I have no idea what you mean.
Right now Microsoft has to focus on making Trident catch up with WebKit, and it's still 2 years behind WebKit in HTML5 features. Go to html5test.com and see how far behind IE10 is. It's further behind than Chrome 10 was when IE9 launched 2 years ago.
Mozilla's engine was at one point the go-to for a cross-platform embeddable browser, but things changed: https://groups.google.com/forum/#!topic/mozilla.dev.embeddin...
Edit: Should clarify, Mozilla as an entity that releases web browsers, not "Mozilla" as in the Netscape days.
That makes it sound like Chrome was somehow superior to Safari. Chrome has some nice features that Safari lacks (and vice versa), but from a browser engine perspective they are definitely on par.
Why does Firefox do so poorly? ;_; Mozilla is such a good organization (though, to be fair, it only survives thanks to Google's money).