Just look at Blogger...their client-side rendering is annoying as all get out. It's just a blog post, render it server side and give me the content, then sprinkle on some gracefully degrading JS on top to spice it up.
I say this as a huge proponent of Angular who uses it for all his web app projects, but who also wouldn't ever use it on a public-facing application.
URLs which can be stored and shared and are idempotent
Mostly stateless operation for anonymous use (fast to serve/load/cache)
Document formats that anything (including dumb crawlers and future gadgets) can parse and reuse in unexpected ways
What you call suboptimal browsing devices are what makes the web special and distinct from native apps. These are not trivial advantages, and most websites would benefit from these strengths of the web, even if they are an app.
As an example of where something like a single page app can shine on a public site, I've seen chat software which used it which worked really well (using socket.io I think), but only because people didn't care about sharing individual messages and the chat was ephemeral.
If you use a decent router, you get shareable idempotent URLs: https://solvers.io/projects/7GTeCKo7rGx5FsGkB
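To make the "decent router" point concrete, here is a minimal sketch (not any particular framework's API) of the core of routing: a pattern like `/projects/:id` maps the same URL to the same content every time, which is what keeps links storable and shareable.

```javascript
// Toy route matcher: turns a pattern like '/projects/:id' into a
// regex and extracts named parameters from a concrete URL path.
function matchRoute(pattern, path) {
  const names = [];
  const source = pattern.replace(/:(\w+)/g, (_, name) => {
    names.push(name);
    return '([^/]+)'; // capture one path segment per parameter
  });
  const match = path.match(new RegExp('^' + source + '$'));
  if (!match) return null;
  const params = {};
  names.forEach((name, i) => { params[name] = match[i + 1]; });
  return params;
}

// The shared link always resolves to the same view state:
const params = matchRoute('/projects/:id', '/projects/7GTeCKo7rGx5FsGkB');
```

A real router (ngRoute, ui-router, Backbone's) adds history/hashchange handling on top, but the URL-to-state mapping is the essence.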
> Document formats that anything (including dumb crawlers and future gadgets) can parse and reuse in unexpected ways
As in the article, you can use phantomjs to serve up static HTML to crawlers. They are correct in that it does slow you down and add complexity.
The main problem I think is that SPA tech is still immature and getting all the moving parts to build a public facing SPA working together is a time sink.
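The phantomjs-for-crawlers approach mentioned above hinges on deciding which requests get the prerendered snapshot versus the JS app shell. A minimal sketch of that decision (the bot list is illustrative, not exhaustive):

```javascript
// Requests carrying Google's _escaped_fragment_ parameter (the old
// AJAX crawling scheme) or a known bot user agent get routed to the
// prerendered static HTML instead of the JavaScript app.
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

function shouldPrerender(url, userAgent) {
  if (url.indexOf('_escaped_fragment_=') !== -1) return true;
  return BOT_PATTERN.test(userAgent || '');
}
```

In practice this check usually lives in an nginx rule or an Express middleware in front of the app.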
This interface style does not require any visual page refreshes to load new content, but it also still can support routing and deep-linking.
Most websites shouldn't be SPAs. One can still use Angular to code widgets in a regular page-based site, without using a JS router.
It's just that devs are getting lazy: they throw up a RESTful server app quickly, then don't want to deal with any view layer on the server and do everything in client-side JS. For some projects it makes little sense.
Knowing which one you are building can greatly inform the choice of framework.
Basically, SPA frameworks are useful when you are working with lots and lots of data moving back and forth. A good example is something like Intercom.io's interface. They have tons of tables and modals and data flying around. This isn't conducive to the standard browser request -> server render -> load whole new page on the client. It's just too slow. When you're interacting purely with data in a master interface, SPA frameworks are the way to go. And it isn't even a matter of literal page loading and rendering speed, it's the fact that refreshing the view with a whole new page on each link click is a context change that adds up when you're managing a lot of data or performing a lot of small tasks in an application.
But something like Blogger, where you're reading just some content, maybe some comments...there's no real benefit from loading it in a SPA environment. Render it server side, cache it, and fire it to my browser as fast as possible.
At Thinkful (http://www.thinkful.com/) we're building our student / education app in Angular, and are moving all browser-side code to Angular as well – both public and private.
In a lot of our splash or SEO-enabled content we're not making use of all of angular's features, but the upside of using it is that we have a single, unified codebase in which we can share libraries, team knowledge and design patterns. Simply put: Using Angular everywhere allows us to keep DRY. Testing the front-end using purely Angular is yet another core asset at Thinkful.
One framework for writing code and testing is much better than a hybrid of server-side rendering and Angular.
Our biggest challenge was SEO, but this was reasonably easily solved by using BromBone (http://www.brombone.com/).
There are reasons to stick with non-angular or JS frameworks, so it's not always a slam-dunk. For example, if Thinkful had millions of SEO pages that we needed to load as fast as humanly possible Angular would be a bit much... But that's not what we're optimizing for: We're building a phenomenal user-experience that we can support long-term, is well tested, can have a lot of developers use, and can have non-developers do their job inside our codebase (everyone at Thinkful codes).
For all this and more Angular has proven a great choice for both logged-in AND public sites.
Some people use screen readers, text-mode browsers, IE due to stupid work/school policies, etc. Some people like automating their workflow, which can involve scripted browser interactions. Some people actually care about security and privacy, and so run NoScript, etc.
Here's a link with all the nginx config you need to make it work: https://phantomjscloud.com/site/docs.html#advanced_seo
We also had another team from California working on this project, who consistently insisted that we go with Angular for a project of such complexity. Back then, on HN, everybody was writing about how awesome Angular is, and how you must use it for your next project and so on. It reminded me of the early MongoDB days here.
I was under constant pressure from my client too, since he was also reading a lot about Angular everywhere and the Californian company had him almost brainwashed in favor of Angular. After already falling for the MongoDB buzz (I used MongoDB for the wrong use-case and suffered the consequences), I decided to carefully evaluate the decision to go with Angular for the project.
After about 6 months of using Angular for a different medium-scale project, I decided against it for my client. I realized that Angular is the all powerful Battle Tank. It can do everything you want it to. But it's very tempting to choose a battle tank when all you need is a sedan to get you from home to office.
Angular has its own use-cases, but for the most part what I observed was that you could get a lot of mileage without using Angular, with just plain jQuery + Knockout (or any other similar framework of your choice) for most of the front-end.
In a simple calculation that I made (to pitch to my client), I estimated easily 25% savings in time (and thus money) by not going with Angular for our project. (YMMV)
Usually I tend not to open my mouth about/against angular here because most HNers seem to like Angular a lot and they downvote without a reason just for having a different opinion. But, I am really glad someone wrote a blog post about this.
Also, I don't see how jQuery + Knockout is so different from taking the Angular/Ember road? (edit: I see why from a legacy point of view)
Finally, the comparison with MongoDB is misleading: Angular and co. bring your code from a procedural mess into a well-known, structured area, whereas moving away from SQL didn't improve your architecture; it just proposed another one.
That said, I still go knockout for this sort of thing most of the time because their support for legacy browsers wins out, and in the industries I do work for old versions of IE are sadly common.
I find Angular really helps when you have a fairly complex single-page app that has non-trivial interactions. The complexity tradeoff you make using it is not worth it when you don't have those needs, especially when you have a team of people that need to be up to speed working on it.
Knockout's simplicity is hard to beat.
Sometimes certain libraries 'make sense' to me. Sometimes other people on a project choose a library because it makes sense to them.
I like that you tried to make it clear to your client by putting it in the time/money perspective.
Knockout is not commonplace.
They're just alternatives. One is Microsoft's, one Google's. They're both client-side frameworks.
Your post makes no sense at all. All it sounds like is that you're familiar with knockout but not angular and managed to convince the client to use something that everyone else wasn't familiar with, only you.
In fact this post is all about not using client-side frameworks like Angular. The arguments against it are the same as the ones you'd use against knockout.js. So your post makes even less sense.
Knockout is a common data binding library in the JS world. It does one thing and only one thing very well: data binding. Sure, it might have little features here and there that allow you to do other things, but its core feature is data binding.
>They're just alternatives. One is Microsoft's, one Google's. They're both client-side frameworks.
No, they're not. Knockout is a data binding library. One of Angular's several offerings is data binding.
>..and managed to convince the client to use something that everyone else wasn't familiar with, only you.
That's an assumption. I never said that nobody knew it except me. Sorry if it wasn't clear from my original post, but everyone in my company knows all the major frameworks - Angular, Knockout, Ember, etc. etc. We never get religious over this stuff and always use what's best for the project in hand.
>The arguments against it are the same as the ones you'd use against knockoutjs.
You completely missed my point - Angular is X+Y+Z, my suggestion is to use a framework for X, if you need mostly just X for your project. Replace X with any framework you want, including Knockout. It does not make sense to use a framework that offers X+Y+Z when you need just X or Y. That's my point. I'm sorry you feel offended.
Just to be clear - I'm not advocating any framework by name, including knockout in particular, I just mentioned it because I was documenting my use-case in my original post. Use what suits the best for your project and not because you read about it on HN/Slashdot/etc.
Please calm down your tone and don't get religious about this stuff.
Cheers.
Twitter learned it[1].
Lots of us learned it when we were experimenting as Web 2.0 was being born. Things were far more obvious, far more quickly then, as bandwidth and resources weren't anywhere near what they are today. Back then, we quickly realized that just a couple of delayed asynchronous calls could cause your app to slow to a halt and feel sluggish.
That's not to say it can't be done[2], it's just to say that, thus far for the most part, folks end up discovering reasons why they didn't "do it right" too late over and over. I could be wrong, but I feel like there's been a few posts to Hacker News within the past couple months with similar sentiment.
When people start suggesting client-side rendering, I usually ask something along these lines:
Why on earth would you leave something as simple as textual document model creation up to the client's 5-year-old machine that is busy playing a movie, a song, downloading a torrent, doing a Skype call, and running 15 other tabs, when your server is sitting around twiddling its thumbs with its 8-core, scalable, SSD- and memory-heavy architecture?
[1] - https://blog.twitter.com/2012/improving-performance-on-twitt...
[2] - http://www.quora.com/Web-Development/What-are-the-tradeoffs-...
Now, do people really think that way when they adopt these frameworks? Nope. I mean, they might think about speed, but we all know that loading a bit of static HTML and CSS is faster than any JavaScript execution.
That said, I'll ignore my point above and get a bit technical here: unless you're using Opera Mini, client-side rendering is indeed all we have for "textual document model rendering". That's what we call "HTML", folks, when we're not "viewing source". So... I'd give the client side a bit of credit here; things will improve with time.
Use the right technology for the job. And that advice keeps changing. Right now, I'm most influenced by http://www.igvita.com/slides/2013/breaking-1s-mobile-barrier... but once you've got caching/native, it's a whole different game. And if you add pre-fetching...
> Now, do people really think that way when they adopt these frameworks? Nope. I mean, they might think about speed, but we all know that loading a bit of static HTML and CSS is faster than any JavaScript execution.
You sort of gathered the problem up into a nutshell. People aren't thinking of separating the need for server communication all the way through. They aren't thinking the right way when they adopt the frameworks. They aren't thinking about what they don't know. That's okay, they can't. But, continuing to push the idea that they can is not helping anyone.
Not only is this affecting the actual performance of the app, it's affecting analyzing it, testing it, making it properly available to SEO, and probably other things not illustrated in this common revelation of an article. This isn't just one problem, it's a slew of problems that get so bad that ultimately the entire system needs to be rewritten. If it was just slowness, that'd be one thing, but it isn't.
> That said, I'll ignore my point above and get a bit technical here: Unless you're using Opera Mini, client-side rendering is indeed all we have for "textual document model rendering". That's what we call "HTML" folks when we're not "viewing source". So ... I'd give the client-side a bit of credit here, things will improve with time.
You're right, I should have said "textual document model generation". I've edited my comment as such. The act of rendering the model isn't normally the problem. The problem is that these frameworks rely on the client to turn their representation of the model into something else. They're converting something the browser doesn't natively understand into something the browser does understand, then using the browser to run a whole bunch of commands to generate representations of objects that can then be displayed on the screen.
Wouldn't it be nice if you could've skipped all that and just delivered clear instructions on how to render the information from the get-go?
Listen, SOAP is ugly. We all hate using it. But it's there and it's a standard for shit that matters because humans are fallible. We're not good at knowing what we don't know or how things may change. Every time a shortcut is taken, something else will need to be done down the path to ensure stability of the system at a later date. Often times the cost of that is human intervention.
Developers and big companies discover over and over why the nice and easy:
"Throw up a REST bro, JS that shit in Chrome, and get a back end as a service for the rest. We don't even need to worry about the fact that the server and client are different anymore! We can CODE ALL THE THINGS in one spot and not have to learn anything further! Isn't that great? Can we get our VC money now?"
...isn't sustainable.
Again, I'm not saying that it can't be done. I'm saying that it's really hard, you have to pay very close attention, and you need to know a lot up front.
Otherwise you get stuck trying to fix things you didn't know you didn't know. And you write another one of these blog posts.
Semantic nitpicking. It's obvious that the grandparent speaks about templating, which can be done both on client and server side.
Honestly, I'm really tired of people who pretend there is no difference between serving up HTML and serving a program that constructs that HTML. The difference is that in the second case you cannot get the content without executing the program written by someone else with all the relevant implications.
Also, people often miss another important fact: server-side rendering can be cached and shared across clients. Client-side templates must be executed by every client separately.
Additionally, I have to say Angular (or any client-side framework) seems a poor choice for a consumer-facing, content-driven site. Apps are for actively doing something, not passively reading. Or am I missing the point?
And I'm not sure I'd say passively reading is something we ever do on the web. Consider nytimes.com redesign -- it uses app concepts for a sidebar while devoting all attention on the prose in front of you. You can even navigate using arrow keys, though that could be improved: first time I do it, use a popup to let me know what happened and how to undo. The point is, the app-ification of the web is upon us, we just have to find language and frameworks that will best support it. Both client-side and server-side are necessary at points.
Obviously SPAs take a lot of extra work to make search engine friendly and are probably going to be the wrong tool for the job for any site which requires that. Much of the web isn't searchable and doesn't want to be searchable. If you are writing a web app to solve some business problem which sits behind a login angular really isn't a problem.
Think of the millions of poorly maintained and inflexible VB and Java business apps out there that are due to be replaced and the employees who are wanting to do business on the road with their macbooks, chromebooks and tablets. There is your market for Angular.
Most articles are so optimistic (because it is new, cool, fun) that it is hard to tell whether the "tool" is the right one or not; you only see it when you use it.
So I am glad when people/companies write about their experience with the "new" technologies. Everybody can then verify whether the tool is the right one for a project/problem or not.
E.g. you write "If you are writing a web app to solve some business problem which sits behind a login angular really isn't a problem". When somebody reads this, they think "cool, Angular is the right tool for a login backend application".
Slow... depends on what you're used to; I've worked with Java / Maven and such, and one step worse, Scala; if you want slow, go for those.
Complex. The author links to a certain gruntfile[0] as an example of a large, unmaintainable gruntfile, but apparently people forget that a gruntfile is just a Javascript / NodeJS file, and thus can be broken up into more manageable chunks - like any code[1]. Alternatively, there's newer, less config, more code build tools like Gulp.js[2].
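Since a Gruntfile is plain Node code, the split can be as simple as one module per task, merged back into the single object grunt.initConfig expects. A self-contained sketch of the pattern (the inline objects stand in for `require('./tasks/...')` modules; task options are illustrative):

```javascript
// In a real project each of these would live in its own file under
// tasks/ and be pulled in with require(); inlined here for brevity.
const lessConfig = { dev: { files: { 'build/app.css': 'src/app.less' } } };
const uglifyConfig = { prod: { files: { 'build/app.min.js': 'build/app.js' } } };

// Merge per-task config modules into one grunt-style config object.
function buildGruntConfig(tasks) {
  const config = {};
  for (const name of Object.keys(tasks)) config[name] = tasks[name];
  return config;
}

const gruntConfig = buildGruntConfig({ less: lessConfig, uglify: uglifyConfig });
// In the real Gruntfile: grunt.initConfig(gruntConfig);
```

The linked article [1] describes essentially this, so a 1000-line monolithic Gruntfile is a choice, not a necessity.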
#4 is also no longer valid; Angular's Protractor[3] wraps around Selenium etc and deals with angular's asynchronous behaviour, as long as you stay within angular's framework.
And #5 is to be blamed on the developer for not having attention to performance / total load times, not the framework.
I'm defensive, but then, I don't have a public-facing app.
[0] https://github.com/ngbp/ngbp/blob/v0.3.1-release/Gruntfile.j... [1] http://www.thomasboyt.com/2013/09/01/maintainable-grunt.html [2] http://gulpjs.com/ [3] https://github.com/angular/protractor
The idea is to keep a lot of the advantages of the traditional web development model, but, via HTML5-style attributes, RESTful URL design and partial driven UX, achieve a better UX.
It's not for everyone or for every problem, and it is still in pre-alpha (we are going to change from a preamble to HTTP headers for meta-directives, for example) but, if you find Angular too heavy-weight and foreign for your UI, it might be of interest.
Please contact me if you are interested in contributing.
In my opinion, a setup like this is close to what the next big wave of frameworks will use.
You can break your layout up into parts and have a site that is partially dynamic and partially static. You just pass the html that react renders to your templating engine.
Getting everything set up correctly can be a little hassle, but gulp is fast enough when doing a watch on the compilation step. Of course, because everything is JavaScript, you share the exact same component code between client and server.
This is a good example that helped me a bit[2]
[1] http://facebook.github.io/react/ [2] https://github.com/mhart/react-server-example
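The core of the pattern in [2], sketched without the React dependency so it runs standalone: a component function produces markup that the server serializes to a string (handed to your templating engine), while the client runs the same function against the virtual DOM. The `renderToString` here is a toy stand-in for `React.renderToString`, and the component is illustrative:

```javascript
// A shared "component": the same function runs on server and client.
function Comment(props) {
  return { tag: 'li', children: [props.author + ': ' + props.text] };
}

// Toy serializer standing in for React.renderToString: walk the
// element tree and emit HTML.
function renderToString(node) {
  if (typeof node === 'string') return node;
  const inner = (node.children || []).map(renderToString).join('');
  return '<' + node.tag + '>' + inner + '</' + node.tag + '>';
}

// Server side: embed the result in a normal server-rendered page.
const html = renderToString({
  tag: 'ul',
  children: [Comment({ author: 'a', text: 'hi' })]
});
```

On the client, React would then attach to this same markup and take over updates, which is what makes the partially-static/partially-dynamic split workable.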
TBH, a lot of sites really overdo the client-side rendering thing.
> you can pre-render the initial state of your app
This, I think, is the killer feature of Node, and the reason I'm slowly transitioning from Python for new web projects. You can reuse your server-side templates client-side (without worrying about, say, reimplementing Handlebars helpers in your server-side language), and can easily render full HTML templates for the client that get enhanced when the client-side JS loads. This also solves UI nuisances -- like your server's markdown renderer being different to your client-side preview (grr). Meteor and Derby are obviously heading down this path, and while I'm not sold on the rest of Node and the general JS style, having the same language in the browser and the server is too much to pass up.
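A toy illustration of that reuse, with a minimal `{{name}}` substitution function standing in for Handlebars: the same template string renders the initial HTML on the server and re-renders on the client, so the two can never drift apart.

```javascript
// Tiny mustache-style renderer (stand-in for a real template engine):
// replaces {{key}} placeholders with values from a data object.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}

// One template definition, shared between environments.
const template = '<h1>{{title}}</h1>';

// Server: full HTML for the first request (and for crawlers)...
const serverHtml = render(template, { title: 'Hello' });
// ...client: the identical template re-renders on data changes.
```

With Node both sides can literally `require` the same template module, which is the point being made above.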
For example, how often do your users go to their settings page? Does that need to be part of the SPA? Have a complex settings page that's composed of five or six tabs and 20 different user interactions? Maybe the settings page is its own mini-SPA.
How does a user flow through your app, do they really need every screen bundled under a single SPA?
Routing issues, complexity, code dependencies, etc...are all good reasons to not make one monolithic application, even if it's behind a login.
Likely your SPA should really be an app composed of a bunch of smaller SPAs. Your search functionality... mini app. Your separate workflows... a mini app. Your timeline... mini app. History view... mini app. Etc.
Breaking your app down into a bunch of smaller SPAs has a lot of advantages and implicit modularization, as well as productivity gains when working on bigger projects with bigger teams.
In general for web apps, you won't go wrong using different pages as a module system. It's proven, and when your app gets big enough, you don't necessarily have to worry about a huge up-front download.
BTW, if you develop web apps to be shimmed into native apps, like PhoneGap or something, then definitely look into the routing aspects of these libraries.
However...
“You can separate your dev and production build pipelines to improve dev speed, but that’s going to bite you later on.”
In my experience, you must separate dev and prod pipelines. It has never bitten me, because I make hundreds of dev (local) builds and dozens of kinda-prod (staging server) builds a day.
For dev builds, Grunt just compiles LESS but doesn't touch the scripts, so there is literally no delay there. In the dev environment, we load scripts via RequireJS, so there is no need to maintain a properly sorted list of scripts either.
For production, we concat them with grunt-contrib-requirejs with `almond: true` so RequireJS itself is stripped out completely. Production build takes time (10 to 15 seconds; we're also running Closure Compiler), but it's never a problem.
Even adding JSX (Facebook React) compilation didn't cause a slowdown for dev builds, because grunt-contrib-watch compiles them on change and puts them into a mounted directory.
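For reference, a rough sketch of what such a production target can look like in the Grunt config (assuming a grunt RequireJS task that supports the `almond` flag as described above; paths and module names are illustrative, not copy-paste config):

```javascript
requirejs: {
  prod: {
    options: {
      baseUrl: 'src',
      name: 'main',            // entry module
      out: 'dist/app.min.js',  // single concatenated output
      almond: true             // inline almond, strip the full RequireJS loader
    }
  }
}
```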
I made it a rule to use the square bracket notation for angular DI and that obviously takes care of any minification issues.
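The reason the square bracket notation survives minification can be shown with a small self-contained sketch. The `annotate` function below is a simplified stand-in for what Angular's injector does, not its real implementation:

```javascript
// Simplified model of Angular's DI annotation. Angular can infer
// dependency names by parsing the function's source, but a minifier
// renames parameters ($scope -> a), which breaks inference. The array
// ("square bracket") form carries the names as string literals, which
// minifiers never touch.
function annotate(fn) {
  if (Array.isArray(fn)) return fn.slice(0, -1); // explicit string names
  const src = fn.toString();
  return src.slice(src.indexOf('(') + 1, src.indexOf(')'))
            .split(',').map(s => s.trim()).filter(Boolean);
}

// Inferred from source — fragile under minification:
const inferred = annotate(function ($scope, $http) {});

// Square bracket notation — minification-safe:
const explicit = annotate(['$scope', '$http', function (a, b) {}]);
```

Both calls yield `['$scope', '$http']` here, but only the second still would after the function parameters are renamed by a minifier.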
2. Flaky stats and monitoring: Use event-driven metrics from your API and/or client side. Track everything in the sense of user, controller, action, params. Blacklist sensitive data. Derive metrics with funnels: user did X actions, returned, and subscribed. Conversion! It's all there; just understand your events.
3. Slow, complex build tools: You're not limited to Grunt or Node. For example, we use Rails and our own build scripts and generators to build full-stack Angular apps. Easy breezy.
4. Slow, flaky tests: There is room for improvement, but Jasmine and PhantomJS can get the job done. And let's not forget we're also testing our API. Use your go-to testing framework and let Jasmine/PhantomJS handle the client front-end testing.
5. Slowness is swept under the rug, not addressed: Precompile your Angular templates and only wait for API responses. Don't fragment your page load into separate requests. Resolve all the required data beforehand in the route provider.
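The "resolve all the required data beforehand" advice boils down to gathering every API call a route needs into one combined promise, so the view renders once instead of each widget fetching for itself. A framework-free sketch (the resolver functions are hypothetical stand-ins for real API calls):

```javascript
// Run all of a route's data fetches in parallel and hand the view a
// single, fully-resolved object — the idea behind a route provider's
// resolve block.
function resolveRoute(resolvers) {
  const names = Object.keys(resolvers);
  return Promise.all(names.map(name => resolvers[name]())).then(values => {
    const data = {};
    names.forEach((name, i) => { data[name] = values[i]; });
    return data;
  });
}

// Usage: everything the view needs arrives in one ready state.
const ready = resolveRoute({
  user:  () => Promise.resolve({ id: 1 }),          // e.g. GET /api/user
  posts: () => Promise.resolve([{ title: 'hi' }])   // e.g. GET /api/posts
});
```

In Angular 1.x the same shape appears as the `resolve` property on a `$routeProvider.when(...)` route definition.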
You talk about these server-side webkit parsers as tricks that “slow things down,” which indicates that you at least ultimately got them working. I never got that far.
You can do that; however, if you serve the static page to Googlebot IPs only, it's SEO cheating.
It type-checks like Haskell and allows code sharing between the server side and client side of the app. This means I can use code to generate a complete HTML site (for SEO purposes) when the URL is hit directly, and modify the DOM from there once the app is loaded... with the same code!
Obviously this code sharing is mostly interesting for apps written in Haskell. But I'm so excited about it that I had to share... :)
G'luck! The "JavaScript problem" (try Googling that) is a hard one.
[edit] I call it "playing with Fay", but I'm certain this will end up in production for me.
I agree with the first, but only if you're still in the days of SEO trolling. Frankly, it's just not as important if you're doing your other marketing aspects right.
For #2, I think there are plenty of ways to build in analytics. We use angularlytics and it works pretty well. Took me like 5 minutes to set up.
#3 - Yeoman. Generator Angular. Here's how I do it:
1. Make a client dir, and yeoman up your project with generator-angular.
2. Make a server dir and set up an Express server.
3. Grunt serve your client dir.
4. Make it so Express watches your .tmp and app folders for local dev.
5. Run your Express server.
6. When you're ready to serve it, Grunt build to a dist folder in your server folder.
7. For production, have Express serve the dist folder.
Yeah kind of dirty (since you're running two local servers for dev), but hell, it's fast as can be to setup and a pleasure.
#4 Tests? If you're doing tests of any sort, they're bound to slow you down to an extent.
#5 Isn't this applicable to all web apps? Mistakes and mismanagement of loading resources is a problem for anything.
Sure it has its problems, but there's just far too much productivity to be gained from using it. For example, Ajax animations are beyond time-saving.
The real problem with angular is the terrible docs ;)
It's like creating an online store and deciding to choose MongoDB or any other NoSQL branded database and then discover it doesn't support transactions and having to move over to a RDBMS like MySQL or PostgreSQL. The caveats listed in the article are definitely true though. As someone who's used AngularJS enough to know its downfalls, it's definitely not a one sized fits all solution and much like anything it comes with both its own pros and cons.
It's important you spend the extra amount of time when planning your project to ensure you choose the right tools for the right job (well at least at the time). If your requirement is to be indexable via search engines, choose a solution that allows that and so on. Don't use something just because it's the flavour of the day on the HN front-page.
> 1. Bad search ranking and Twitter/Facebook previews
This problem is patently obvious on the most cursory examination of single-page applications. If SEO is important and you want to do an SPA, then you must be willing to bear the cost of addressing HTML requests. For my startup, I wanted to keep things DRY, which led me early on to the Nustache engine for ASP.NET, allowing me to use the same Mustache templates on server and client. This doesn't have anything like the complexity described in the article.
> 2. Flaky stats and monitoring
Simply not true. Using Google Analytics and Backbone, you simply listen to the Backbone.history:route event and fire off a pageview using the Google Analytics API.
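A self-contained sketch of that pattern: the `history` object below is a minimal stand-in for `Backbone.history`, and `pageviews` stands in for the Google Analytics queue, so the listener logic is visible without either library. In a real app you would attach the listener to `Backbone.history` and push to the GA API instead.

```javascript
// Stand-ins for Backbone.history and the analytics queue.
const pageviews = [];
const history = {
  handlers: [],
  fragment: '',
  on(event, fn) { if (event === 'route') this.handlers.push(fn); },
  getFragment() { return this.fragment; },
  navigate(fragment) {              // simulates a client-side route change
    this.fragment = fragment;
    this.handlers.forEach(fn => fn());
  }
};

// The actual pattern: report a pageview on every route change, so
// client-side navigation shows up in your stats like page loads do.
history.on('route', () => {
  pageviews.push('/' + history.getFragment());
});

history.navigate('projects/42');
```

After the navigation, `pageviews` holds `'/projects/42'`, which is exactly what would be sent to Google Analytics.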
> 3. Slow, complex build tools
Complex, yes. Slow? Using r.js, no slower than a typical static language build.
> 4. Slow, flaky tests
Slow, yes, but no more so than desktop app test automation. I've found PhantomJS with QUnit (unit-testing), and CasperJS for integration testing to be quite reliable. It took a few days to get everything connected (scripting IIS Express to start and end in the background being the trickiest bit), but that was it.
> 5. Slowness is swept under the rug, not addressed
This is a UX challenge that is known and obvious up-front. Failing to address it is a design problem, not a technological one.
Overall, this seems the result of the ineptitude prevalent in inexperienced, "move fast, break things" teams. Rather than owning up to moving too fast and foregoing due analysis/research, they blame technology. Or, the article is a marketing ploy.
Substitute any other flavor and the same problems exist.
(to give scope size, we're replacing a several hundred screen Adobe Flex app with our new KO/JQuery app)
To anyone reading, you really should understand your workload before picking tools. And, you need to understand the difference between Web Application vs. Web Site: Which are you building?
Server-side rendering is the winner for content sites (as mentioned by the author). Beyond initial rendering, a server-side solution allows for more caching. Depending on the site, you could even push a good amount of file delivery to a CDN. In the end the author switched to Go, but Node.js + Express, RoR, PHP, Java with Play, etc. would all work just as well.
Next, are you CPU-bound, network-bound, or I/O-bound? If you're writing an application that requires heavy calculations utilizing massive amounts of CPU, then pick the appropriate framework (i.e. not Node). If you are I/O-bound, then Node may be a great solution.
Client-side rendering (such as Angular/Backbone/etc.) really shines when you need a web application (not a web site). These frameworks are best when the application code is significant relative to the data, such that many small JSON requests deliver better overall performance. Think of a traditional desktop application or native mobile app where the application code is in MB, but the amount of data required per request is in bytes. The same logic applies to web apps.
A few areas where problems such as what the author experienced emerge from blanket statements about technologies:
1. Gulp vs. Grunt: I use Grunt. I may switch to Gulp. But seriously, which one is "more complex", "faster", can be quantified. Lots of people pick the wrong technology because the web is littered with echo'd opinion statements. Exchange "more complex" for project A has a config file with X number of lines, while project B has a configuration of Y number of lines for the same task. Or project A uses JSON for its configuration while project B uses YAML.
2. "Or we could have used a different framework) - with a link to Meteor" - No please do NOT use Meteor for your site. I love Meteor and want it to succeed, but it is not the optimal choice for a content heavy site where each user views a large amount of data. As mentioned above, use a server-side rendering solution (like you did with Go), then cache, then push to a CDN. Problem solved. Meteor is awesome and is a great real-time framework. Use it when you need real-time capabilities...but not for a content heavy, static site.
> but they just weren’t the right tools for our site.
This could have been the title or first sentence and would have delivered 100% of the message if the reader read no further.
A lot of these articles about why we changed from technology A to B could be much improved if the original decision making was documented (not just the switch). As in we picked project A because we thought it would deliver A, B and C benefits based on our applications required capabilities. However, our application really needed capabilities M, N and O, which project A was not a good fit for. So, we switched to project B and experienced the following improvements. Therefore, it can be concluded that if your application needs M, N and O then project B will be a better fit.
This, 1000x over. I have static landing pages and about pages for search engines, but the app itself is a single page Angular app. The data does not have to be indexed.
Though Google is cheating here, of course. They use plenty of JS frameworks to serve content, yet those Google+ posts do show up in my search results. Though every G+ post does have its own URL, so I guess that's the way to do it.
Is anyone aware of a solution that allows clients to validate a version of an SPA website cryptographically, in the sense that once the app is downloaded its signature is checked, and any further visit that requires an update has to be validated and verified by the user?
I'm thinking of a way to let users trust their applications the same way you would trust a dist-upgrade on Debian, via the packager's PGP signature and chain of trust.
This would solve the current problem that sites can change client-side code at will, at any time, without users knowing, which makes it nearly impossible to build proper security solutions where the user actually owns and is responsible for his own security.
With such a solution in place, we might start seeing proper p2p/WebRTC security-related apps; we could even imagine an in-browser (read: JS) Tor-like service...
I love AngularJS for internal desktop tools I write, but I would never use it or any other client rendering script in the wild where my applications could be consumed by unknown form factors. Specifically, you have absolutely no idea how much memory allocation you are getting when you are dealing with mobile devices, and any assumption on the developer's part is asinine.
AngularJS was not the problem in this case; and I'd wager we are going to continue to see articles like this as developers go through growing pains of learning that you should optimize for the end user first, not yourself.
In the long-term I'd love to see a web framework that uses react on the server-side, kind of like how rendr uses backbone on the server-side [2]. Seems to make sense because react works against a virtual DOM, so it would allow you to avoid the hacky ways of working with an actual DOM in node.
1: https://github.com/prerender/prerender 2: https://github.com/airbnb/rendr
I'm building a big AngularJS app and I'm not using any build tools. Apart from minifying, what would you use it for?
Server side generated stuff would've been just great here or on the project I did!
This is a joke, right? Async loading is somehow bad? If it's that much of a problem, hold off rendering until you have all your data back, or heaven forbid implement item one of Nielsen's list of heuristics, "Visibility of system status", and chuck in a loading gif.
And if you want to deliver your content on mobile devices alongside a native app, then a client-side framework will be handy for you.
A content site is not an app. Single-page app frameworks are for apps, not for content sites.
Look at the site: https://sourcegraph.com/github.com/tomchristie/django-rest-f...?
Is that the best interface you can get? In 2014, everything I click reloads the page? A tabbed interface that doesn't load content inline is something users notice these days. No pop-ups of any kind? Why do I need to reload the page to see a list of 4 contributors? You've gained maintainability at the cost of user experience - a lot of user experience for very little maintainability.
There are sites that benefit little from client-side rendering - blogs and news sites, for instance - but most will gain a lot.
1) Indexing with PhantomJS is a breeze, truly. Not only are there a ton of libraries that already do it, there are even SaaSes that will do it for you for a fee. If you are really unable to come to terms with this, you can use React.js, which solves the SEO indexing issue completely.
2) If the only thing you are measuring on the site is page loads, then your site either lacks interactivity completely or you aren't measuring everything you should. You aren't measuring where a user left your site (an incredibly important metric) or any action he takes (assuming there is any he can take) that isn't navigation.
With Angularytics (and a thousand other libraries) adding analytics is maybe 5 lines of code, and you get declarative analytics on any link you want.
3) This site's JS is neither minified nor concatenated, so I'm not sure what build tools you need for Angular other than the ability to serve static content. But in any case it's JS; you are going to need to minify and concatenate it at some point for performance, no matter whether you use a fat client or some custom jQuery plugin. Even with Grunt, though I don't like it very much, the build file is maybe 10 lines long, and the build process takes milliseconds.
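For scale, a Gruntfile along those lines really is about 10 lines of real config — this sketch assumes the `grunt-contrib-concat` and `grunt-contrib-uglify` plugins are installed, and the paths are illustrative:

```javascript
// Gruntfile.js — concatenate everything in src/, then minify.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: { app: { src: ['src/**/*.js'], dest: 'dist/app.js' } },
    uglify: { app: { files: { 'dist/app.min.js': ['dist/app.js'] } } }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['concat', 'uglify']);
};
```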
4) And the alternative is what? Manual QA on every build? If you have a website with even minimal interactivity, you are going to need a browser-based testing solution. Karma is a breeze, and with the new setup, the only things you need to install are Node and Karma. It takes exactly 3 seconds, and you get one of the best isolated unit testing frameworks for client-side code. Angular is actually built around the ability to unit test it.
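The setup really is that small — a bare-bones `karma.conf.js`, assuming `karma`, `karma-jasmine` and `karma-phantomjs-launcher` are installed and the file globs match your layout:

```javascript
// karma.conf.js — run Jasmine specs once in a headless PhantomJS browser.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],
    browsers: ['PhantomJS'],
    singleRun: true
  });
};
```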
5) So you're saying that the solution to slowness is to have 43 unique resources loaded and rendered on every navigation? Page-reload slowness is one of the major hassles that Ajax, and fat clients as a consequence, are trying to overcome. Your site takes, for me, about 3 seconds to load from page to page (6 seconds to finish rendering); there is obviously no wait-time indicator you can add and no tricks to minimize this. Not to mention that rendering is slow, and server-side rendering is not only extremely slow, it can also cause parallel load, which will make things worse. If you don't care about your speed, it doesn't matter what framework you use.
For the sake of this you are losing interactivity, speed and lower bandwidth use, to name just a few.
And even then, there are solutions that do PhantomJS rendering on the fly. It might be a bit slower, but shouldn't be drastically so - some of the SaaS solutions I mentioned already provide such an option.
Our javascript tests suites are always much, much faster than our server-side test suites with similar coverage.
Most approaches I've seen use `forceUpdate` although it is arguably more React-ish to [pass along pure JSON][1].
We're currently sticking with passing JSON top-down and calling `renderComponent` when model changes so `props` never mutate.
What is your experience with this?