Also, I want to add a big proviso to the lesson "Upstream your tooling instead of rolling your own". Historically in Ruby, and now even more so in Node, the ease of publishing new packages and the trendiness of the language have at times led to a lot of churn and abandoned packages. The trick is to pick stable dependencies, and that requires quite a bit of experience and tea-leaf reading to do right. Often, maintaining your own stripped-down internal lib can be a performance and maintenance win over pulling in a larger batteries-included lib that ends up poorly supported over time. For example, a lot of people got burned by the state_machine gem, which at one time was very hot and actively maintained but went on to be left in limbo (https://github.com/pluginaweek/state_machine).
Longer writeup is at https://thomasleecopeland.com/2015/08/06/running-rails-updat... - it's 3+ years old now, but, hey.
I feel like this bit needed more of an explanation about how this applied to GitHub.
If I were to write a post about working in a 10 year old Ruby codebase I'd definitely include "Kill your dependencies" as a bullet point.
Or at least your monkeypatches!
Every piece of externally-maintained code is a security risk, surely? You are implicitly trusting the maintainer of that Gem to not hide bad things in their code. And every Gem that they depend on. If the Gems are old and the maintainer is unpaid and doing other stuff, how sure can you be that they're still vetting all contributions for security? Or that they haven't handed over the maintenance to someone you no longer trust? Or that the maintainer hasn't succumbed to economic pressure and included some malicious code in their Gem?
Or do you have to manually review every single line of code in every dependency yourself? That seems like a lot of work... I would definitely prefer to write my own code for a feature than review thousands of SLOC of someone else's code to spot any problems.
I get that the core Rails codebase gets security-reviewed regularly, but does that happen for Gems? And is it methodical and thorough, or is it just "lots of eyeballs"? And if so, is there a threshold of Gem popularity below which there aren't enough eyeballs to spot problems and the Gem should be considered insecure?
And if you do spot a problem, do you report it and hope the maintainer has time to do something about it? Or do you write a PR and submit it, hoping they accept it? Doesn't that then mean you're maintaining someone else's code base? Again, I would massively prefer to write and maintain my own code than maintain someone's else code (or wait for them to fix a problem that they may no longer care about).
How do you build a secure application for something as trusted as Github while gleefully incorporating all this third-party code?
Rails is now a mature framework and part of the problem is its lack of consideration for large existing codebases running in production. While there are nice tools to help migrate (e.g. rails:update) that hit surface issues, the deep problem is that there are a lot of decisions made going from version to version that are obviously unfriendly to established projects. e.g.: https://github.com/rails/rails/issues/27231
Additionally, there are a lot of gems that are losing momentum, which are near-core to Rails. e.g.: https://github.com/thiagopradi/octopus/issues/490. This is a side effect of the above issue, where the alternatives to Rails are taking a lot of the community away to focus on newer/shinier things. Fortunately, we have companies like GitHub and Shopify that are still very much invested in the success of the ecosystem.
All that said, it's still a great framework to go from 0 to production with a new idea or project.
Other ecosystems we're entrenched in (Node for example) have their share of issues as well, but we won't go into those.
That's just untrue. Almost half of the Rails core team works at GitHub and Shopify which both have huge 10 years old codebases and I can tell you they take breaking changes very seriously.
To be fair - this has gotten a lot better. Upgrading 2.3 -> 3.2 was terrible. 3.2 -> 4.0 was terrible. 4.0 -> 4.1 was rough. Since then, I've found the upgrades pretty easy - to the point I ran rails 5.2.0-rc's in production for a while.
As you fairly note, a big problem is that related gems lose momentum and they don't get updated - which blocks other updates. On the flip side, they usually aren't that hard to update and submit a PR on, either.
Even not having those patches merged quickly is not so bad in ruby - it's easy to tell bundler to look at your fork of a gem on github rather than pulling the upstream.
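To make that concrete: pointing Bundler at a fork really is a one-liner in the Gemfile. The gem name, org, branch, and ref below are placeholders, not a real project; pick one of the two forms, not both:

```ruby
# Gemfile: pull a gem from your own fork instead of rubygems.org.
# "your-org", "some_gem", the branch, and the ref are all placeholders.
gem "some_gem", github: "your-org/some_gem", branch: "rails-5-compat"

# or, fully explicit and pinned to a commit:
# gem "some_gem", git: "https://github.com/your-org/some_gem.git", ref: "abc1234"
```

Bundler then clones and builds the gem from that repo, so your patched fork ships with the app until upstream merges your PR.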
I think the upgrade path from 2.3 being terrible is generally accepted as being true. But I don't remember any hard problems from 3.x onward.
At least for Rails itself. Gems dependencies are another problem.
Like on one project we had pains every time Rails bumped a minor version, because the previous devs thought using Squeel instead of the built-in ActiveRecord query interface was a good idea. Just for being able to write slightly "nicer" queries, and now this is a major blocker for going to Rails 5.
People don't upgrade their dependencies across the board, and it's a massive problem for long-term security and maintainability.
Not sure about the upgrade early. It’s a different kind of pain to be one of the first to use a new Rails version vs lagging a couple of months behind.
For example, most of our configuration management is set up to just pull in the latest cookbooks from upstream during tests, and as long as all integration tests across all projects succeed, they get uploaded to our chef-server.
People argued that it would be annoying because things would break all the time. And yeah, things break with updates, though the opscode community is remarkably disciplined about semver. But that's what we have tests for.
And honestly: I'd rather deal with one broken update per day than 300 broken updates once per year. One bad update usually requires some nudging and that's it. 300 bad updates at once are a fully blown nightmare and you'll need days just to figure out what is even going on.
Upgrade Early does not mean Upgrade Instantly. I would argue that "lagging a couple of months behind" is upgrading early, because these upgrade horror stories always come out of companies that are years and years behind.
Having some patience and waiting a little, combined with the discipline to not wait too long is part of a mature skillset.
* track master, and notice breakage as soon as possible (you get much smaller deltas when trying to figure out breakage, which is always a big help), and are able to fix it or report it upstream ASAP, or
* hold off slightly till the new release has had more bugs found post-release.
I'm talking big packages, like less-loader. html-webpack-plugin still requires @next for webpack 4 to work right.
Looks like the first scheduled milestone was 9.5 (a year ago) and the current is set for 11.4 (next release).
It seems daft to keep seeing this lesson being learned by tech companies, and keep seeing blog posts where most of the pain would have been handled easily by just making upgrading a key feature.
Instead, tech managers and engineers seem to make the same mistakes over and over again, delaying those upgrades, until suddenly they discover it's a hard task to upgrade. I get delaying to _some_ degree, it's better to let other people figure out those sharp bits on the bleeding edge for you, but you need to set an explicit target for upgrading.
At another large tech company I worked for, it took the security team swinging the sledgehammer to get teams to upgrade from known-vulnerable versions of Ruby on Rails. When they came to do it, they discovered the changes were so extreme that the effort involved in migrating was likely more than the effort involved in a complete rewrite (they did at least have pretty comprehensive tests).
This is why we call it 'tech debt': it is just like any other debt. You take it on because you don't have the current resources to avoid it, and you calculate that it is worth taking on. But then you are carrying the interest on it, and if you aren't careful it will grow to be unmanageable, and all your dev effort goes into just paying the interest without paying down the principal.
I doubt anyone would enjoy that gig, but it would be a very useful person to have in almost any multi-person team.
That said, Microsoft's ASP.NET MVC would be a good contender for your call to suggest a better language/framework.
It is indeed very productive. It has a massive package ecosystem, great documentation, a very long lifespan (not as old as Rails, but probably about 10 years old now). ASP.NET MVC is all over the enterprise so it is well proven, plus StackOverflow if you want a high traffic example - there are probably many more. Where I think it beats Ruby in particular is the C# language is really excellent.
C# is my favourite language for getting things done and I've tried Ruby, Java, Python, Haskell, Basic and Javascript and gone into some depth with all of those. The reason is the excellent language features, one of the first to have async/await, good generics system, Linq is awesome, there is even some dynamic types support if you need it - which is nice for web page stuff while you are experimenting and then can 'shore it up' with a class later on.
The downside of C#, I think (but I need someone else to confirm), is that it's probably a bit confusing for a newbie, because of the vast number of features and the many ways of doing things that come with its history. Not so bad if you have been doing it for years.
Another downside with ASP.NET is the different ways of doing things in .NET Core, so there's lots of relearning to do and tutorial roulette: if the tutorial is using .NET Core you may not find it easy to integrate into your classic application. Although I'm sure RoR suffers similar upgrade issues.
IMHO there isn't much to gain with static languages when doing web development. A good framework is much more important. Static languages are awesome when coding business rules, though.
The defaults are quite terrible. For example, the original developer put everything in the default.py file because there is nothing in the framework that suggests creating multiple controllers or models. The idea of auto-generating and auto-applying migrations is extremely dangerous IMHO. The ORM is pretty verbose and it pollutes the code with all those db() calls. Then again, Django pollutes the code with all those Model.objects, so it's no worse than that.
Still, Rails is terrific. If it wasn’t for Rails Java today wouldn’t be as productive either.
I don't have a hate-on for Ruby on Rails; my last job was as a Rails dev and I liked it. I still like it, even though I haven't used it in a while (simply because I have no need). It also definitely has some advantages over Go.
But Go offers some great advantages: type safety; still fairly productive; boring understandable Just Works™-kind of language, and has most required components in stdlib (you need a few external dependencies, but not many, although how many depends a bit on your approach as well).
The biggest downside is that there's no standard web framework, and that a lot of Go devs seem to eschew them, too. There are some good reasons for that in some contexts (RoR is not a solution for everything either, that's why Sinatra exists), but it has the effect that a lot of organisations crank out their own internal semi-frameworks. Basically it's Greenspun's tenth rule.
Buffalo is probably the closest thing Go has to Rails, but I haven't had the opportunity to check it out in depth yet. There is also Revel, but that has some unfortunate design decisions IMHO, and isn't something I would recommend.
Is Go a complete drop-in replacement for all RoR use cases? No; not yet anyway. But I think it is for a lot of cases.
> ... The biggest downside is that there's no standard web framework
I hate to say it, but this why I feel justified in earning 50% higher pay than the younger devs on my team who are more passionate, more ambitious, faster, and put in more hours, yet always want to rewrite the boring web code in whatever is the new cool language of the year.
I mean, if you feel it's worth it to stay late and work weekends in order to reinvent the same old boring code that marshals HTTP to types, processes forms, renders templates and JSON, etc. (precisely the things that were mature and battle-tested in dozens of other frameworks years ago), I guess that's fine, as long as you're at least learning why it's such a bad idea (unless your job is to write such a framework instead of actual business requirements).
But your managers and especially your customers couldn't care less that you did so.
So, spending a year and a half on a major version upgrade of your web framework is "productive" how exactly?
I mean, whatever time you think you've saved by using Rails during the initial prototyping phase (and I don't think even that's true), you'll more than pay for in maintenance costs.
Obviously, there's a break-even point there that's different for every framework and every application. But I think the ongoing popularity of web app frameworks suggests that a lot of people find the tradeoff acceptable.
Personally I'm fascinated with Clojure but it doesn't yet have the ecosystem (for me as a Clojure noob) to easily punch out product prototypes. Likewise I've messed with JS frameworks and they're fun but they don't offer me proven established patterns; too many competing and fast-moving choices. I'd rather find my market fit first then worry about scaling / paying off tech debt.
Again, what's your suggested model? To me Rails offers a semi-boring yet productive middle ground between high and low ceremony. TANSTAAFL.
2) A year and a half of how much effort?
Interesting, I didn't know Ruby has been around just as long as PHP. I still would choose PHP if my opinion matters, just from my experience with the slow performance of ruby on rails when I gave it a go just a few years back.
PHP - 23 years
Ruby - 23 years
Java - 23 years
JavaScript (nodejs) - 22 years (9 years)
https://en.wikipedia.org/wiki/Dynamic_programming_language#E...
The few languages that were web-first were often chastised for not being complete languages.
I’d suggest most frameworks are better than Rails for long lifespans. They just frontloaded productivity, and it shows. The UI and functionality of GitHub is largely the same as in 2010.
And then stuff that was sensible then probably shouldn't have changed much.
Also, you're saying that Github hasn't added any new features in the last 18 months?
Consider that 6 years, 2 months, and 20 days passed between Rails 3.2 and Rails 5.2. That's quite a bit of time for the framework to evolve. Then factor in the customizations from several non-framework dependencies and those added by GitHub.
This is an incredible achievement no matter how you slice it.
I could take a big, unmaintained 10-year-old Haskell codebase and upgrade it to the newest compiler and libraries in a couple of days, at most (and it would most likely work on the first try after it compiles).
I'm curious to what you think would be a better framework for Github to have used, that would've allowed for easier, speedier point upgrades? Rails likely was a big advantage (as it usually is) in the initial stages. Are you seriously expecting it to be just as smooth when the site experiences exponential user and feature growth? That moving from Rails 3 to 5 was doable, with what sounds like a small team and no massive service disruption, seems like a very strong argument that Rails can still be effective in a company's middle-age years.
That is no easy task, for such a big application.
The upgrade to .NET Core is probably worse than a Rails upgrade, although it's not really the same thing, as .NET Framework will continue to be updated for a while. Switching to Core is really only necessary if running on Linux servers is a big win for you.
In my experience, all large or very large scale app upgrades (and I've done quite a few) are complicated in one way or another, no matter the stack. Technical debt stacks up in subtle ways (dependencies become obsolete, a specific feature used the framework in unusual ways, stuff can be rewritten with newer built-in framework features, etc.).
I don't see how this article would give Rails bad publicity, personally; I'll add that the advice they provide is pretty much what I would recommend for any tech stack too.
In such a language, any change on framework update would cause compiler errors if the framework's type constraints didn't match what your code expected.
As a result, upgrading in a dependently typed language is simply a matter of fixing compiler errors, and then it's upgraded.
For non-dependently-typed languages that take advantage of the type system, it's still significantly easier, though you probably will have to do a little more than just make sure it compiles.
This is not the case in my experience. I've upgraded pretty decent sized apps (hundreds of models,lines of routes, etc) and in my experience it would take a couple hours a day spread out over a few days a month and then I was done (for versions: 3-4 and 4-5, never done 3-5).
I would say most of the problem is making sure everyone on the team just keeps all functionality as-is. It can be tempting for team members to refactor as they go through, but this then becomes a huge time sink. Anyway, that's my experience on Rails, but I have no other frameworks to compare it to.
Has anyone migrated a massive app from some PHP Framework like Symfony or from a java framework like Play, or any framework with a large code base?
I have had to upgrade massive systems that were not done with any framework and full of one-off solutions with in-house developed libraries and it was an absolute nightmare, but I'm sure this depends on the language and team. However, in general I think an open-source library used by millions or even hundreds of people is going to have better documentation, bug coverage, support, etc. than something done in house, just IMHO.
So I guess my question would be, what does the alternative look like?
Add to that that a Ruby code base will be significantly smaller than a codebase in most statically typed languages. That means less code to maintain, and probably fewer bugs.
1. https://www.i-programmer.info/news/98-languages/11184-which-...
The "statically typed" languages that you're focusing on (I say probably because they're the ones with high bug counts in the data) are probably C and C++, which have other issues making them higher in bug count. C is hardly even typed. Both have manual memory management.
Also, there's no control for commit frequency. Some people put everything in one commit, while others commit every line change. The Rails Tutorial even recommends the latter.
Lastly, Scala and Haskell killed in this study, as far as raw numbers go. But it doesn't seem significant.
I'll stick with subjective evaluations for now. This is just too hard to measure.
Note, in particular, that there's a high confidence, true, but the claim is "picking language X reduces the chances for bugs by a tiny bit." To quote the abstract:
"It is worth noting that these modest effects arising from language design are overwhelmingly dominated by the process factors such as project size, team size, and commit size. However, we hasten to caution the reader that even these modest effects might quite possibly be due to other, intangible process factors, e.g., the preference of certain personality types for functional, static and strongly typed languages."
Personally I like statically typed languages due to playing nicer with autocompletion and in-editor documentation. Every time people make claims about "upgrades being done when project compiles" I die a little inside.
A thousand times this. It's so much easier to do breaking changes and refactors in a language that's supporting you, instead of working against you.
It also doesn't help that most codebases are using ActiveRecord or something similar in every complex class, which winds up significantly increasing the interface width and ancestor depth of their code. The point is, I think the language does a pretty good job supporting the developers, but there are a lot of bad practices that are still in use and recommended. You can't fault the language because people are writing shit code.
Can you name a framework where upgrading a very large (several hundred thousand LOC or more) application across 6+ years, two major versions, and multiple minor versions is not a significant undertaking?
FWIW at my former employer we had a huge Rails monolith with something like 500K+ lines of code. On top of that, our genius architects had split it up into a very nonstandard Rails architecture.
We hired these folks (no affiliation, other than that I used to work for a company that hired them) and they did a solid job. They blogged fairly extensively about each incremental upgrade and the problems they encountered:
https://www.ombulabs.com/blog/tags/upgrades
Do you see anything there that's much more painful than a similarly ambitious upgrade in other frameworks?
Also you're measuring effort in "X number of months" but as the article states it started as a hobby side-project for a few engineers. There is no notion of how much effort it actually represented. Heck I could need 5 years to upgrade from angular 1 to angular 2 if I put in 30 secs per day...
I would actually advocate for a framework that's past its prime/hype period over any newly untested hyped framework any day.
When moving from 4 to 5, we relied on simple smoke tests and unit tests, and had no major issues or bugs. The biggest effort was to make sure all of the application and environment configurations were up to date and using all the new settings introduced etc.
My very subjective opinion is that either most of these code bases are low on quality (meaning they are harder to maintain in general), too tightly coupled with Rails itself (models stuffed full of logic, instead of using plain ruby objects for logic and keeping ActiveRecord for persistence level logic), or engineers are just too scared to make changes to the codebase - which again is perhaps a combination of bad test coverage and bad code quality.
Either way, the stories of upgrading major versions being a huge undertaking always make me scratch my head and wonder what we are doing wrong if it's easy for us.
And inb4 someone claims our apps are just small and simple: we run about 12 Rails applications in production, of various sizes, about half of them being relatively large.
I've been using Rails since late 2005, and in my last job I upgraded a few Rails apps that hadn't been touched since 2008 or so.
(And time under the "took the opportunity to clean up technical debt" heading shouldn't really count.)
1. You're citing a specific anecdote (some people... java1.5) and trying to generalize. What matters is not some people, but the average case, which "some people" will not tell you.
2. "easy to upgrade" is not being argued; "easier in general" is. Just because it's easier to upgrade in a statically typed language doesn't make it easy, just easier than for a dynamically typed one.
In effect, you're saying "there are people using statically typed languages who didn't update, so it must not be easy to update".
A statement that makes a similar fallacious jump is: "There are some people who still type slowly on computers so I can't see how anyone could claim typing on computers is generally faster than typing on typewriters".
Anyway, the fact that the compiler catches more errors at compile-time means it should be obvious that it's easier to upgrade a statically typed language.
If I have a method in Ruby, "user.get_id", which used to return an int but now returns a UUID in a new version of the framework, then in a statically typed language my code just won't compile on the new framework until I handle that, regardless of test coverage... whereas in Ruby, I'll need to have test coverage of that path or read the upgrade notes or something.
There are valid arguments to be had about dynamic vs static typing, but whether it's safer/easier to perform an upgrade of a library/framework is not an argument that dynamic typing can win easily.
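To make the parent's get_id example concrete, here's a plain-Ruby sketch of how the breakage only surfaces at runtime, and only on the exact code path that touches the changed value (the module and method names are made up for illustration):

```ruby
# Hypothetical framework whose get_id changes return type between versions.
module FrameworkV1
  def self.get_id
    42            # old version returns an Integer
  end
end

module FrameworkV2
  def self.get_id
    "a1b2c3d4"    # new version returns a UUID-style String
  end
end

# App code written against v1, silently assuming an Integer:
def next_id(framework)
  framework.get_id + 1
end

puts next_id(FrameworkV1)  # fine: 43

begin
  next_id(FrameworkV2)     # nothing flags this until it actually runs
rescue TypeError => e
  puts "caught at runtime: #{e.class}"
end
```

A compiler would reject next_id against FrameworkV2 before the code ever ran; here you only find out if a test (or a user) exercises that path.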
If you think someone is using HN abusively, you should email us at hn@ycombinator.com so we can investigate. Attacking them in the comments is not cool, and being personally nasty is of course a bannable offense.
If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here, we'd be grateful.
How long was AWS Lambda not able to support Python 3?
How long was GCP not able to support Java 7?
How hard is it to upgrade any core framework or language?
So back in 2012, Rails had a default behavior where you could mass-assign values from a POST to a user, and there wasn't any scrubbing of that by default. Someone realized this was a Bad Idea and issued a pull request that would have fixed it. Instead of accepting the PR, DHH (I think it was him) said something along the lines of 'competent programmers would not leave that setting in place' and rejected the PR.
The exploit discoverer thought about this and tried it against GitHub, which was known to run on Rails, and the code worked! From there he was able to manipulate the permissions on GitHub to get access to the Rails repo, where he reopened and accepted his own pull request.
He was promptly banned.
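For anyone who hasn't seen this bug class: here's a minimal plain-Ruby sketch of unscrubbed mass assignment versus whitelisting. The User class and method names are illustrative stand-ins, not ActiveRecord's actual API (in real Rails this was addressed by attr_accessible and later strong parameters):

```ruby
# Minimal stand-in for Rails 3-era mass assignment.
class User
  attr_accessor :name, :admin

  def initialize
    @admin = false
  end

  # Pre-fix behavior: assign whatever keys the request sends.
  def unsafe_update(params)
    params.each { |k, v| public_send("#{k}=", v) }
  end

  # The fix: only assign explicitly whitelisted keys.
  def safe_update(params)
    params.slice(:name).each { |k, v| public_send("#{k}=", v) }
  end
end

user = User.new
user.unsafe_update(name: "mallory", admin: true)  # attacker sneaks in admin=true
puts user.admin   # true: privilege escalation

user2 = User.new
user2.safe_update(name: "mallory", admin: true)
puts user2.admin  # false: the unpermitted key was dropped
```

The exploit against GitHub was exactly the first shape: a form POST with an extra attribute the model never expected anyone to set.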
Did GH just rewrite those scopes in their respective models and maintain a ton of if/else blocks for the different versions? And if so, didn't they run into issues with the code not being DRY, e.g. someone fixes a 3.2 query but not the corresponding 5.2 version?
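One plausible answer (a sketch of the general dual-boot technique, not GitHub's actual code) is to gate at class-load time on the framework version, so each booted process defines exactly one implementation and there's no per-request branching; the Rails stub and class below are made up for illustration:

```ruby
# Stub standing in for the real framework constant; in an actual app
# Rails::VERSION::MAJOR is provided by whichever Rails you booted.
module Rails
  module VERSION
    MAJOR = 5   # flip to 3 to simulate booting the old framework
  end
end

class Issue
  # The branch is evaluated once, when the class is loaded, so only one
  # definition of open_scope ever exists in a given process.
  if Rails::VERSION::MAJOR >= 5
    def self.open_scope
      { where: { state: "open" } }       # newer hash-conditions style
    end
  else
    def self.open_scope
      { conditions: "state = 'open'" }   # 3.2-era :conditions style
    end
  end
end

puts Issue.open_scope.inspect
```

It's still not DRY, but CI that boots the suite under both versions catches the "fixed one branch, forgot the other" problem the parent is worried about.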
I think in general, there are lots of reasons to like a language outside of its runtime performance.
I love working with Go and Rust due to their performance. Any I work every day in C#, which ends up nice and quick, too.
But I still love Ruby due to its expressiveness, and the way it works just seems to align with the way I think. But that's possibly because I used Smalltalk in the past and I like the bits of it that Ruby borrowed. :)
To answer the original question, though: I'd say it's irrational to hate languages that are slower than necessary, because it's irrational to hate a programming language at all. No matter what language it is, it's just a bunch of words on a screen. Use the ones you like and don't waste any brain cycles thinking about the ones you don't.
Unless you're locked in a cube farm and forced to write Cobol at gunpoint all day. Hate might be rational then.
Is it rational to love the fastest languages? Do you rank your programming language love by an arbitrary speed index?
I’m not sure why this still surprises me. For a company the size of GitHub, there should most certainly be a team responsible for these types of upgrades.
And perhaps they have little to gain and possibly much to lose if they ditch it?
You didn't say much in your question, so I don't know if you feel they ought to rewrite with a popular SPA framework or use something like Elixir Phoenix, but if their Rails-based solution handily serves 30 million users, why do you feel so strongly they should move to something else?
If Github wanted to integrate a lot of real-time features, then Elixir + Phoenix can't be beat. Depending on what they replace, a 10x in performance and a fraction of the servers needed is a nice win.
Thanks :)
I can't really take that seriously.