You can't do that easily in Java. You need to spend some time understanding the language, the ecosystem, how to bundle your code into an artifact, and then how to release and publish it; the barrier of entry is much higher because there is a lot more to learn and understand.
The article mentions Android - I haven't worked much with Android and so I'm not familiar with the libraries there. Is there the same sort of problem in the Android ecosystem that you see in Node? I wonder if he is talking about the fragmentation of Android implementations (flavors from different providers), which is a different thing entirely. Seeing as Android uses Java, I'm thinking the ecosystem is not like Node.
JavaScript, on the other hand, is a language that only its creator could love. All these frameworks are practically required just to be barely productive, which is why you end up with big frameworks that abstract away the various browsers, introduce data binding, or add static typing.
[0] http://stackoverflow.com/questions/4716503/reading-a-plain-t...
[1] http://stackoverflow.com/questions/326390/how-do-i-create-a-...
[2] http://stackoverflow.com/questions/3402735/what-is-simplest-...
I don't mean to bucket all Java devs as bad; many are great, and it's a credit to the language that it is easier to start using than C++.
Plus, the open source community includes some Apache-style subcommunities in which developers are really persistent and proud of what they are building.
Source: I spent years developing with GWT
I don't agree with this at all. I think a good coder has complete mastery of their code and tools, which is only possible by putting significant time into one thing instead of jumping from one thing to the next every so often.
IMO the best way to become a better developer is not by using what others have created but by trying to create libraries and frameworks yourself, to truly understand, or at the very least think about, the ins and outs of the code, think thoroughly, and produce a better architecture. It will open your eyes to the difference between good and bad code rather quickly.
Also, collaborating with others becomes much more challenging because you invented everything.
With experience I can minimize self-written code and skip plugins and libraries, but I feel better if I can rely on something solid.
It does not apply to every field of programming per se, but web development is a perfect example.
I have written my own Forth, my own filesystem, my own MVC, etc., and I would say they have improved my skills more than learning yet another Algol-derived language, but I would never advocate their use in production.
That being said, there's practicing your craft and there's getting shit done. I'm going to be VERY unhappy with a developer who is writing their own take on an MVC framework from scratch for a production project. Unless of course the situation calls for it.
IMO the premise of the article is that while this process is productive and educational, it's not something that you want to do all the time. Sooner or later you learn enough to be a world-class professional. At this point, checking what every kiddie with a keyboard and a monitor can code becomes a waste of time.
There's a long way until that point, however, and in the meantime it's very easy to become complacent and think you know it all.
In order to be professionals, we the programmers must periodically challenge our notions.
I think we have to learn to live with crap code and find ways to surface the diamonds. A rating system for packages would be great. Maybe based on developer ratings, the amount of contributor/community interaction, and the number of unresolved issues?
Then again, I've never built anything "enterprisey" in Perl (just little tools and scripts here and there), so that could be it.
The bad part is that when maintainers make unpopular decisions, like disabling all functionality by default and relying on submodules, refusing to fix some obviously horrible bug because it's a "feature", or abandoning the project for months at a time, there is often a cop-out mentality along the lines of "well, it's my project and I didn't guarantee anything when you decided to use it, so why don't you go make your own", thereby taking credit for the project's successes but not responsibility for its failures. So then people do go and write their own implementations, and we end up with half a dozen half-baked libraries.
An example: Underscore and Lodash are both very good, so neither is half-baked, but do we really need both of them? People will just say "well, you only have to use one in your app... yada yada yada," but the problem is that if I want to rely on any other npm packages in my app, are they using Lodash or Underscore? I'll probably just end up with both of them shipping in the bundle because, for whatever reason, there couldn't just be one popular utility library; there had to be two that do basically the same thing. People will respond to this and say "well, why don't you fork the modules that rely on Underscore and make them use Lodash?" My answer is no. I don't want to fork stuff. I just want to be able to find modules that I can rely on with a reasonable expectation of quality and maintenance. I'll even help maintain if it doesn't seem completely futile.
Lodash may have some advantages over Underscore, but those advantages are minor compared to what would be gained by having a single utility library that everyone is on board with.
Don't even get me started with routing libraries for React.
I very much enjoy working on open source, and over the past almost one and a half years of my stewardship I have implemented some tricky but highly useful features that users greatly appreciate. However, I also have a rich life outside of development, and I believe that each person's choices about how to use their time should be respected. If you are not paying, or contributing your own time to feature development or maintenance, complaining about maintainer absence is really poor. We are not on-demand tech support.
In most other language ecosystems, most open source projects tend to be driven by individuals as unpaid side projects. That's great in a certain sense, and a large part of the reason why Java is less "cool" among young people who are eager to plant their own flag on an open source thing. But sadly, it's just really difficult to keep a major side project alive and healthy over the long-term without sponsorship. So those ecosystems tend to be chaotic and flaky.
Why is Polymer on this list?
It would be useful if package management tools facilitated the process of understanding the liability imposed by a package, but the opposite is encouraged. Authors too often put up shiny marketing materials and make bold statements about the utility and vision of their software. It would be refreshing if authors were as open about the flaws, trade-offs, alternatives, etc. but few are.
Perhaps some kind of rating or feedback system that is qualitative in nature would help mitigate the salesmanship? I'd love to know who has been burned by a project and anecdotes about how packages are used by others. Ratings around issue resolutions would also be helpful, how often have we all had major bugs dismissively closed by maintainers?
TL;DR there are ways to develop a comprehensive assessment of a repo, but our tools are lacking and need much improvement in this regard.
If each new version simply incremented a release number and included an estimate of the risk of issues since the previous version, it would be easier to tell whether you wanted to update. Also, after a project has a few releases under its belt, you can normalize its risk against other projects' risk levels (e.g., project X always underestimates).
For open source projects, this enables tooling to look at the functions touched by an update and the functions used by your code, and add the corresponding risk to the update.
An example:
You are three versions behind; the updates are versions 15/42 (minor bugfix), 16/500 (minor new feature), and 17/32000 (major API change). The risk of updating is 42 + 500 + 32000. If you cap the risk allowed for an automatic update at 1000, then you would only get the first two.
The same thing in semver: 1.0.1, 1.1.0, 2.0.0.
While it's somewhat easy to gate on semver, it doesn't lend itself to automated risk assessment, as it would be much harder for a tool to tell the difference between a bug fix and a new feature (there are too many bug trackers out there).
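A tool implementing this scheme could look something like the following sketch (the function name is made up; the risk numbers are the ones from the example above):

```javascript
// Hypothetical risk-gated updater: each release carries a
// maintainer-estimated risk score, and we accept releases in order
// until the cumulative risk would exceed our budget.
function gateUpdates(releases, maxRisk) {
  const accepted = [];
  let total = 0;
  for (const release of releases) {
    if (total + release.risk > maxRisk) break; // stop at the first release over budget
    total += release.risk;
    accepted.push(release.version);
  }
  return accepted;
}

// The example from above: versions 15 (bugfix), 16 (minor feature), 17 (API change).
const releases = [
  { version: 15, risk: 42 },
  { version: 16, risk: 500 },
  { version: 17, risk: 32000 },
];
console.log(gateUpdates(releases, 1000)); // [ 15, 16 ]
```

With a budget of 1000, only the first two releases fit (42 + 500 = 542); the major API change is left for a manual decision.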
I blame GitHub, in particular its "stars" feature. It makes putting out code a popularity contest.
"Why should I make someone else's project more popular? I'd rather spend my free time making myself popular."
</rant>
If you like someone else's code and it has helped you out in a difficult situation, why wouldn't you star that project? Sure, you can choose to spend your free time however you like. But I think contributing to other repos and fixing bugs is what the community is all about.
What is this article about? That snippets from SO answers are not as good as battle-tested code? That's obvious, and it's the same in all languages. You can't find a library for every task in whatever package index you use? Same thing.
It's much harder to ship SO copypasta in, say, Java or C# than in Node. JavaScript is a much easier language to grasp, and it is also a lot more forgiving and loose. Combine that with npm and you can suddenly be immensely "productive". But the ability to write and push a lot of code very fast also makes it very easy to write and push a lot of bad code very fast.
No, he envisioned Free Software.
https://www.gnu.org/philosophy/open-source-misses-the-point....
Is the author talking about the fragmentation of libraries and tools for Android, or the general "Android fragmentation"? If it's the former then I haven't experienced it, certainly not to the extent of the Node ecosystem. Yet...
> [...] because languages with smaller communities such as Go [...] don’t yet suffer from it
... I disagree, take for example the tons of different projects trying to "solve" the web framework problem, or the dependency problem, etc.
This is why I'm part of KDE. It's a large, diverse, and productive community of people who want to bring Free Software forward. KDE turns 20 this year and it's still growing and evolving. New developments come in, but they are reviewed and nurtured in the community before being released under the KDE flag.
JavaScript could use a community like that, with a common set of tools and values. The JavaScript I see out there usually has very little quality control. It's easy to make something that looks nice and does not crash, but scaling up to an application that is complex and stable is hard. I learned this when developing the (now resting) library WebODF. JavaScript comes with great tools like JSLint, Closure Compiler, and Jasmine, but these are rarely used strictly.
Very few JavaScript developers have read 'JavaScript, the good parts' which is essential reading when writing non-trivial JS.
Node.js promises the ability to reuse code on the server and in the browser, but does not provide a module solution that makes that possible and works with the tools mentioned above.
Competition between KDE, GNOME and others on the Linux desktop exists because there is only one desktop on your computer. Javascript lacks such a focal point and JS framework developers can start new projects because it's easy to have a different half-baked framework in each browser tab.
You cannot build a cathedral out of market stalls. (KDE is the Sagrada Familia in this simile)
That's an interesting example cited by the author considering the behavior he desires isn't standard. There are multiple possible approaches to serializing an object tree to a query string (and reasons why, conceptually, you might not want to do this in the first place). Incidentally, when I searched for this on StackOverflow, the Q&A I found has a top answer that does deal with nested query strings [1].
[1] http://stackoverflow.com/questions/1714786/querystring-encod...
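For what it's worth, one possible approach to the nested case (a sketch of the bracket-notation convention, not any particular library's implementation) is a short recursive serializer:

```javascript
// Serialize a nested object tree into a query string using bracket
// notation (a common convention, though not a standard: other
// serializations of nested structures are equally valid).
function toQueryString(obj, prefix) {
  const pairs = [];
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    // Nested keys become parent[child]; top-level keys stay bare.
    const name = prefix ? `${prefix}[${key}]` : key;
    if (value !== null && typeof value === 'object') {
      pairs.push(toQueryString(value, name)); // recurse into objects/arrays
    } else {
      pairs.push(`${encodeURIComponent(name)}=${encodeURIComponent(value)}`);
    }
  }
  return pairs.join('&');
}

console.log(toQueryString({ a: 1, b: { c: [2, 3] } }));
// a=1&b%5Bc%5D%5B0%5D=2&b%5Bc%5D%5B1%5D=3
```

Roughly fifteen lines, which is part of the argument against spending hours evaluating candidate libraries for this.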
> Recently I needed a library to build query strings, just a small one so that I wouldn’t have to include jQuery just for that. After a couple of hours of research...
jQuery clearly is the best library to look to for a mature implementation. So why not just use it? Hours looking for alternatives is actually really expensive.
The justification is that jQuery is too big.
First, it's probably not too big. CDNs and browser caching exist to mitigate the problem upstream.
Next, the dynamic nature of JavaScript is to blame. You can't easily extract and compile just the functions you need.
Ideally we could all leverage bits of jQuery instead of poorly rewriting parts of it in the name of minimalism.
Definitely check out rollup, if you haven't heard of it. We (the JavaScript community) are working towards solving this!
This and related comments remind me that the latest ECMA stuff offers a brighter future. The challenge is getting there...
Can/will jQuery work with rollup and all the other new module work? Or does it have to be effectively rewritten?
Seriously. It's been about nine months.
I use the exceptions feature to add in common sites where it is required.
I have been surprised over the course of this experiment by how little it is really needed for casual browsing.
And it has led to a decline in my consumption of garbage Internet by forcing me to take the time to add the exception which leads me to question whether this content is really worth the effort.
I think about building a feature that can do quick, single instance, javascript exceptions.
However, I fear it would undo the good done by the natural filter on my surfing.
And as a bonus, my data usage decreased considerably.
'Recently I needed a library to build query strings, just a small one so that I wouldn’t have to include jQuery just for that. After a couple of hours of research, I had found several candidates'
That right there is the problem: learn your language. You don't need to spend two hours looking through other people's code to solve your (string!) problem. Learn the language and write it yourself; it's not "rolling your own" when it's something so basic.
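In fact, for the flat key/value case the platform already provides this, with no library at all (URLSearchParams is built into browsers and Node):

```javascript
// No library needed for flat key/value pairs: URLSearchParams is part
// of the platform in both browsers and recent versions of Node.
const params = new URLSearchParams({ q: 'node', page: '2' });
console.log(params.toString()); // q=node&page=2
```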
Is there a similar project for the javascript world?
DRY only goes so far. If you can do the same thing with a couple of hours of work, it's probably not worth adding an external dependency that you need to track over time.
A fantastic idea. There should be some social consequences for these sort of faithless and feckless types.
But Polymer was, and still is, barely an alpha, and was practically not used for anything real (beyond showcases).
Express is 7 years old and still going strong: continuously updated, never terminated or abandoned. Koa.js is just a successor to Express.js, with new ideas to accommodate the changes the Node.js ecosystem has undergone in those 7 years.
To the author: I believe that the node standard library has the query string tools you are looking for.
Contrary to my points above, I'm super into Redux and React and feel they were large leaps in design over their predecessors. I've been happier and happier with React over the last two years. Redux is also consistently making me smile.
Fragmentation and lost effort are essential for progress. Things that you use today were created because someone decided not to work towards converging efforts.
The query parser you want can't handle nested objects and arrays easily? Fork it, add your small piece of code (to what is hopefully an already small module), use it via a direct reference in your package.json, and send a pull request upstream.
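npm makes the direct-reference part straightforward: a dependency can point at a GitHub fork and branch instead of the registry (the package, user, and branch names below are placeholders):

```json
{
  "dependencies": {
    "some-query-parser": "your-github-user/some-query-parser#nested-support"
  }
}
```

When the pull request lands upstream, you switch the entry back to a normal registry version.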
The fact that libraries don't handle your specific edge case or special need is not unique to JavaScript; at least you have an easy way out in Node's mentality of small modules. Try getting a small change to work in Spring or Hibernate or any of the other massive frameworks that require days just to get familiar with their API and lifecycle hooks.
Consider this: I can easily go into the most popular framework in Node (Express.js) and change any part of the code within a few hours to match what I want, including adding tests, and there is a non-zero chance that my changes will be pushed upstream. If I ever needed to do this to Spring MVC or even Racket, it would take me weeks, would most likely end up breaking a dozen other edge cases, and would never be accepted by the foundations managing those projects.
JS has fragmentation, but what do you expect from the largest package repository?