People without experience often look to gurus because they cannot yet think for themselves.
I'm a developer in my 40s, have done a lot of projects in a lot of different companies. Having this experience gives me intuition about what to do in what situation. I just (think I) know when I should write a unit test and when not.
The 'all or nothing' attitude is created by people who want to be gurus and is followed by people who want some guidance.
This is not a bad thing, it takes time to shape yourself as a programmer. This takes years: http://www.norvig.com/21-days.html
Learn from the gurus, but be conscious that people preaching an 'all or nothing' view of the world are NEVER right :)
Everything is always a trade-off. And it's the situation that can tell you which trade-offs you are prepared to make.
I like the phrase that your code, tests, etc, need to be "good enough". Because "good enough" is, you know, good enough. That way you can spend time on what matters the most.
Of course this 'easy to throw away' will also lead to a slow replacement process of 'good code' (that's easy to replace) with new 'bad code' (that's incrementally harder to replace) and eventually each software project will end up in the same broken mess, no matter how well it started ;)
The problem with modules that are easy to throw away is that the hole they leave is not so easily discarded. This negative space traces the module boundary; the interfaces between modules.
The only way to throw away interfaces is to throw away...? Everything. You will, anyway.
You need to keep a project clean. This means profound refactors when concepts for the product change. Some rules are:
- Keep it readable
- Don't over-engineer
- Easy to remove
- Good tests (complete, minimal stubbing, interface-oriented). Never refactor and change tests at the same time.
A lot of "veterans" are just resentful of new paradigms and too egotistical to stay relevant. It takes effort to maintain a clean project, and many will try to say "it's not worth it". That's out of the question: it's your job, and it doesn't always have to be pleasant or easy. If you don't suffer for it a bit now, someone else will in the future (maybe clients or users).
This is what I like about open source projects: they teach maintenance discipline. In the end, you can sum up all the rules into one:
- Make it easy to maintain
Sometimes, your new paradigm is actually something that they tried 10 years ago.
> In the end, you can sum up all rules into one: Make it easy to maintain
That is only one of the many facets of programming that you have to balance: performance, memory consumption, ... .
If you want to push your system performance to the extreme, believe me, "easy to maintain" is not going to happen. But that is the trade-off that you will have to make at that point.
Veterans know that everything is a trade-off.
Let me give you one extreme: you are writing a throw away prototype. How much time and effort are you going to spend (=waste) on making your code easy to maintain?
I'll give you my veteran answer:
- If you are in 100% control of the project, you know you will throw away the prototype, and you can throw things quickly together.
- If you have a manager, he will take 1 look at the prototype and say "Wow, it's almost complete! Don't start a new project, just add these features to your prototype and it's done.". So in that case, make the "throw away prototype" easy to maintain.
That is what veterans bring to the table: making trade-offs in specific situations. And they've been through a lot of situations.
I think code-level performance is just not as big an issue as it used to be. In the web world it's common to solve extreme performance problems via caching, replication, etc.
In any case, I think that in very few cases you get to a performance requirement so intense that you can't write maintainable code. We can't argue on extremes :)
In my mind, the biggest change as I've grown in my career is that I've gone from judging code quality to judging code suitability.
For example: in the application I'm working on right now, there is a JavaScript file that initializes complex data tables using the jQuery DataTables plugin. It's several hundred lines of redundant code, with several functions that actually do the initialization differently depending on the classes that are applied to the table element. It is unquestionably "low quality" code, but I have no intention of attempting to refactor it. Why? Because there are customers whose UIs are dependent upon the bugs and undocumented assumptions that are baked into that code, and refactoring it would break things from their perspective. If I did that, by the time I dealt with all the bug reports and feature requests from customers, the nice clean code that I'd written would look like the code I have now.

Instead, I've written a new JavaScript file to initialize data tables going forward, and all new instances use that. They are slightly visually distinct from the "old-style" versions so users know to expect the slightly different (but now consistent) behavior. I'm very resistant to adding features to the legacy code, and instead offer to "convert" the legacy tables to the new layout one-by-one when a new feature is requested. Eventually, we may reach a point where we can retire the legacy code - but in all likelihood, that code will be there longer than I will work here.

This approach of "walling off" code that has become unwieldy and difficult to modify is one of the things that I look for when determining where a developer is in their career.
> You need to keep a project clean. This means profound refactors when concepts for the product change.
If the purpose of a company were to produce the most excellent software possible, I'd agree with you, but the purpose of a company is to make a profit. If you do a "profound refactor" every time requirements significantly change you're likely to never launch a product at all, much less iterate quickly enough to build a profitable product. You have to learn to deal with cruft and manage its lifecycle, not try futilely to prevent it from ever occurring. Part of that is learning to break things down into discrete components and limit interdependencies so you can refactor each component in isolation, but another part is learning to segregate cruft that has accumulated and keep shipping without creating a mess that slows down how quickly you can iterate in the future.
> It takes effort to maintain a clean project, many will try to say "it's not worth it".
"Worth it" doesn't necessarily mean "worth it from the perspective of the developer". It can also mean "doing this right is going to take longer, and in order to meet our business goals we can't take the time". As long as the long-term impact of these decisions is passed on to the decision-makers on the business side, there are absolutely times when "it's not worth it" to write clean code.
> That's just out of the question, it's your job, it doesn't have to be always pleasant or easy.
Nope. Your job as a developer is to provide more business value than you consume. If you're getting paid $100k you must provide more than $100k of value or you will eventually be out a job, regardless of how clean your code is.
The only thing actually articulated was "you don't need 100% test coverage", which is not contrary to existing practice (afaik). I'm not sure the singular point nested in platitudes can be useful to me.
The only resource I know of that provides citations to studies is Code Complete by McConnell. But having data is no guarantee of correctness. Bossavit debunks some SWE common sense in "The leprechauns of software engineering", including the cone of uncertainty which is referenced by McConnell.
I'm also planning on reading SW engineering best practices by Capers Jones, which is likewise based on data, not anecdotes.
Even if it is just "tribal wisdom", is HN where you recruit a tribe from a vapor posting? I would hope you at least have a tribe before voting it up the pile. It feels like it's being promoted for no reason or through a backchannel.
It didn't even give a wiki link to the Chesterton's fence analogy. Sigh.
It would be nice, and it’s backed by various studies, but we’re not.
It's happened constantly at work though, throughout multiple jobs. Sometimes I've been so stuck I've got basically nothing done for days or weeks at a time. Sometimes that happens a few times in a row and I feel like I'm an inch away from being fired for incompetence. This even happens on relatively small projects. It's not confined to big 10 year old monstrosities.
The sad thing is, I work for startups, which should be all about lean, clean code and making big changes rapidly to respond to business needs.
I think a lot of it comes down to how hard it is to work on others' code. I don't think any of the conventional wisdom is a solution to this problem either. I haven't noticed any discernible difference between projects with big fuck-off linters or 50-page style guides. It's not a mishmash of semicolons that's causing trouble (although consistency is nice).
"Dirty code" is really the crux of the problem. But like the author of this article, I don't always agree with what the conventional wisdom says about writing clean code.
To me, the most important part of keeping your code clean is always having the minimal solution to the problem. Your code shouldn't do one thing extra it doesn't need to. Half of this is YAGNI, half of it is being a good enough developer to come up with simple solutions that aren't too fancy.
On that note, don't get too fancy, and don't take DRY too seriously. Whatever complicated mess of higher-order functions you're writing probably isn't going to faze other experienced devs in a vacuum, but combined with all the other icky parts of the code base, it might just be the straw that breaks the camel's back. If you need to write twice as many lines to make it more grokkable, that's fine.
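To sketch what "twice as many lines, but more grokkable" can look like, here is a hypothetical comparison (the names `makeFormatter`, `formatError`, etc. are invented for illustration): a configurable higher-order helper versus two plain functions that each read on their own.

```javascript
// The "clever" DRY version: one higher-order helper, configured per use.
const makeFormatter = ({ prefix, transform }) => (value) =>
  `${prefix}${transform(String(value))}`;

const formatError = makeFormatter({ prefix: "ERROR: ", transform: (s) => s.toUpperCase() });
const formatDebug = makeFormatter({ prefix: "debug: ", transform: (s) => s.toLowerCase() });

// The "twice as many lines" version: two plain functions.
// More repetition, but each one can be read top-to-bottom with zero indirection.
function formatErrorPlain(value) {
  return "ERROR: " + String(value).toUpperCase();
}

function formatDebugPlain(value) {
  return "debug: " + String(value).toLowerCase();
}
```

Both pairs behave identically; the point is that the plain version costs nothing to understand in isolation, which matters more as the surrounding codebase gets messier.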
Speaking of YAGNI, people should be applying it somewhat to dependencies as well as functionality. It's just far too easy for a simple React project to blow out in complexity because of an over-reliance on others' code. Sometimes it's easier to build something out of code than NPM Lego.
Not to mention the pedantry of one of the biggest proponents of clean code.
So, in essence, worry that your code has a certain quality but in the end all projects will end up in the trash can. It's software, not the Mona Lisa.
For instance, a simple algorithm for distributed consensus is probably not correct, and you should rightly not view it as 'clean'. Some things simply aren't simple. The goodness of simplicity is contingent on the complexity of the problem.
But aesthetics is at the final end of the spectrum: it's purely subjective. And yet we wrap all these things up under the term 'clean', and so it is confusing. People will defend their notion of cleanliness using the objective standards, and then apply it to the subjective cases.
Go back further and suddenly you will be left with basically nothing but survivorship bias. It will seem like they had it together, but it is just as likely they did not. Pull it in some and you get the hastily built houses that seemed to fall apart way too easily.
I feel like this can go for software too. It is actually trivial to find software that is still in use from the 80s. Older Fortran code still exists. It is far harder to find any lessons in those software packages, even when I desperately want to.
Modulo certain forms of static analysis, your code is ultimately only as good as it is well tested[1]. Non-automated tests do count here, but for long-maintained codebases the cost of automating pays for itself very quickly.
1: Note that pretty much all code has been informally integration tested (A "can I run it" type of smoke-test is a very simple integration test). Similarly all compiled languages (and any interpreted languages that parse an entire file before running code) has some static analysis, as syntax errors will be caught. Unless you check in code without compiling or running it, you are already doing some testing.
The comments here are being dismissive and then reiterating what the post said! Here's the summary from the post :
> So I stopped worrying about whether my code is perfect. And I just accepted that if I can't see any immediate flaws with the code, and if all the tests pass (whether automated or manual), then it's fine. And I trusted that if I ever come across a bug, I can fix it.
Why, specifically, is that a bad thing?
And yes, often clean code actually is finished. Mess is often produced by wrong architecture or rushed development which means not finished if maintainability is in your definition of code being done. What the author probably meant is concise code instead.
After that, did you completely switch back to the code style you were using before the courses? Did you cherry pick some things and not others?
Just to make my view plain, for me 'clean code' was the most important book I've read in my career, and what I learned from it has (in my view) massively positively influenced the quality of my code over the years. I'm really interested in getting an opposing viewpoint. At the end of the day I'm aiming for my code to be understandable and maintainable by other people as much as myself.
I'm not against the ideas, per se. But they have a cost. And at the end of the day the quality of the product you are building is far more important than any intrinsic quality of the code. It would be nice if the two at least correlated, but they don't seem to, IME. I'd welcome data showing otherwise. Make sure you understand your budget. And for the love of god, realize that the idioms in the codebase and in the general programmer pool are more important than purity of some style.
My favorite example: Overdoing the short functions thing. I've seen this a lot. Unsurprisingly, considering that for most devs, when you ask them what makes good code, "short functions" seems to be the first thing that comes to mind.
Splitting code into extremely short functions has a few disadvantages too: a) in what order they're called is not immediately clear, b) where they can be called from is not immediately clear, c) going up/down the stack can make it harder to follow when debugging interactively. d) It increases the LOC and noise. And e) especially in OOP languages, short methods make it more tempting to turn variables into attributes to avoid passing them around explicitly (bad due to longer lifetime).
Splitting functions should only be done if it makes sense semantically. Each function should make sense on its own. If some logic is highly cohesive (e.g. because it implements a specific algorithm), not independently reusable, and its sub-blocks only make sense in one order, and it all fits in a few screenfuls, it might make sense to keep it in one longer function instead of dividing it into fairly arbitrary chunks.
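A hypothetical sketch of that idea (`slugify` and its rules are invented for this example): one cohesive function with commented phases, rather than three tiny helpers whose call order a reader would have to reconstruct from the call sites.

```javascript
// The phases only make sense in this order and are not independently
// reusable, so they stay together as labeled blocks in one function.
function slugify(title) {
  // Phase 1: normalize — lowercase and trim.
  let s = title.toLowerCase().trim();

  // Phase 2: strip everything except lowercase letters, digits, and spaces.
  s = s.replace(/[^a-z0-9 ]/g, "");

  // Phase 3: collapse runs of spaces into single hyphens.
  s = s.replace(/ +/g, "-");

  return s;
}
```

Splitting each phase into `normalize()`, `stripPunctuation()`, and `hyphenate()` would add three names to the file without making any of the pieces reusable or clearer on its own.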
Same problem with rules. Document why they exist.
Simple coding example: the single-exit rule. It exists to make resource management in a function easier, so it is useful in languages where the coder has to manage resources. For a scripting language, or one where resources are managed by the language? No. So there you can enjoy early returns.
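In a garbage-collected scripting language, those early returns look like this (a hypothetical validation function, invented for illustration): with no manual cleanup to worry about, each failure exits immediately and the happy path stays unindented at the bottom.

```javascript
// Guard-clause style: bail out as soon as a check fails.
// No resources to release, so there is no reason to funnel
// everything through a single exit point.
function validateUsername(name) {
  if (typeof name !== "string") return "not a string";
  if (name.length < 3) return "too short";
  if (name.length > 20) return "too long";
  if (!/^[a-z0-9_]+$/.test(name)) return "invalid characters";
  return "ok";
}
```

The single-exit version of the same logic would need a result variable and nested `else` branches, which is exactly the noise the rule's original rationale no longer justifies here.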
- clean != finished
- clean != perfect
- ugly is not clean
- you can understand a code and find it still ugly
What is he actually talking about?
> Automated tests are not that important
I agree with the content of this section, but the title is bad.