I've experienced so many issues caused by management passing tasks around between teams without ever paying attention to knowledge and knowledge transfer.
What's amazing is that in over 18 years as a software engineer, I've seen this so many times. Teams will function well, then the institution tries to change. Often they try to open up "innovation" by throwing money at R&D, basically adding bodies in order to grow. Then you have tons of teams, communication becomes very challenging, and so they grow some kind of "task management" layer: management that never understands who actually _knows_ something, and just tracks how much "theoretical bandwidth" teams have and a wishlist of features to create. And then the crapware really starts flowing. And then I get bored and move on to the next place.
The company I work for uses Scrum. They consider the user stories plus the code to be everything you need. I struggle with this, but my manager says they don't want to get tied up doing documentation "because it goes out of date". Besides, they are being Agile, which "prefers working code over comprehensive documentation".
I am wondering what other companies do to capture this "distilled knowledge". The backend services I rely on are undocumented besides some paltry Swagger that leaves much to be desired. The front end has no product-level "spec" you could use to rebuild the thing from scratch. There isn't even a data dictionary, so everyone calls the same thing by different terms (in code and in conversation).
There are just user stories (thousands) and code.
Does anyone have any suggestions on how to fix this?
Documentation is essential. How things work is an important thing to document. Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date. It still has problems (What do you do when the code and the documentation disagree? Which is correct?), but they're not as severe as the problems that arise when there is no documentation at all.
What is less useful is having comprehensive documentation for those things that are yet to exist. Writing a few hundred pages of specification and handing it over to the dev team is waterfall, and it is _this_ that the Agile manifesto signatories were interested in making clear.
I'd fix it with strategic DDD. I'd develop at least a "ubiquitous language" (or UL): I'd get others to work with me on having clear terminology and making sure it is used consistently both in user stories and in the code base. That's table stakes.
I'd then event storm the contexts I'm working in and start to develop high level documentation.
Even at this point relationships between systems emerge, and you get to draw circles around things and name them (domains, contexts), and the UL gets a bit better. At this point you can start to think about describing some of your services using the UL and the language of domains and contexts.
By that point, it should start to click for people that this makes life easier: there is less confusion, and you're now all working together toward a shared understanding of the design. The point of DDD is that the design and the code match.
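As a hypothetical sketch (all names below are invented, not from any real project), here is what a ubiquitous language looks like once it reaches the code: the exact terms from the user story appear as the types and methods, so there's no translation layer between conversation and implementation.

```python
from dataclasses import dataclass

# Story: "When a Shipment is dispatched, the Consignee is notified."
# Not "order sent", not "recipient", not "customer": one term everywhere.

@dataclass
class Consignee:
    name: str
    email: str

@dataclass
class Shipment:
    consignee: Consignee
    dispatched: bool = False

    def dispatch(self) -> str:
        """Dispatch the shipment and return the notification message."""
        self.dispatched = True
        return f"Shipment dispatched; notifying consignee {self.consignee.name}"
```

The payoff is that a sentence in a stand-up ("the shipment was dispatched but the consignee wasn't notified") maps directly onto identifiers you can grep for.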
The first part (all 100+ pages of it) of the Millett and Tune book on DDD will pay you dividends here.
If that doesn't work, look around for somewhere else to work that understands that software development is often a team sport and is committed to making that happen.
I've come across the "documentation becomes quickly outdated" argument a lot, but nobody has ever been able to suggest a good alternative. The best I've found is to write design logs for proposed changes (which other team members/stakeholders can then review and comment on before implementation) and decision logs for any decisions that are made. This way, their going out of date is expected and OK, as they become a history of ideas and decisions with their context and outcomes laid out. You don't necessarily have a snapshot of "the system right now", but you have a log of all the ideas and decisions that led up to the current system.
Not to be rude, but yes: switch employers. This is not something you can fix on the employee level, it is a management issue.
Agility requires a stable foundation. And a lot of places forget that.
Then they are Doing It Wrong™. Note that there's nothing in the Agile Manifesto OR the Scrum Guide that says "don't write documentation." The closest you get is in the AM where it says "We have come to value ... Working software over comprehensive documentation". But note that immediately after that it says "That is, while there is value in the items on the right, we value the items on the left more." IOW, the Agile Manifesto explicitly endorses the value of documentation!
Remember this the next time somebody tries to tell you that "we don't do documentation because we're Agile." Anybody running that line is Full Of Shit™.
Have a product wiki (e.g. MediaWiki).
Have documentation in source code that compiles to HTML code, which can be linked to/from the product wiki (e.g. JavaDoc in Java, Natural Docs for languages that do not directly support compilable documentation). Make building and publishing this documentation a part of the continuous integration.
When you have this, make it a part of code reviews to ask "where is this documented?" for those kinds of things that are easy to remember today, but no one will remember it a few months later. In other words, make it a "code+doc review".
(Don't be dogmatic about whether the information should go into code documentation, unit test documentation, or the wiki. Use common sense. If it relates to only one method, it probably belongs in the code; if it relates to a use case, probably in the unit test that verifies that use case; if it is a general topic that has an impact on many parts of the program, it probably deserves a separate wiki page.)
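As a sketch of the kind of compilable documentation this suggests, assuming Python docstrings rendered to HTML by a tool like pydoc or Sphinx autodoc in CI (the function and its rationale are invented for illustration):

```python
def retry_delay(attempt: int, base_seconds: float = 0.5, cap_seconds: float = 30.0) -> float:
    """Return the exponential backoff delay for a retry attempt.

    Documented at the method level because the rationale (why there is
    a cap, why the base is 0.5s) is exactly the kind of detail everyone
    knows today and nobody remembers in six months.

    :param attempt: zero-based retry attempt number
    :param base_seconds: delay before the first retry
    :param cap_seconds: upper bound, to avoid unbounded waits
    """
    return min(cap_seconds, base_seconds * (2 ** attempt))
```

A "code+doc review" would then ask: does the docstring answer the question a maintainer will have in six months, not just restate the signature?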
This is funny because “working code” might just mean that it doesn’t crash. But does it actually do what it’s supposed to do or does it reliably deliver the wrong results? How would you know without documentation?
The software in the Therac-25 didn't crash; it quite reliably killed people with its "working code".
I might also recommend creating user stories for non-feature development like infrastructure and tech-debt paydown (if you don't already). That way, all of the value flow is captured in one place and you're not leading managers to see only new features.
Second, in addition to the user stories I'd advocate for strong background information about the context of the story as well as detailed acceptance criteria if you don't have that in place already.
If you have sufficiently detailed user stories, they can be.
I think about that whenever I get frustrated about a vague spec or lack of details. It's the job!
I hope he meant separating out the ambiguity rather than concentrating it. :)
"...the designers job is not to pass along "the design" but to pass along "the theories" driving the design. Knowledge of the theory is tacit in owning..."
Well said. Thank you!
I know as a software developer you don't want to do that. More fun refactoring code than dealing with management. More fun writing that piece of SQL than sitting in a meeting. Easier to whine about missing specifications than to understand the big picture.
Once I stepped back from coding and looked at the software from a bird's-eye view, I actually had a much easier time programming features than before. More knowledge, less writing code.
Being a part of the early decision-making process has been a challenge for me as a remote employee. In larger companies, there are lots of meetings, discussions, and decisions that happen before the engineering staff is brought in. But by basically being nice, asking questions, and really getting involved, I've been able to "weasel" my way into some of these discussions.
Once you get involved early on, there's so much more clarity around the one liner "requests" that often get farmed out.
communicationChannels = nrOfTeams * (nrOfTeams - 1) / 2
More people should read The Mythical Man-month
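The formula above (Brooks's pairwise communication channels) is worth seeing with numbers, since channels grow quadratically while headcount grows linearly:

```python
def communication_channels(n_teams: int) -> int:
    """Pairwise communication channels between n teams: n*(n-1)/2."""
    return n_teams * (n_teams - 1) // 2

# Doubling the number of teams roughly quadruples the coordination burden.
for n in (2, 5, 10, 20):
    print(f"{n:2d} teams -> {communication_channels(n):3d} channels")
```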
Managers are very unhappy when I tell them about all the knowledge I've developed:
- the problem you are trying to solve
- how you could solve it
- how you actually did solve it
- which solutions come with which flaws and merits
“Beware of bureaucratic goals masquerading as problem statements. “Drivers feel frustrated when dealing with parking coupons” is a problem. “We need to build an app for drivers as part of our Ministry Family Digitisation Plans” is not. “Users are annoyed at how hard it is to find information on government websites” is a problem. “As part of the Digital Government Blueprint, we need to rebuild our websites to conform to the new design service standards” is not. If our end goal is to make citizens’ lives better, we need to explicitly acknowledge the things that are making their lives worse.”
The following is a wonderful point I have hardly ever heard said directly:
"The main value in software is not the code produced, but the knowledge accumulated by the people who produced it."
"value your knowledge workers"
"your employees are your most valuable asset"
Some companies don't treat employees well, and some employees at good companies feel they are not treated well enough
If the above quotes do not strike a chord with you, you might just be a software engineer who thinks you're more important than non-SEs.
I'm sure another author has put the same sentiment out there before, but it's not every day I see such a nice phrasing of it.
It also introduces unknown amounts of debt and increases the likelihood that you'll end up with intractable performance/quality/velocity problems that can only be solved by re-writing large portions of your codebase.
This can be a dangerous cultural value when it's not presented with caution, which it isn't here. I think it's best to present it alongside Joel Spolsky's classic advice: "If it’s a core business function — do it yourself, no matter what".
https://www.joelonsoftware.com/2001/10/14/in-defense-of-not-...
> The best advice I can offer:
>
> If it’s a core business function — do it yourself, no matter what.
>
> Pick your core business competencies and goals, and do those in house. If you’re a software company, writing excellent code is how you’re going to succeed. Go ahead and outsource the company cafeteria and the CD-ROM duplication. If you’re a pharmaceutical company, write software for drug research, but don’t write your own accounting package. If you’re a web accounting service, write your own accounting package, but don’t try to create your own magazine ads. If you have customers, never outsource customer service.
This all rings true in my experience. You should write the software that's critical to your core business competency yourself, because the maintenance cost is worth paying if you can achieve better software. But if it's not a core competency and your business isn't directly going to benefit from having best in class vs good enough, then it may be worth outsourcing.
(1) If a problem can be exhaustively specified in a formally well-defined way (mathematical logic), it will be wise to adopt a mature implementation - if it exists.
(2) If a problem can't be so specified, all implementations will be incomplete and will contain trade-offs. I have to address these problems myself to ensure that limits and trade-offs suit as well as possible what the business needs. If I can.
So, (1) says I shouldn't parse my own JSON. (2) says I should avoid the vast majority of what shows up in other people's dependency trees.
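As a small illustration of point (1): the stdlib `json` parser already handles the escaping and numeric edge cases that a hand-rolled parser typically fumbles, which is exactly why a mature implementation wins for formally specified problems.

```python
import json

# Edge cases a hand-rolled parser tends to get wrong: escaped quotes,
# \uXXXX unicode escapes, scientific notation, negative floats, nesting.
raw = '{"name": "O\\u0027Brien said \\"hi\\"", "scores": [1, 2.5e3, -0.1], "nested": {"ok": true}}'
data = json.loads(raw)
print(data["name"])    # escapes resolved
print(data["scores"])  # mixed int/float numbers parsed correctly
```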
The author seems like an unknown in the software development world, but they’re one of the managers for Singapore’s fairly successful digital government initiative. So it does feel safe to say they have some experience.
I suppose he wrote this for other people in the Singapore civil service.
In Python, I typically follow a pattern of keeping stuff in the __name__ == '__main__' block and running the script directly, then splitting into functions with basic args/kwargs, and finally classes. I divide into functions based on testability, btw. Which is another win, since functional tests are great to assert against and cover/fuzz with pytest.mark.parametrize [1]
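A minimal sketch of that progression (the example itself is invented): logic lives in small, testable functions, and the main block only does the wiring, so importing the module from a sibling script stays side-effect free.

```python
def normalize(token: str) -> str:
    """Lowercase and strip a token (split out purely for testability)."""
    return token.strip().lower()

def count_words(text: str) -> dict[str, int]:
    """Count normalized word occurrences in a chunk of text."""
    counts: dict[str, int] = {}
    for token in text.split():
        word = normalize(token)
        counts[word] = counts.get(word, 0) + 1
    return counts

if __name__ == "__main__":
    # Direct-run behaviour only; never triggered when imported.
    import sys
    print(count_words(sys.stdin.read()))

# With pytest, each function can then be covered with
# @pytest.mark.parametrize over (input, expected) pairs.
```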
If the content of this post interested you: Code Complete: A Practical Handbook of Software Construction by Steve McConnell would make good further reading.
Aside: If the domain .gov.sg caught your eye: https://en.wikipedia.org/wiki/Civil_Service_College_Singapor...
I like to do it early also, to make sure that the new script, if imported from by a sibling module, is inert.
An example would be a scripts/ folder and sharing a few functions between scripts w/o duplicating.
In some cases I don't have a choice. Initialization of a flask app/ORM stuff/etc has to be done in the correct order.
I think the general rule of thumb I follow is: avoid keeping code that'd "run" at the root level. Keeping it in blocks (normally, for me, functions) has the added effect of labeling what it does.
What I don't do: I don't introduce classes until very late. In hindsight, every time I tried to introduce a complicated object model, I tended to overengineer and run into YAGNI.
http://theindependent.sg/li-hongyi-singapore-has-a-lot-of-pr...
I came across this article: https://mothership.sg/2015/03/lee-hsien-yang-reveals-the-sto...
> I have taught my children never to mention or flaunt their relationship to their grandfather, that they needed to make their own way in the world only on their own merits and industry.
I keep on saying that Software Literacy is a real thing, and that this current generation of leaders is like Charlemagne: he was the first Holy Roman Emperor and the last who was illiterate.
Interesting to see it in practice.
A good example of this is:
- Add worker thread for X to offload Y
When the actual problem is more along the lines of:
- Latency spikes on Tuesdays at 3pm in main thread
Which may be caused by a cronjob kicking off and hogging disk IO for a few minutes.
A good rule of thumb I've found is that task tickets tend to have exactly one way of solving them, whereas problem tickets can be solved in many ways.
So in that case, I guess either run the job with a lower priority and see if that helps, or execute the job more often so it doesn’t have to catch-up all at once one time per week, or rewrite it so that it performs I/O with smaller chunks of data at a time and sleeps for a little while in-between reading or writing chunks of data. Basically, do something so that you no longer have this one huge job consuming all of the IO bandwidth for several minutes every week.
There was one periodic job that we moved from the production server to work off the daily backups instead of the live server.
It's not something anyone can diagnose from what you say; it could be anything, even weirdness such as a hardware fault kicked off by something else (an office cleaner plugging something in?) causing a power spike whose RF interference affects the network, causing mass packet drops and retries. (OK, unlikely, but it's not impossible; I've heard of such things.)
1. Making sure what you build is what was really requested (correct), and
2. Making sure what you've built doesn't have a higher running "cost" than the thing it replaced (either manual process or old automated solutions).
Everything else, IME, is ancillary. Performance, choice of platform, frameworks, build methodology, maintainability, etc. are sub-objectives and should never be prioritized over the first two. I have worked on many projects where the team focused mostly on the "how to build" part and inevitably dropped the ball on the "what to build". Result: failure.
Sauce: personal experience with several years of different projects (n = 1; episodes = 20+ projects that have gone live and have remained live versus 100+ projects lying by the wayside).
Writing software is not easy.
This is where most companies fail. Yes, they do want the best developers, but for the budget of an average junior/mid-level dev.
For some reason, most companies/managers I have worked for do not understand the financial impact of a not-so-good developer. Or the other way around: they fail to value the best developers and are unable to recognize them.
I've worked for plenty of companies where they let mediocre devs build a big app from scratch (including the architecture), in an Agile self-managed team. These are the codebases that always need to be rewritten entirely, because they have become an unmanageable, buggy mess of bad ideas and wrong solutions.
If every single company wants that, where is the space to grow and learn from mistakes?
Maybe I'm wrong, but I think those "mediocre devs" learned a lot building a big app from scratch, solving bugs, and refactoring.
Then the project turns out to be months late, even though I called the timeline of the project virtually unfeasible, and we have to go back and make several changes that could've been caught early on with a better strategy.
The problem with hiring the "best" engineers is as follows:
1. Nobody can ever tell you what the best means. People just throw 10x around without any explanation.
2. Most people in the world are average. You simply don't have enough of the best people to handle the work load, even if they're 10x average. So much existing software and new problems exist that it's nigh impossible to have the best everywhere.
3. Many of the best people are able to write really good code, but they consider it so easy that they often write code that they think is correct and it gets put in production. Since they're loners, they often don't do the necessary leg work either because of their own arrogance, or because the company hasn't clearly defined its processes and the developer can't even reach this goal despite numerous efforts. So management just believes the code is correct without any verification.
4. Many average developers support the best ones by taking needed work away from them through comparative advantage. Just because X employee is awesome at Y task, doesn't mean he meets the highest utility by doing Y task all the time. Especially when there are conflicting priorities.
5. The best engineers aren't going to be working at a small company in most cases. They also aren't likely to be paid well outside a large company. The article cites Google, Facebook, and all the large tech companies and their supposedly stringent interview processes as a reason. But these companies have written terrible software (Google+, AMP pages) and become ethically compromised easily. Plus, their interview process is so far outside the daily workflow, because it involves answering algorithm questions, that it often makes no sense. Even worse, it teaches people to do katas instead of building actual projects. Project-based interviews make much more sense.
6. Rewriting code bases is one of the worst things you can do, and it is what caused Netscape's downfall. Companies with supposedly the best engineers (i.e. Netscape) can't even do it well.
So while hiring the best engineers is an awesome goal, it isn't feasible in a lot of cases.
I admit I have some bias as I consider myself pretty average. But I do a lot of crap on the side that "10x devs" don't even hear about because they're working on something more urgent. Does that mean I'm worthless?
It won't help you to have 11 Lionel Messis on your team; good compatibility among players is much preferable. It's probably better to have small, robust teams that can work together: people who are average in most required areas and rockstars in certain specific ones.
The claim is:
> Stakeholders who want to increase the priority for a feature have to also consider what features they are willing to deprioritise. Teams can start on the most critical objectives, working their way down the list as time and resources allow.
In other words, the argument is "competing priorities in a large-scale project make it more likely to fail, because stakeholders can't figure out which ones to do first." Actually, in this very paragraph, the author glosses over the real issue: "Teams can start on the most critical objectives, working their way down the list" - treating development as an assembly line input-to-output process.
I argue that it's not time constraints that make complex programs bad, but instead the mere act of thinking that throwing more developers at the work will make it any better. Treating the application as a "todo list" rather than a clockwork of engineering makes a huge difference in the quality of the work. When developers are given a list of customer-facing features to achieve, more often than not the code winds up a giant ball of if-statements and special cases.
So yes, I do agree that complex software is worse and more prone to failure than simple software - but not for the reason that there's "too much to do" or that prioritizing is hard. Complex software sucks because it's requirement-driven, instead of crafted by loving hands. No one takes the time to understand the rest of the team's architecture or frameworks when just throwing in another special case takes a tenth of the time.
There are different personalities of engineers, those who thrive on explicit requirements and can accomplish difficult engineering tasks when they are given clear requirements. But those engineers should only be given those requirements once the job that the customer is trying to get done is clearly understood. Some engineers have the ability to find creative solutions, that customers or product managers can’t see, when they are provided with problems and jobs rather than requirements and tasks.
Managers would be wise to distinguish between the type of engineers they are managing and play to their strengths. Whatever type you have, understanding the job the end user is trying to get done must occur, preferably by an engineer that’s capable of articulating that, if needed, to team members as technical requirements.
> There are engineers who can accomplish difficult engineering tasks when they are given clear requirements, and engineers who have the ability to find creative solutions when they are provided with problems and jobs rather than requirements and tasks.
I feel like I could perform adequately in either environment. The problem is I've previously found myself in environments where I'm expected to come up with creative solutions to a problem, but I have no access to the customer or even a simulated environment where I could try to do something similar to what a customer would do.
In this kind of case, it's impossible to really know how to articulate your requirements, because all you can use is a fantasy model of hypotheticals. But requests for more precise requirements are potentially brushed off as wanting to be spoon-fed what you need to do and having inability or unwillingness to think creatively.
> I argue that it's not time constraints that make complex programs bad, but instead the mere act of thinking that throwing more developers at the work will make it any better.

The bit about throwing more developers is true, but it really does not follow from anything else you or the author is talking about.

> Treating the application as a "todo list" rather than a clockwork of engineering makes a huge difference in the quality of the work. When developers are given a list of customer-facing features to achieve, more often than not the code winds up a giant ball of if-statements and special cases.

Admittedly, this is often the case when doing feature-driven development. But it absolutely does not need to be the case.
If you treat engineers as interchangeable cogs who only need to know about one story at a time, and never tell them about the medium- and long-term goals of the business and the application? Then yes. Then you get an awful code base with tons of if-then crap.
However, it doesn't need to be this way. If you give engineers visibility into (and some level of empowerment with regard to) those longer-term goals, they can build something more robust that will allow them to deliver features and avoid building a rickety craphouse of special cases.
I have experienced both scenarios many times.
This is a misinterpretation of the article's claim. The article very explicitly begins by saying that the best recipe to increase a project's chances of success is to:
> 1. Start as simple as possible;
> 2. Seek out problems and iterate;
The priority part reads to me as a way to determine which features are critical (and hence part of the as-simple-as-possible set) and which ones are not (and hence you should not build "yet"). The underlying vibe is that these other features should probably never get implemented, because once the critical ones are built and the software is put to use, you will find other critical features that solve actual problems discovered through usage.
That is, only when you find that one of the initially non-critical features has become a hindrance for users actually using your software should you seek to implement it.
I really think this would be a better way to build software, just as much as I think that you will have a very very hard time getting any management on board with it...
This means that instead of lots of issues arising from business logic being separate from the data, the business logic and data sit together and prevent your system from getting into bad states.
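As one hedged illustration, assuming SQLite: a declarative CHECK constraint keeps an invariant right next to the data, so no code path, buggy or otherwise, can record a bad state (the table and invariant here are invented for the example).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE account (
        id INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
con.execute("INSERT INTO account (balance) VALUES (100)")

try:
    con.execute("UPDATE account SET balance = -50")
except sqlite3.IntegrityError:
    print("rejected: the constraint prevented a bad state")
```

Note this is declarative constraints, not triggers or stored procedures, so the tooling objection below applies much less: there is nothing procedural to debug.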
Thinking about this, maybe I just stole this thought from Derek Sivers: https://sivers.org/pg
A database, in my opinion, is not a good place to write business logic with functions and triggers, since there is a lack of tooling that would make development and debugging easy. Let the database do what it does well, which is storing and querying data.
This all takes a bird-eye view and a long perspective, very unlike quarter-results-driven development.
This one struck me, because as soon as I read it I knew it was true yet had never considered it:
> Most people only give feedback once. If you start by launching to a large audience, everyone will give you the same obvious feedback and you’ll have nowhere to go from there.
I've been on both sides of that fence and it rings true.
This article is full of good ideas, an antidote to the creeping corporate takeover of software projects; make this required reading for software projects.
The problem is lack of knowledge. The successful projects mentioned above did not have a lack of knowledge, and so they were finished successfully.
When there is a lack of knowledge, then it makes sense to use the iterative approach...as knowledge is slowly gathered, the software gets improved. As with all things in life!
But starting a "gather requirements, write software, deliver it" lifecycle because you are confident that you have all the knowledge is a valid approach as well.
Now we have government digital systems leading the charge across most western countries, and we have excellent polemics like this. I am just so happy to see this level of insightfulness at top levels of government.
I am so glad they listened to me :-)
This is spot on, and very much my experience (of the good engineers I've come across).
Kind of: management had planned extensive and painful testing of a component that turned out to be discarded entirely (not because of functionality reduction, but because it was actually unnecessary).
Reusing good modules and software will make the software work.
KISS engineering still works: Keep It Simple, Stupid. Make it as simple as possible. Simple software and systems are easy to maintain and understand.
Use modules as these can be swapped out.
Use proven boring technology such as SQL and JSON. Boring tech has been tried by others and generally works well.
What makes you think so?
Translation: the successful tech companies have so much poorly documented legacy enterprise spaghetti code and tooling that they need the best talent they can get just to make sense of it and maintain it
* has a better grasp of existing software they can reuse
* (has) a better grasp of engineering tools, automating away most of the routine aspects of their own job
* design systems that are more robust and easier to understand by others
* the decisions they make save you from work you did not know could be avoided
I obviously concur with the analysis (though I'm not sure about the 10x myth). It also states:

> Google, Facebook, Amazon, Netflix, and Microsoft all run a dizzying number of the largest technology systems in the world, yet, they famously have some of the most selective interview processes

This sounds a bit like a paradox to me. Given the current state of "selective interview processes" (algo riddles, whiteboard coding, etc.), none of the above traits can be easily evaluated in a candidate during an interview. On the other hand, these companies do hire stellar engineers: the technological supremacy of FAANG is irrefutable. Google views picking new engineers like picking quality construction metals. In the end, the machine melts you down and hammers you into a pristine cog.
I do think perhaps there is too much emphasis on reuse and particularly cloud services. Ironically, this is partly for the reasons given elsewhere in the article. If you rely on outsourcing important things, you also naturally outsource the deep understanding of those important things, which can leave you vulnerable to problems you didn't anticipate. Also, any integration is a source of technical debt, so dependencies on external resources can be more fragile than they appear, and if something you rely on changes or even disappears then that is a new kind of potentially very serious problem that you didn't have to deal with before. Obviously I'm not advocating building every last thing in-house in every case, but deciding when to build in-house and when to bring something in can be more difficult than the article here might suggest.
Perhaps some software development techniques would work though...
> The main value in software is not the code produced, but the knowledge accumulated by the people who produced it.
Those people go on to work on other things or for other organizations. So, while that statement might have some truth to it, it's still the case that the code has to be useful, robust, and able to impart knowledge to those who read it (and the documentation).
> Start as Simple as Possible
That's a solid suggestion for many (most?) software projects; but if your goal is to write something comprehensive and flexible, you may need to replace it with:
"Start by simplifying your implementation objectives as much as possible"
and it's even sometimes the case that you want to sort of do the opposite, i.e.
"Start as complex as possible, leading you to immediately avoid the complex specifics in favor of a powerful generalization, which is simpler".
> Perhaps some software development techniques would work though...
As you go up the management chain, you usually run into some layer where people are traditional managers, who want to run a software project like a traditional project. And behold, you're at this problem. Saying "software development techniques would work" is useless unless you can get those managers to change. And when you get them to change, the problem moves up one layer.
When faced with a standard solution, use a standard component if you can. If you can't use a standard component, build a standard component. Keep your components simple, well-understood, and easy to maintain.
...While I do agree that "project-management" is important, I think the tools we are using today are really underpowered to deal with complexity/human-error - Which is the bigger problem IMO.
The problem is that most CEOs see the binary as the asset, not the knowledge gained. I've tried to explain this concept to multiple startup CEOs who hire outside development firms, and it rarely works out for them.
Or the management techniques considered “traditional” are overlooking a century of iterative development outside of software. See Deming.
This site is an empty page without JS.
This is also the real problem with vendor lock-in.
You are more often locked in by the knowledge of your employees than by your tech stack.
What is the definition of "best engineers"? Those with extensive experience? Those who follow design patterns and coding standards religiously? Those who solve algorithms on a whiteboard? I would like to see if there is a definition for this.
I would say build the right culture: collaborative, always learning from mistakes, willing to revise decisions, with no blame or finger-pointing.
You can get a bunch of great coders/engineers _who follow coding standards, break code down into zillions of functions/methods, etc._ but who will fail to work together, and conflicts will arise quickly.
The industry these days is more about headcount than quality. Why hire two good engineers when you can have three mediocre ones for the same price?
On simplicity, common wisdom these days dictates that we should use bloated kitchen-sink backend MVC frameworks that generate dozens of directories after `init`, because supposedly nobody knows how to use routers. Frontend compiler pipelines are orders of magnitude more complex than the reactive frameworks themselves, because IE11. And even deployment now requires a different team or expensive paid services from the get-go. We're definitely not seeking simplicity.
The second point is also something that most developers and managers would balk at: "To build good software, you need to first build bad software, then actively seek out problems to improve on your solution". Very similar to the Fred Brooks "throw one away" advice that no one ever followed.
I've seen plenty of poor decisions that cause 10x the work, and end up with something 10x less maintainable.
You have entire blog posts by Steve McConnell of Code Complete fame devoted to defending the 10x claim by citing 20 to 50 year old research that shows 5x to 20x differences across certain dimensions and then him falling back to the 10x thing. Not one single sentence where he is being self aware enough to spell out the most likely reason for "10x" being so prominent: 10 is the base of the decimal system and as such psychologically attractive to use.
> Both Steve Jobs and Mark Zuckerberg have said that the best engineers are at least 10 times more productive than an average engineer.
I know I'm venturing into ad hominem territory with this, but first of all: Steve Jobs wasn't a programmer. And Mark Zuckerberg, well, does he even qualify as a programmer nowadays? How well can he quantify programmer productivity? His decision to use PHP led Facebook to create HHVM and Hack. Is this the 10x developer way?
Anyways, the question to me is: Is it possible for average software engineers to write good software?
> The project owners start out wanting to build a specific solution and never explicitly identify the problem they are trying to solve. ...
At this point, it looks like the article will reveal specific techniques for problem identification. Instead, it wraps this nugget in a lasagna of other stuff (hiring good developers, software reuse, the value of iteration), without explicitly keeping the main idea in the spotlight at all times.
Take the first sentences in the section "Reusing Software Lets You Build Good Things Quickly":
> Software is easy to copy. At a mechanical level, lines of code can literally be copied and pasted onto another computer. ...
By the time the author has finished talking about open source and cloud computing, it's easy to have forgotten the promise the article seemed to make: teaching you how to identify the problem to be solved.
The section returns to this idea in the last paragraph, but by then it's too little too late:
> You cannot make technological progress if all your time is spent on rebuilding existing technology. Software engineering is about building automated systems, and one of the first things that gets automated away is routine software engineering work. The point is to understand what the right systems to reuse are, how to customise them to fit your unique requirements, and fixing novel problems discovered along the way.
I would re-write this section by starting with a sentence that clearly states the goal - something like:
"Paradoxically, identifying a software problem will require your team to write software. But the software you write early will be quite different from the software you put into production. Your first software iteration will be a guess, more or less, designed to elicit feedback from your target audience, and will be deliberately built in great haste. Later iterations will solve the real problem you uncover and will emphasize quality. Still, you cannot make technical progress, particularly at the crucial fact-gathering stage, if all your time is spent on rebuilding existing technology. Fortunately, there are two powerful sources of prefabricated software you can draw from: open source and cloud computing."
The remainder of the section would then give specific examples, and skip the weirdly simpleminded introductory talk.
More problematically, though, the article lacks an overview of the process the author will be teaching. Its lack makes the remaining discussion even harder to follow. I'll admit to guessing the author's intent for the section above.
Unfortunately, the entire article is structured so as to prevent the main message ("find the problem first") from getting through. As a result, the reader is left without any specific action to take today. S/he might feel good after having read the article, but won't be able to turn the author's clear experience with the topic into something that prevents more bad software from entering the world.
Why?
Because there is no formal definition of what is bad or good software. Nobody knows exactly why software turns out bad or good, or even what "good" precisely means. It's like predicting the weather: the interacting variables form a system so complex that it is effectively impossible to predict with 100% accuracy.
What you're reading from this guy is the classic anecdotal post of design opinions that you literally can get from thousands of other websites. I'm seriously tired of reading this stuff year over year rehashing the same BS over and over again, yet still seeing most software inevitably become bloated and harder to work with over time.
What I want to see is a formal theory of software design, and by formal I mean mathematically formal: an axiomatic theory that tells me definitively the consequences of a certain design, and an algorithm that, when applied to a formal model, produces a better model.
We have ways to formally prove a program 100% correct negating the need for unit tests, but do we have a formal theory on how to modularize code and design things so that they are future proof and remain flexible and understandable to future programmers? No we don't. Can we develop such a theory? I think it's possible.
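To make the "formally prove a program 100% correct" part concrete, here is a toy illustration in Lean 4 (my own example, not from the comment): a list-reversal function whose length-preservation property is proved once and for all, with no unit tests needed for that property.

```lean
-- A naive list reversal, defined by recursion.
def myRev : List α → List α
  | []      => []
  | x :: xs => myRev xs ++ [x]

-- A machine-checked proof that reversal preserves length.
theorem myRev_length (xs : List α) : (myRev xs).length = xs.length := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [myRev, ih]
```

The open question the comment raises is whether anything analogous exists for modularity and future-proofing, where we don't even have the property statements, let alone the proofs.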
The Applied Category Theory folks have some very interesting stuff, like Categorical Query Language.
https://www.appliedcategorytheory.org/
https://www.categoricaldata.net/
But it sounds to me like what you mean is more akin to "A Pattern Language" made symbolic and rigorous, eh?
Also, the phrase "an algorithm that, applied to a model, produces a better model" has a strong smell of the halting problem, at least to this nose.
Intuitively, software can be modeled as a graph of modules, with edges representing connections between modules. One aspect of "good software" can be attributed to some metric over that graph, say the number of edges: the fewer edges, the less complex. An optimization algorithm would take this graph as input and output a graph with the same functionality but fewer edges. You could call that a "better design." This is all really fuzzy and hand-wavy, but if you think about it from this angle, I'm pretty sure you'll see that an axiomatic formalization can be done, along with an algorithm that prunes edges from the graph (in other words, improves a design by lowering complexity).
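A minimal sketch of that graph idea, with made-up module names and edge count as the (crude) complexity metric. The "optimizer" here is just a length-2 transitive reduction: it drops a direct dependency a→c whenever a→b→c already exists, which preserves reachability on a DAG while lowering the metric.

```python
# Hypothetical module-dependency graph: module -> set of modules it depends on.
deps = {
    "ui":      {"auth", "api", "billing"},
    "api":     {"auth", "db"},
    "billing": {"api", "db"},
    "auth":    {"db"},
    "db":      set(),
}

def edge_count(graph):
    """One crude complexity metric: total number of dependency edges."""
    return sum(len(targets) for targets in graph.values())

def prune_transitive(graph):
    """Toy 'design optimizer': remove an edge a->c whenever a->b->c exists.

    On an acyclic graph this keeps every module reachable from the same
    places while reducing the edge-count metric.
    """
    reduced = {m: set(ts) for m, ts in graph.items()}
    for a in graph:
        for b in graph[a]:
            for c in graph[b]:
                reduced[a].discard(c)  # a reaches c via b; direct edge redundant
    return reduced

print(edge_count(deps))                   # 8 edges before
print(edge_count(prune_transitive(deps))) # 4 after pruning redundant edges
```

Of course, real "complexity" is not just edge count, which is exactly where the hand-waving starts; the point is only that once the model is a graph, metrics and optimizers over it become definable.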
A computer program is a machine that translates the complexity of the real world into an ideal system that is axiomatic and highly, highly simplified. Such a system can be attacked by formal theory unlike real world issues like what constitutes a good car.
It seems we are on the path to repeat history with software engineering, what with how software and the internet are being developed with such little regard for public safety and long-term consequences.
Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.
Luckily, other engineering fields have been here before, so this sort of transition shouldn't be anything new.
Relevant Tom Scott video: https://www.youtube.com/watch?v=LZM9YdO_QKk
> Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.
Software will always be a spread of reliability requirements, from pacemakers on one side to excel reports on the other. Part of being a responsible user is choosing software with the right balance of economics and reliability for the job.
This is bad advice. It's like saying "go into a bar and start picking fights".
If some part of the software has problems, runs slowly, or has bugs, but nobody is complaining, then there's no problem. Why waste time improving it?
Almost 100% of the time, when you solve a problem you just create new problems of a different kind.
Be lazy. The less code you write the better off you are.
This depends very much on context. To pick an extreme example, if you're writing the control software for a nuclear weapon and you know you have a bug that might cause it to activate unintentionally if you eat a banana while it's raining outside, I think we can reasonably agree that this is still a problem even if so far you have always chosen an apple for lunch on wet days.