You start with some new head honcho somewhere. A CxO, an Enterprise Architect,... These tend to swap every 3 to 5 years.
Head honcho sees horrible bloat, and decides to Act with some Master Plan. This entails buying some expensive software, deployed by a random external team, that will solve everything.
In practice, expensive software tends to barely work. Also, the deployers have no idea what the company is doing. But, anything that might be bad news is career ending, so things get deployed swiftly.
Then comes integration. External team chooses some integration point, probably somewhere in the last head honcho's expensive software, as that's the only thing where the design is not yet completely forgotten. There will be impedance mismatch, i.e. bad news, i.e. unspeakable. So people do something, anything to forcibly connect A to B.
Someone presses start, then the deployers run away in 2 weeks tops. All kinds of weird crimes are committed by the new expensive software. Bad news is still not welcome, but things start to hurt more and more over the next few months. Staff were already overworked, so they do some quick-and-dirty fixes. Head honcho falls out of grace, a reorg destroys every shred of knowledge gathered in the exercise, and a new new head honcho floats to the top. The cycle starts again.
Well, they probably didn't run away - they were probably only paid for two weeks tops. The only constant I've ever observed in 30 years of software development is that the people who make decisions think that saving a few thousand dollars in programmer salaries is worth having a business that nobody really understands, that operates at minimal efficiency, and generates unhappy customers. God forbid anybody ever treat highly educated programmers as competent professionals and equal partners in the business.
/S
Dev: Hey Steve, I'm working on issue #4546, but it just occurred to me that if I could just refactor that one method in SuperFactory it'd make the code much cleaner and easier to reuse. Just a quick fix!
Manager: No. Work on #4546.
Dev: Sure, #4546 will be done soon, but it'd be a really easy fix; it just occurred to me yesterday that there's a better way to build things with SuperFactory.
Manager: No! We already closed that issue!
Dev: No problem. But I thought that now that I have some extra time until...
Manager: Look, Dave, it's working as intended, the solution was reviewed and accepted. I will not create another task. You'll take #7839 next!
[...]
Manager: Hey, Dave, I recall you had some ideas about SuperFactory. It's been acting up lately, they keep creating tickets.
Dev: Nope. None. All gone now.
Manager: But you had, right?
Dev: Yes, but I'd have to start digging in again and I don't have time for that.
Manager: Oh, ok, you're right.
It becomes an issue if it takes more than a day. Scrum, Kanban, RUP, XP, waterfall - whatever "methodology" they say they're following, it boils down to "tell me how long this is going to take and I'll check to see how close what you said was to the time it took". If you can make a change in an hour, sure. If it takes a day, it's going to break your "commitment".
Dev: Hey Steve, I'm working on issue #2312, but it just occurred to me that if I could just refactor that one method in SuperFactory it'd make the code much cleaner and easier to reuse. Just a quick fix!
Manager: Huh. How much more work is it? If you can time-box it to half a day then go ahead.
Dev: Great, it should take just a couple of hours!
The change is merged and deployed, and several weeks go by...
Data Science: Hi team, we are wondering if you know if anything changed in this module in the past couple of weeks. The numbers from non-English speaking domains tanked.
Manager: Uh oh
Dev: Uh oh
Data Science: We only look at data in aggregate, and significance is only reached after weeks of collection. But at this point it looks like we lost millions of dollars.
Manager: oh shit
Dev: faints
... followed by weeks of post-mortems, meetings, process improvements, if not outright terminations.
Some contrived example where you might lose significant money because you made a generally good change but it had a bug and that bug was somehow missed by your entire review and testing process and the consequence of that bug was able to go unnoticed for a long time in production and then the result was disastrous isn't really a very compelling counter-argument.
If you subsequently hold weeks of post-mortems, meetings, process improvements and outright terminations, the person who made the otherwise useful change that had a bug should be among the last to get called out, somewhere after the entire management chain who utterly failed to competently organise critical development and operations activities, everyone responsible for QA who couldn't spot such a critical problem early, and everyone involved in the data science who ran such a hazardous experiment without taking better precautions around validity.
If you work without tests/QA, you are shooting from the hip. The scenario above as-such should not happen. If it ties in with million-dollar processes, even more so. What you are saying is "We don't trust our process, so we do as little as possible outside authorised tasks"; Instead, you should fix the process. If this led to post-mortems and process improvements, as in QA/dev process, not simply bug-fixes, then why is the process not improving and/or better trusted now?
Also, the original task is described as a "refactor", so the numbers should not have been affected. Why was it not just a refactor?
Manager: Sounds great, let me know when you are finished.
For example, you need a process to export data every 4 hours, with some visibility of successes and failures. I could have written a cron job/scheduled task in 4 hours and been done. What I found instead was Kafka with Node.js and CouchDB. Yes, for that one export. Not only that, they were paying monthly for the Kafka. Soooooo, it got replaced.
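A minimal sketch of what that cron-driven alternative might look like; the log format and the export command (a plain `echo` stands in for the real work) are placeholders:

```python
# Sketch: a 4-hourly export as a plain script run by cron, with
# success/failure visibility via logging. The export command is a
# hypothetical stand-in.
import logging
import subprocess

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

def run_export() -> int:
    """Run the export once; return its exit code, logging the outcome."""
    # The real export (dump + upload, say) would go here.
    result = subprocess.run(["echo", "exporting"], capture_output=True, text=True)
    if result.returncode == 0:
        logging.info("export succeeded: %s", result.stdout.strip())
    else:
        logging.error("export failed: %s", result.stderr.strip())
    return result.returncode

if __name__ == "__main__":
    run_export()
```

Scheduled with a crontab entry along the lines of `0 */4 * * * /usr/bin/python3 /opt/export.py`, with the exit code and log line giving the "visibility" part.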
I've seen this a lot more in the last 10 years. I call them "stitcher programmers." They are near useless at providing solutions unless they can stitch together some byzantine Frankenstein's monster from existing tech, usually with extreme overkill. On the front end is the worst with React and thousands of dependencies for simple forms.
Right sizing a solution is not in their vocabulary.
* You can't fearlessly patch the cron box without knowing when the jobs run, and I don't want any special cases. Also, how would I even know you have jobs scheduled when looking at a fleet -- read your out-of-date docs? Ew, no. So a messaging system (I guess the devs were already familiar with Kafka) is necessary to process the jobs across multiple nodes.
* Individual nodes are unreliable and you don't have any durable persistent storage. Most people don't like storing data in Kafka even though it's possible so they went with a database.
* Cron doesn't have any mechanism to give you a history of jobs that isn't built into your script or parsing logs. Ditto with failure notifications. You also can't reprocess failed jobs except manually. Guess you just wait another 4 hours?
* You now also can't duplicate the server because they're both going to try and do the export every 4 hours and step on one another. Woop, you made a system where an assumed safe operation "adding something" breaks stuff.
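For what it's worth, the duplicate-run objection in the last bullet can be partly mitigated on a single host with an advisory file lock. A minimal sketch, assuming POSIX `flock` semantics (the lock path is a placeholder); it doesn't address the fleet-wide objections:

```python
# Sketch: guard a cron job against concurrent or duplicate runs with an
# advisory file lock (POSIX only; the lock path is a placeholder).
import fcntl
import sys

def acquire_single_instance_lock(path="/tmp/export.lock"):
    """Return an open handle holding an exclusive lock, or None if busy."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        handle.close()
        return None  # another instance already holds the lock
    return handle  # keep the handle alive for the process lifetime

lock = acquire_single_instance_lock()
if lock is None:
    sys.exit(0)  # someone else is already exporting; bail out quietly
# ... do the export while holding the lock ...
```

The lock is released automatically when the process exits, so a crashed run can't wedge the next one the way a stale pid file would.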
This kind of thing is a nightmare if you already have queues and a database because why would you stand up another thing but if you had none of that to begin with then yeah... makes total sense.
Like, this is the reality of ops and running something "production ready": it's a lot of big ole complex HA platform so that you can run your 5 lines of code and not have to worry about any of the hard problems like availability, retrying, resource contention, timing, data loss, or locking.
My solution integrated with current tech, didn't have any of the problems outlined in those bullet points, and has required zero maintenance and/or updates for over 4 years. I don't want to even think about how many Kafka and CouchDB releases there have been since then.
It's a HUGE problem.. the devs who do it also tend to be the exact opposite of K.I.S.S. They're always looking for the most complicated and obtuse way of doing anything to make themselves look smart.
Or, you could stitch it together from new tech.
It wasn't long ago that I was surprised to discover a Hadoop installation at my current professional setting, complete with Zookeepers and everything. After some sleuthwork (which wasn't that bad, as they keep everything in git; in a future when everything is API calls to some Kubernetes clusters, we're all smoked) it turned out that all it does is move data between two systems.
Literally. It could be replaced with a periodic rsync run. Instead someone has to maintain and monitor a whole suite of software, prepare, test and run upgrades and so on.
The real solution is to systematically address technical debt as part of your development process. Did that feature that someone swore up and down would be your money maker three years ago not pan out? Delete. Is that abstraction leaky and not that useful? Delete. Are there code paths that never get used and are more or less untested? Delete or assert they never happen.
"Delete" is the solution, but that is a tool they don't want to use, because every feature somewhere is used by some paying customer, who will complain loudly - and may even move to a competitor. Often the pain of updating and the pain of migrating are similar.
Hey, encapsulation is a decent mitigation to complexity issues.
An enterprise NEEDS ALL.
I work in this niche (mostly for small companies), and what I have seen over the past 25+ years is that even the "smallest" of companies have a HUGE array of needs, apps, data to work with, laws to comply with, and demands from suppliers AND their customers (which RECURSIVELY add bloat!), with ancient, current, modern, and next-gen tech in their stacks.
It's like a developer that, instead of being only "LAMP" + editor, is one that:
- Support Mysql, Postgres, Sql Server, Sqlite, FoxPro, Firebase, DBISAM, redis and in terrible days mongo
- N variations of CSV and the like, JSON, TOML, YAML, .ini, binary formats...
- Talk to cobol, web services (SOAP, JSON, RPC, GraphQL), pipes of commands
- Deal with python, java, swift, obj-c, .net, c, c++, c#, f#, go, rust, js, typescript, css, html
- Test on chrome, safari, firefox, ie (OLD ie)
- Windows, Linux, Mac, iOS, Android, Web
- bash, cmd, powershell
- VS Studio toolchain, LLVM toolchain, OSX toolchain, Android toolchain
- Docker, normal deploy, CI
- Sublime, VS Code, Xcode, IntelliJ, Notepad++, Notepad (as-is), nano and in very bad days, vim
- Have Hardware: M1 laptop, Lenovo Windows machine, iPhone, iPad, Android phone
WHO can be the lunatic that deals with all of this?
ME.
(if you wanna understand why I'm so grumpy about C, C++, Js, Android, the state of shells, terminals, rdbms, nosql, now you know)
I don't mean I fully deep-dive into ALL of this, but I need to at least HAVE it installed, or touch it here and there. It's like I say:
No matter how SMALL a "company" is, it
NEEDS ALL.
I have seen the opposite: Test only on IE6, and when it turns out that the stuff doesn't work on any other browser and it's too much work to fix everything, make IE6 the only supported browser.
But I need to install that stuff so I can run the integration, do the tests, see how they work, add a little code of it, etc.
For example, I need to install https://www.elevatesoft.com/products?category=dbisam because just ONE of my customers uses it.
Then I need to add ODBC to OSX.
Then I need to install FreePascal and make some DLL with it so I can decode just ONE field that wholly depends on the binary representation that exists there. More fun? That field is where the "price" of the product is stored.
Why the heck those developers decided to dump an unportable binary, from a certain version of FPC, into that field, hell if I know...
Edit: There can be risks when an ERP supplier fundamentally fails to understand your business model - SAP managed to do this with a former employer of mine which led them to be shown the door.
There's flexibility in ERP's, for sure, but it's not necessarily accessible to the people that use it.
I once had to implement a "screen" to physically divert a list of devices with certain serial numbers. Basically: send a notification whenever one of these devices showed up at a loading dock.
The most logical way to do it, which I naively considered first, was to set up something in one of the ERP modules which is specifically focused on "material movement". After some tedious email exchanges and a phone call it turned out that it was, in fact, "possible". The catch was that it would take WEEKS and involve an expensive Oracle consultant requisition.
I put an end to that and instead had a junior implement the solution in a downstream application (which we actually develop, own and control), in about an hour. It was worth it even though it meant the stuff left the loading dock and ingressed into the building, requiring some additional physical "moving-around" hassles.
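For what it's worth, the core of the junior's downstream solution probably amounted to a few lines like the following sketch; the watchlist contents and the serial-number format are hypothetical stand-ins, and the notify hook is left to whatever the application already uses:

```python
# Sketch: flag watched serial numbers as dock receipts are processed.
# WATCHLIST and the serial format are hypothetical stand-ins.
WATCHLIST = {"SN-1001", "SN-2002"}

def notifications_for(received_serials, watchlist=WATCHLIST):
    """Return the watched serials seen in a batch of loading-dock receipts."""
    return sorted(set(received_serials) & watchlist)

# Wire this into the existing receipt-processing loop and email/page
# whoever maintains the diversion list whenever the result is non-empty.
```

The point being: when you own the downstream application, a diversion "screen" is a set intersection plus a notification, not a consulting engagement.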
I would actually go further and say that enterprise ERPs are full-fledged development environments, often with custom languages and code editors. Quality varies wildly, though.
The fact that some of them are able to handle the processes of some companies out of the box is almost accidental.
It's not all gain.
When an enterprise procurement office buys software, the question is "is there anything it does NOT do that will cause someone to fire me for having purchased it ?"
Like I worked on a big server side Java application and when offline became a thing one of the questions was "Does it support offline usage?". Obviously, since it's a gigantic Java server side application, the answer should be no. And there's no reason that a customer should want to run it or any application like it offline. But the question is there and if you answer "no" then you don't get the sale. So they built some half assed terrible bit of offline functionality that no one in their right mind should ever use. Now they can answer "yes" to the dumb question and get more sales.
Been there, done that. There was this 3rd-party software module "Y" that exchanged data packets over the network between "X" and "Z", supposedly doing complex operations, and my company was developing both X and Z. I was on the Z developers' team. We had all the protocol documentation, so although I didn't have the sources of that Y module, I could see that it was just passing packets around without performing any function that couldn't easily be integrated into either X or Z - I mean, really, 2 hours of work in a government project that lasted years. So I asked some colleagues about the opportunity, and they confirmed that getting rid of that module was a no-no: by contract we were forced to partner with that company, therefore we had to keep their module that essentially did nothing but pass packets (and money). I recall my immediate thought was "software bureaucracy", which probably boosted even further my decision to run only Open Source software wherever I can.
Edit to add: The article did also mention that accessibility is a must-have feature, though I can't remember now if that bit was there in the original. Sorry if it was.
Also, "accessibility" is not a single feature; different aspects of accessibility - for different needs - are quite different, unrelated, separate features.
What if that team then wants to hire a sixth person, and one of the qualified candidates is blind (or has some other disability that's relevant for software)? If accessibility isn't the default, it's too easy to pass on qualified candidates in a category where many struggle to find work.
It has everything to do with poor management of features and lack of leadership. Software _should_ be developed as features are needed. Average humans are absolute crap at predicting things like markets or what will actually be used in production. This translates to developers wasting tons of time on features that provide little value. Lack of leadership and communication of a clear business vision contributes to a panic mentality of "if we don't do it now, we'll never get the chance to do it!", and edge cases are chased down, delaying the move to production.
I'm going to steal that from you and use it for a rainy day if you don't mind. :)
Next question
>Other baby outfits are meant for parents. They’re marked "Easy On, Easy Off" or some such, and they really mean it. Zippers aren't easy enough so they fasten using MAGNETS. A busy parent (i.e. a parent) can change an outfit in 5 seconds, one handed, before rushing to work.
>The point is, some products are sold directly to the end user, and are forced to prioritize usability. Other products are sold to an intermediary whose concerns are typically different from the user's needs. Such products don't HAVE to end up as unusable garbage, but usually do.
https://twitter.com/random_walker/status/1182635589604171776
Furthermore, considering that some things will often be very complicated to do because of legal burdens, it makes sense that one piece of software for doing invoicing handles all your invoicing needs across all the markets you operate in. And suddenly, when that happens, you might get a more complicated piece of software than if you had bought 10 different pieces of software, each supporting the standard needed for a particular market.
As someone that buys a lot of enterprise software and runs a lot of software tenders (particularly for enterprise ERP and WMS) - I would say picking software which matches user/organisational requirements is actually the most important thing (which is checking off the right features).
Absolutely this is more important than usability, which comes secondary to meeting the requirements (what good is usability if I can't get it to do what I want?).
Most botched software tenders I have seen happened because the company purchasing the software wasn't clear on their requirements (i.e. the features they needed), then bought software which did not match what they required, and then needed to somehow just 'make it work'.
Big ticklists of generic features aren't useful though, but ultimately if you are purchasing an ERP and need to be able to put stock into bins within it, and it doesn't have this feature but it is really usable and built with really great architecture, it's still not going to work.
Then each customer wants their own specific crazy workflow in the product after they "rent" the software. An endless circle of bloat increasing. Just look at SAP (and Oracle).
Some of this is touched upon in the article.
And generally the most important feature for a manager to check off is the one never mentioned (at least on the customer-organization end) - "How does the Shiny Newness, Dog & Pony Show, and Buzzword Parade offered by this software make me feel about myself?"
In my experience this is largely due to PoC (or otherwise “temporary”) tools getting used long-term without refactoring, and growing further PoC features over time that compound the problem.
This also affects production services, if management lets (or demands!) PoC code be released before it is really ready.
Employers in the US can face consequences if they use software that doesn't have accessibility features, thus not complying with the ADA.[1] Clients of theirs can also sue.[2]
Some countries have multilingual requirements[3], not to mention the markets an enterprise loses out on by not having translations in dozens of other languages.
Enterprise software often has to be built and sold with the idea of scalability ingrained. Flexibility in scale here is where you get a lot of sales, e.g.: "WidgetSys can scale to support 1 million concurrent users". Some customers legitimately need that. This can also help deal with demand spikes, such as tax season, school registration "season", etc.
Some places have strict compliance requirements that don't make sense for most businesses. Most businesses probably don't care about FIPS-140-2, but some do because of who their customers are. Because of this, many pieces of enterprise software require incredibly fine-grained control over audit data.
Some require the ability to connect to LDAP, AD, and OAuth sources (sometimes all three at the same place).
Just this list represents features that can impact "bloat" but a "lean" app often doesn't have. This can get a business in trouble.
Now, the next question is obviously: Doesn't it make sense to have a user-specific build of the software. For example, if I don't need AD/LDAP access, can't I just get an OAuth only version? This doesn't work for a few reasons but let's assume it could. Now you have X versions * Y features worth of SKUs for your software.
It's also worth noting that while a lot of this stuff is huge on disk (relatively speaking) it often doesn't actually need to load and run all of that code. A lot of well-designed enterprise software is designed to essentially enable/disable functionality in a modular way because of it. They also tend to have many SKUs but segment it out along lines that make sense, e.g.: SQL Server Express edition is pared down SQL Server Standard missing a bunch of these features.
An easy way to look at this from the American perspective is to preface every feature tick box with "will the company be sued if we don't support ..." -- at least that's been my experience.
[1]: https://www.ada.gov/civil_penalties_2014.htm
[2]: https://www.nad.org/2016/09/06/the-nad-and-hulu-reach-agreem...
Er, wasn't it Donald Knuth? https://wiki.c2.com/?PrematureOptimization
Let me fix it.
https://ubiquity.acm.org/article.cfm?id=1513451
> Every programmer with a few years' experience or education has heard the phrase "premature optimization is the root of all evil." This famous quote by Sir Tony Hoare (popularized by Donald Knuth) has become a best practice among software engineers.
If only “capabilities” or “functionality” gets engineering budget, then improvements without a business sponsor don’t get done. It’s also easier to find funding for a project that has a great dollar benefit to one payer than an improvement with a broader and fuzzier purview. Sometimes this happens under the guise of “turning tech spending from fixed cost to variable.” (The irony is this usually increases costs)
The best orgs get around this by putting a tax on technology budgets. “No matter what you spend, we need one dollar out of 5 more to pay for the sins and debt of our predecessors.” Or call it Kaizen or Continuous Improvement if you need. If they don’t trust you to spend that money wisely, they shouldn’t trust you to build anything new either.
Case in point: I manage an org with a $200M budget. I have a budget line for a $90 software item for some VIP somewhere that has to be justified/validated annually by somebody. That $90 probably costs us $500.
But… we subscribe to Office. Adding Teams required zero effort, because of the bundling effect. Slack at the time was going through a PoC/vetting process, but why bother if I have an 80% product? The pitch was that Teams was “free”… although mysteriously the price of Office was revised upward.
The only part of software optimisation that should be a focus early on is cross-cutting concerns like logging, monitoring, authentication, etc. Business logic should be separated from these and optimised only when required. Consuming more resources is better than rewriting business logic and fixing bugs.
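One common way to keep such a cross-cutting concern out of the business logic is a decorator; a minimal sketch, where the invoice function is a hypothetical stand-in for real business logic:

```python
# Sketch: a cross-cutting concern (timing/logging) kept separate from
# business logic via a decorator, so each can change independently.
import functools
import logging
import time

def monitored(func):
    """Log how long each call took, without touching the wrapped logic."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logging.info("%s took %.6fs", func.__name__, elapsed)
    return wrapper

@monitored
def compute_invoice_total(line_items):
    # Plain business logic, unaware of logging or monitoring.
    return sum(qty * price for qty, price in line_items)
```

Swapping the monitoring backend later then touches one decorator, not every piece of business logic.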
Enterprise has embraced a cloud-first strategy.
Suddenly, throwing hardware at a problem becomes throwing cash at the cloud.
Joe Armstrong (Erlang) always joked that if you wanted your program to run faster, just wait a few years and the computing power will increase. I found it to be a profound statement about shrugging about optimizing things needlessly unless it was absolutely necessary.
This describes Atlassian stuff just perfectly.
> As the code base gets large, bugs will creep in and become harder to fix.
And this is why automated tests - even if it's "just" end-to-end functional tests - are so important. But most managers aren't willing to give developers the extra budget to set up proper test cases...
Because of this leverage, technical debt quickly stacks up as everyone is policing themselves and others to not do the unanimously agreed upon 'right thing' to deliver a more cohesive software infrastructure; my god is cohesion the least likely property of enterprise stacks, at least in my experience, hence: all the local heroes, the mounds of manual testing and lack of automation, the 'everything-at-once-per-quarter' releases instead of CI, the distinct aggregation of 'flags' over parsing data structures, etc.
It is impossible that people are working in such circumstances and are just entirely unaware that things could be better; there is an immense amount of pressure from all sides to essentially 'shut up and dribble'. But that also facilitates an environment where individuals or teams are just implementing whatever in their own little kingdom so long as it gets in before the sprint is done. My org alone has three different ways of doing the same exact thing amongst three different teams.
Engineering teams should be reviewing the product roadmap as an independent entity and deliberating on how to approach that collectively. A Director of Engineering is the tie breaker. Estimates come from the team, not individuals, or even individual teams.
> We are profoundly uninterested in claims that these measurements, of a few tiny programs, somehow define the relative performance of programming languages aka Which programming language is fastest?
Now, I challenge you to find a major bloated software where the main source of overhead is Python interpretation. IME it's always something else, like the surrounding UI framework.
The Office suite is written in C++ and is badly bloated, obviously not because of language execution overhead but because of technical debt, which if that's any indication recommends against using low-level languages.
In every piece of non-trivial software I’ve written in python, the main source of overhead has been Python interpretation.
I don’t think it’d be hard at all to meet your challenge.
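A quick micro-benchmark sketch of what that overhead looks like - the same reduction written as a pure-Python loop versus the C-level builtin; exact timings vary by machine and CPython version:

```python
# Sketch: interpreter overhead made visible. Both functions compute the
# same sum; the gap between them is (mostly) bytecode dispatch cost.
import timeit

data = list(range(10_000))

def py_sum(xs):
    # Every iteration here goes through the bytecode interpreter.
    total = 0
    for x in xs:
        total += x
    return total

loop_time = timeit.timeit(lambda: py_sum(data), number=200)
builtin_time = timeit.timeit(lambda: sum(data), number=200)
# On CPython the builtin, whose loop runs in C, is typically several
# times faster than the pure-Python loop.
```

Which is the parent's point in miniature: whether this matters for your bloated app depends on whether your hot path looks like `py_sum` or spends its time in C-backed libraries and UI frameworks anyway.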
Needless to say more staff seemed to correlate with more bloat.
Every customer replacing the legacy solutions is doing the work that the legacy org won’t do. The purpose of a solution is to build once the thing all your customers need. This dynamic is the exact inverse of that.
For examples, just click around the admin interfaces to O365, GApps, or AWS, and I'm sure you can find many annoying issues and/or bugs.
Have you looked at some of the VM footprints of the text editors people are using?
In terms of using "bloated platforms" like Javascript and Python, I get a whiff of superiority from OP as there simply is no reason to build for size or speed unless it is part of the deliverable feature set. Nobody in their right mind would be writing serverless functions in C++/Rust or a Windows form to enter timesheet information (UX is about design, not platform, and is always seen as a secondary cost). If you are determined to use C++/Rust before a project has started then you're under the spell/threat of rockstar employees without a care for long term support.
The problematic Enterprise Applications I've worked on all had the same things in common: a bad maintenance plan or an expectation that the software would last decades without change. It was never "this should have been written in xyz"; it's almost always that the domain knowledge has gone, and alongside it, the source code.
If you're in a business expecting to exist in decades' time, using a moving target to host your systems, like any OS, you had better look at the long game as well as the short, and factor in versioning, source control, and inevitable bit rot. It's not about how old it looks, or how fast it could be.
Ultimately, there is a massive desire for businesses to offload development entirely via no-code platforms like PowerApps and absolutely no desire to make code that requires more expensive technical hires to maintain, or add more process to manage.
Finally, I've been coming across a lot of developers pining for the "old days" where you could change things willy-nilly and release, without writing tests or having code reviews. These were the bad old days, and they're long gone. They got away with it because software was not as ubiquitous, the internet wasn't around to spread 0-day vulnerabilities, and there was very little oversight.