I think the analogy here is backwards. The better question is "how much would you prioritize a car that used only 0.05 liters per 100km over one that used 0.5? What about one that used only 0.005L?". I'd say that at that point, other factors like comfort, performance, base price, etc. become (relatively) much more important.
If basic computer operations like loading a webpage took minutes rather than seconds, I think there would be more general interest in improving performance. For now though, most users are happy enough with the performance of most software, and other factors like aesthetics, ease-of-use, etc. are the main differentiators (admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance).
These days, I think most users will lose more time and be more frustrated by poor UI design, accidental inputs, etc. than by any performance characteristics of the software they use. Hence the complexity/performance overhead of using technologies that allow software to be easily iterated and expanded is justified, to my mind (though we should be mindful of technology that claims to improve our agility but really only adds complexity).
I'll prioritize the 0.005L per 100km car for sure. That means the car can be driven for all its expected lifetime (500k km) in a single tank of gas, filled up at the time of purchase! That means there is a huge opportunity to further optimize for many things in the system:
- The car no longer needs a filler hole on the side. A lot of pipes can be removed, and the gas tank can be moved to a safer location, closer to where the fuel is used.
- The dashboard no longer needs a dedicated slot for the fuel gauge, so more wiring and mechanical parts go away.
- No need for huge exhaust and cooling systems, since the wasted energy is significantly reduced. No fuel pump, less vehicle weight...
Of course, that 0.005L car won't come earlier than a good electric car. However, if it existed, I'd totally prioritize it higher than the other things you listed. I think people tend to underestimate how small efficiency improvements add up and compound into outsized value for the system as a whole.
A UI where each interaction takes several seconds is poor UI design. I do lose most of my time and patience to poor UI design, including needless "improvements" every few iterations that break my workflow and have me relearn the UI.
I find the general state of interaction with the software I use on a daily basis to be piss poor, and over the last 20 or so years I have at best seen zero improvement on average, though if I was less charitable I'd say it has only gone downhill. Applications around the turn of the century were generally responsive, as far as I can remember.
I’m willing to bet that a significant percentage of my accidental inputs are due to UI latency.
I was forced to use a monitor at 30 fps for a few days due to a bad display setup. It made me realize how important 60 fps is. Even worse, try using an OS running in a VM for an extended period of time...
There are plenty of things that are 'good enough', but once users get used to something better they will never go back (if they have the choice, at least).
At my current place of employment we have plenty of average requests hitting 5-10 seconds and longer; we've got N+1 queries against the network rather than the DB. As long as it's within 15 or 30 seconds, nobody cares; they probably blame their 4G signal for it (especially in the UK, where our mobile infrastructure is notoriously spotty and entirely absent even in the middle of London). But since I work on those systems, I'm upset and disappointed that I'm working on APIs that can take tens of seconds to respond.
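To make that concrete, here is a minimal sketch of the N+1 pattern moved from the database to the network: one listing call, then one remote call per item. The `FakeOrdersApi` class and its method names are invented for illustration; the point is the round-trip count, not the API shape.

```python
# Hypothetical API stub: counts round trips as a stand-in for network latency.
class FakeOrdersApi:
    def __init__(self):
        self.round_trips = 0

    def list_order_ids(self):
        self.round_trips += 1
        return list(range(100))

    def fetch_order(self, oid):
        self.round_trips += 1          # one round trip per order
        return {"id": oid}

    def fetch_orders_batch(self, oids):
        self.round_trips += 1          # one round trip for all orders
        return [{"id": o} for o in oids]

def load_page_n_plus_one(api):
    ids = api.list_order_ids()
    return [api.fetch_order(oid) for oid in ids]   # 1 + N round trips

def load_page_batched(api):
    ids = api.list_order_ids()
    return api.fetch_orders_batch(ids)             # 2 round trips total

naive, batched = FakeOrdersApi(), FakeOrdersApi()
load_page_n_plus_one(naive)
load_page_batched(batched)
print(naive.round_trips, batched.round_trips)  # -> 101 2
```

At 50-100ms per mobile round trip, the naive version alone explains multi-second pages before any server work happens.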
The analogy is also not great because MPG is an established metric for fuel efficiency in cars. The higher the MPG the better.
I use webpages for most of the social networking platforms such as Facebook. I am left handed and scroll with my left thumb (left half of the screen). I have accidentally ‘liked’ people’s posts, sent accidental friend requests only because of this reason.
I'm guessing that, along with language selection, it might be helpful to offer a hand-preference setting for mobile browsing.
I think for webpages it is the opposite: non-orthogonal in most cases.
If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
Moreover, the same cost-equation that produces software that is much less efficient than it could be produces software that might be usable for its purpose (barely) but is much more ugly, confusing, and buggy than it needs to be.
That equation is: add the needed features, sell the software first, get lock-in, milk it till it dies, and move on. That equation is locally cost-efficient. Locally, that wins, and that produces the world we see every day.
Maybe, the lack of craftsmanship, the lack of doing one's activity well, is simply inevitable. Or maybe the race to the bottom is going to kill us - see the Boeing 737 Max as perhaps food for thought (not that software as such was to blame there but the quality issue was there).
Wait, are you implying they don't? What world do you live in, and how do I join?
It does fill some other requirements that a regular car doesn't.
or... something.
The car analogy does remind me of one I read a while ago, comparing cars and their cost and performance with CPUs.
>And build times? Nobody thinks compiler that works minutes or even hours is a problem. What happened to “programmer’s time is more important”? Almost all compilers, pre- and post-processors add significant, sometimes disastrous time tax to your build without providing proportionally substantial benefits.
1/ What didn't seem to get mentioned was the speed to market. It's far worse to build the right thing no one wants, than to build the crappy thing that some people want a lot. As a result, it makes sense for people to leverage electron--but it has consequences for users down the line.
2/ Because we deal with orders of magnitude with software, it's not actually a good ROI to deal with things that are under 1x improvement on a human scale. So what made sense to optimize when computers were 300MHz doesn't make sense at all when computers are 1GHz, given a limited time and budget.
3/ Anecdotally (and others can nix or verify), what I hear from ex-Googlers is that no one gets credit for maintaining the existing software or trying to make it faster. The only way you get promoted is if you created a new project. So that's what people end up doing, and you get 4 or 5 versions of the same project that do the same thing, all not very well.
I agree that the suckage is a problem. But I think it's the structure of incentives in the environment that software is written that also needs to be addressed, not just the technical deficiencies of how we practice writing software, like how to maintain state.
It's interesting Chris Granger submitted this. I can see that the gears have been turning for him on this topic again.
I find it really interesting that no one in the future of programming/coding community has been able to really articulate or demonstrate what an "ideal" version of software engineering would be like. What would the perfect project look like both socially and technically? What would I gain and what would I give up to have that? Can you demonstrate it beyond the handpicked examples you'll start with? We definitely didn't get there.
It's much harder to create a clear narrative around the social aspects of engineering, but it's not impossible - we weren't talking about agile 20 years ago. The question is can we come up with a complete system that resonates enough with people to actually push behavior change through? Solving that is very different than building the next great language or framework. It requires starting a movement and capturing a belief that the community has in some actionable form.
I've been thinking a lot about all of this since we closed down Eve. I've also been working on a few things. :)
There's ways to develop working software, but not if it's all locked behind closed OSes and other bullshit.
Writing performant, clean, pure software is super appealing as a developer, so why don't I do something about the bloated software I write? I think a big part of it is it's hard to see the direct benefit from the very large amount of effort I'll have to put in.
Sure, I can write that one thing from that one library that I use, myself, instead of pulling in the whole library. It might be faster, I might end up with a smaller binary, and it might be more deterministic because I know exactly what it's doing. But it'll take a long time, it might have a lot of bugs, and forget about maintaining it. Then at the end of the day, do the people that use my software care that I put in the effort to do this? They probably won't even notice.
> While I do share the general sentiment, I do feel the need to point out that this exact page, a blog entry consisting mostly of just text, is also half the size of Windows 95 on my computer and includes 6MB of javascript, which is more code than there was in Linux 1.0. Linux at that point already contained drivers for various network interface controllers, hard drives, tape drives, disk drives, audio devices, user input devices and serial devices, 5 or 6 different filesystems, implementations of TCP, UDP, ICMP, IP, ARP, Ethernet and Unix Domain Sockets, a full software implementation of IEEE 754, a MIDI sequencer/synthesizer, and lots of other things.
>If you want to call people out, start with yourself. The web does not have to be like this, and in fact it is possible in 2018 to even have a website that does not include Google Analytics.
https://www.reddit.com/r/programming/comments/9go8ul/comment...
> Today’s egregiously bloated site becomes tomorrow’s typical page, and next year’s elegantly slim design.
Edit: found a link with the same story: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt
The software world needs more of this kind of thinking. Not more arguments like "programmer's time is worth less than CPU time", which often fail to account for all externalities.
As it is, software can largely free ride on consumer resources.
Meh, this is manager-speak for "saving human lives", which they definitely were not. They weren't saving anybody. I mean, there's an argument that, in the modern day of 2020, time away from the computer is better spent than time on a computer; so a faster boot time is actually worse than a slower boot time. Faster boot time is less time with the family.
Good managers, like Steve Jobs was, are really good at motivating people using false narratives.
As I write this, I've been trying to get my Amazon seller account reactivated for more than a year, because their reactivation process is just... broken. Clicking any of the buttons, including the ones to contact customer support just take you back to the same page. Attempts to even try to tell someone usually put you in touch with a customer service agent halfway across the world who has no clue what you're talking about and doesn't care; even if they did care, they'd have no way to actually forward your message along to the team that might be able to spend the 20 minutes it might take to fix the issue.
The "barely working" thing is even more common. I feel like we've gotten used to everything just being so barely functional that it isn't even a disadvantage for companies anymore. We usually don't have much of an alternative place to take our business.
I don't mean to shit on Khan Academy exactly because it's not like I'm paying for it, but those lessons may as well not exist for a 4 year old with an interface that poor. It was bad enough that more than half my time intervening wasn't to help him with the content, nor to teach him how to use the interface, but to save him from the interface.
This is utterly typical, too. We just get so used to working around bullshit like this, and we're so good at it and usually intuit why it's happening, that we don't notice that it's constant, especially on the web.
* Measure whether the service you provide is actually working the way your customers expect.
(Not just "did my server send back an http 200 response", not just "did my load balancer send back an http 200", not just "did my UI record that it handled some data", but actually measure: did this thing do what users expect? How many times, when someone tried to get something done with your product, did it work and they got it done?)
* Sanity-check your metrics.
(At a regular cadence, go listen for user feedback, watch them use your product, listen to them, and see whether you are actually measuring the things that are obviously causing pain for your users.)
* Start measuring whether the thing works before you launch the product.
(The first time you say "OK, this is silently failing for some people, and it's going to take me a week to bolt on instrumentation to figure out how bad it is", should be the last time.)
* Keep a ranked list of the things that are working the least well for customers the most often.
(Doesn't have to be perfect, but just the process of having product & business & engineering people looking at the same ranked list of quality problems, and helping them reason about how bad each one is for customers, goes a long way.)
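The first bullet can be sketched concretely. Below is a minimal, hypothetical example of an end-to-end success metric: pair the start of a user task with its completion and report the ratio, rather than counting HTTP 200s. The event names ("checkout_started", "order_confirmed") are invented for illustration.

```python
def task_success_rate(events):
    """events: iterable of (user_id, event_name) pairs from client telemetry."""
    started, finished = set(), set()
    for user, name in events:
        if name == "checkout_started":
            started.add(user)
        elif name == "order_confirmed":
            finished.add(user)
    if not started:
        return 1.0  # nothing attempted, nothing failed
    return len(started & finished) / len(started)

events = [
    ("alice", "checkout_started"), ("alice", "order_confirmed"),
    ("bob", "checkout_started"),   # bob got 200s all the way down,
                                   # but never reached the confirmation
]
print(task_success_rate(events))  # -> 0.5
```

The point of the sketch: bob's session looks perfectly healthy to server-side status-code monitoring, yet the journey-level metric surfaces a 50% failure rate.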
https://www.youtube.com/watch?v=pW-SOdj4Kkk
His point is basically that there have been times in history where the people who were the creative force behind our technology die off without transferring that knowledge to someone else, and we're left running on inertia for a while before things really start to regress, and there are signs that we may be going through that kind of moment right now.
I can't verify these claims, but it's an interesting thing to think about.
We need a solution to this mess. So far I've seen popups (of all things) letting users know they should disable ad blocking, but that's not a solution. Ideally websites should not break when ad blockers are enabled, but I've seen sites whose core product depends on ad blocking being disabled. Strange/chaotic times we live in.
That's because the shotgun approach (sic 40 developers on a single problem, idc how they dole out the workload) works well for most low-stakes, non-safety-critical software.
So something like a reactivation portal for your Amazon seller account is very low stakes. But Boeing treating the 737 MAX the same way would be (and was) a very bad idea.
Because that low-stakes approach is extremely bug prone.
https://tonsky.me/blog/good-times-weak-men/
Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it just feels forced. First versions of anything are exciting; the possibilities are endless, and even if the choices along the path are suboptimal, people are willing to make them work.
Nobody has any fucking idea what’s going on in their react projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.
The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.
Now a company like Apple even prides itself by releasing phone hardware with missing software features: Deep Fusion released months after the newest iPhone was released.
Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.
As a final sidenote while I'm whining about Apple: as a consultant in the devops field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my own experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, but that's why I used the term "relatively difficult".
Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that, it's very hard to work with it, let alone replace it.
> iOS 11 dropped support for 32-bit apps. That means if the developer isn’t around at the time of the iOS 11 release or isn’t willing to go back and update a once-perfectly-fine app, chances are you won’t be seeing their app ever again.
but then he also says:
> To have a healthy ecosystem you need to go back and revisit. You need to occasionally throw stuff away and replace it with better stuff.
So which is it? If you want to replace stuff with something better, that means the old stuff won't work anymore... or, it will work by placing a translation/emulation layer around it, which he describes as:
> We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages and their environment produce. We cover shit with blankets just not to deal with it.
Seems like he wants it both ways.
I don't quite know what's going on inside Apple, but it doesn't feel like they're choosing which features to remove in a particularly thoughtful way.
---
Twenty years ago, Apple's flagship platform was called Mac OS (Mac OS ≠ macOS), and it sucked beyond repair. So Apple shifted to a completely different platform, which they dubbed Mac OS X. A slow and clunky virtualization layer was added for running "classic" Mac OS software, but it was built to be temporary, not a normal means of operation.
For anyone invested in the Mac OS platform at the time, this must have really sucked. But what's important is that Apple made the transition once! They realized that a clean break was essential, and they did it, and we've been on OS X ever since. There's a 16-year-old OS X app called Audio Slicer which I still use regularly in High Sierra. It would break if I updated to Catalina, but, therein lies my problem with today's Apple.
If you really need to make a clean break, fine, go ahead! It will be painful, but we'd best get it over with.
But that shouldn't happen more than once every couple decades, and even less as we get collectively more experienced at writing software.
Even with our priorities in order, there will still be contentious, hard choices (to deprecate so-and-so or not; to sacrifice a capability for consistency of interface or not), but the author's point is that our priorities are not in order in the first place, so the decisions we make end up being arbitrary at best, and harmful/driven by bad motivations at worst.
Before we fix performance, bloat, etc, we really need to make software reliable.
Apple have totally forgotten how to test and assure software against what appear to be even stupid bugs. macOS Catalina has been fraught with issues ranging from the minor to the ridiculous. Clearly nobody even bothered to test whether the Touch Bar "Spaces" mode on the MacBook Pro 16" actually works properly before shipping the thing. Software updates sometimes just stop downloading midway through, the Mail.app just appears over the top of whatever I'm doing seemingly at random and Music.app frequently likes to forget that I'm associated with an iTunes account.
Microsoft are really no better - Windows 10 continues to be slow on modest hardware and obvious and ridiculous bugs continue to persist through feature releases, e.g. the search bar often can't find things that are in the Start menu!
My question is who is testing this stuff?
The reason for unreliability is probably the same reason why things are slow: developers and project managers who don't care about the users and/or who are not incentivized to improve performance and reliability.
If you think that "not caring about the users" is too harsh, consider that users do suffer from e.g. unoptimized web pages or apps that use mobile data in obscene quantities. This has a direct consequence on people's wallets or loss of connectivity which is a huge pain.
As developers we can all try to instill "caring about the users" into our team's priorities.
I can comfortably play games, watch 4K videos, but not scroll web pages?
I think this is one of the more important points that the article tries to get across, although it's implicit: while the peak of what's possible with computing has improved, the average hasn't --- and may have gotten worse. This is the point that everyone pointing at language benchmarks, compiler/optimisation, and hardware improvements fail to see. All the "Java/.NET is not slow/bloated" articles exemplify this. They think that, just because it's possible for X to be faster, it always will be, when the reality couldn't be further from that.
Speaking of bloat, it's funny to see the author using Google's apps and Android as an example, when Google has recently outdone itself with a 400MB(!) web page that purports to show off its "best designs of 2019": https://news.ycombinator.com/item?id=21916740
Where I differ a bit from your take: Languages and platforms that target high performance are providing application developers an elevated performance ceiling that allows them the luxury to use CPU capacity as they see fit. Application developers using high-performance platforms may then elect to make their application high-performance as well, yielding a truly high-performance final product, or they may elect to be spendthrifts with CPU time, yielding something middling on performance. And yes, a truly wasteful developer can indeed make even a high-performance platform yield something low-performance.
What benchmarks and the resulting friendly competitiveness help us avoid is a different and worse scenario. When we select a language or platform with a very low performance ceiling, application developers continuously struggle for performance wins. The high water mark for performance starts out low, as illustrated by how much time is spent in order to accomplish trivial tasks (e.g., displaying "hello world"). Then further CPU capacity is lost as we add functionality, as more cycles are wasted with each additional call to the framework's or platform's libraries. When we select a low-performance platform, we have eliminated even the possibility of yielding a high-performance final product. And that, in my opinion, illustrates the underlying problem: not considering performance at key junctures in your product's definition, such as when selecting platform and framework, has an unshakeable performance impact on your application, thereby pulling the average downward, keeping those peaks as exceptions rather than the rule.
Probably because a browser like FF has the goal to load and display arbitrary dynamic content in realtime like a reddit infinite scroll with various 4k videos and ad bullshit, whereas the game has the goal to render a known, tested number of pre-downloaded assets in realtime.
Also on shitty pages the goal is different-- load a bunch of arbitrary adware programs and content that the user doesn't want, and only after that display the thing they want to read.
Also, you can click a link somewhere in your scrolling that opens a new, shitty page where you repeat the same crazy number of net connections, parsing ad bullshit, and incidentally rendering the text that the user wants to read.
If you want to compare fairly, imagine a game character entering a cave and immediately switching to a different character, like Spiderman, inheriting all the physics and stuff from that newly loaded game. At that point the bulk of your gameplay is going to be loading new assets, and you're back to the same responsiveness problems of the shitty web.
Edit: clarification
As a web developer, sending an 8 KB JSON response is no problem. That's nice and light. In a networked action game, that's absurd. First, (hypothetical network programmer talking here) we're going to use UDP and write our own network layer on top of it to provide reliability and ordering for packets when we need it. We're going to define a compact binary format. Your character's position takes 96 bits in memory (float x, y, z); we'll start by reducing that to 18 bits per component, and we'll drop the z if you haven't jumped. Then we'll delta compress them vs the previous frame. Etc.
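As a rough sketch of the quantization step described above: assuming a playable area of 0..8192 units per axis (an invented figure; a real game would tune both the range and the bit count), 18 bits give 2^18 steps, so roughly 0.03 units of precision per component, versus a 32-bit float each in memory.

```python
AXIS_RANGE = 8192.0        # hypothetical world size per axis
STEPS = (1 << 18) - 1      # 18 bits -> 262143 quantization steps

def quantize(value):
    """Map a float in [0, AXIS_RANGE] to an 18-bit integer."""
    clamped = min(max(value, 0.0), AXIS_RANGE)
    return round(clamped / AXIS_RANGE * STEPS)

def dequantize(q):
    return q / STEPS * AXIS_RANGE

x, y, z = 1234.5, 678.9, 0.0
packed = (quantize(x), quantize(y), quantize(z))
# 3 x 18 = 54 bits on the wire (36 with z dropped while grounded),
# versus 3 x 32 = 96 bits for raw floats, and that's before delta
# compression against the previous frame shrinks it further.
error = abs(dequantize(packed[0]) - x)
print(error < 0.02)  # worst case is half a step, about 0.016 units
```

Delta compression then sends only the (usually tiny) difference from the last acknowledged frame, often fitting a whole position update in a handful of bits.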
Really, what's happening is things are getting optimized as much as they need to be. If your game is running at 10 fps, it's going to get optimized. When it's hitting 60+ fps on all target platforms, developers stop optimizing, even if it could potentially be faster. Same for Reddit; it's fast enough for most users.
It's not the fault of Firefox that Reddit's new UI is pathetically slow. It's the Reddit's implementation of their UI itself which is total garbage.
And given that people do write fast, complex, real-time games in JavaScript for the browser, gamedev absolutely becomes a valid reference point for the possible performance of any individual page.
Say hello to shaders.
Think about all the gas that is saved because people don’t have to drive to the library, all the plane trips saved by video conferencing, all the photo film, all the sheets of paper in file cabinets, all the letters being sent as emails, all the mail order catalogues, ... you get the idea.
Does anybody know of a comprehensive study on this?
The things I'm doing on my phone today are not fundamentally different than what I was doing ten years ago. And yet, I had to buy a new phone.
1. The environmental cost of ineffective software is negligible, when compared to Bitcoin mining or other forms of hardware planned obsolescence.
2. By using more efficient software, surely, you can save a lot of CPU cycles, and it can improve the energy efficiency of some specific workloads under some particular scenarios. However, on a general-purpose PC, the desire for performance is unlimited, the CPU cycles saved in one way will only be consumed in other ways, and in the end, the total CPU cycles used remain a constant.
Running programs on a PC is like buying things: when you have a fixed budget but everything is cheaper, often people will just buy more. For example, I only start closing webpages when my browser becomes unacceptably slow, but if you make every webpage use 50% less system resource, I'll simply open 2x more webpages simultaneously. LED lighting is another example: while I think the overall effect is a reduction in energy use, in some cases it actually makes people install more lighting, such as those outdoor billboards.
This is called the Jevons paradox [0].
For PCs, certainly, as I previously stated, in specific workloads under some particular scenarios, I totally agree that there are cases that energy use can be reduced (e.g. faster system update), but I don't think it helps much in the grand scheme of things.
If you haven't seen it already, you'd probably be interested in the talk below by Chuck Moore, inventor of Forth.
But the vast majority of software is one-off stuff. It makes no sense to optimize it for performance instead of features, development time, correctness, ease of use, etc.
"An Android system with no apps takes up almost 6 GB. Just think for a second about how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some string and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone?
Windows 95 was 30MB. Today we have web pages heavier than that!
Windows 10 is 4GB, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 MB. But whatever Windows 10 is, is Android really 150% of that?"
My favorite line: "Windows 95 was 30MB. Today we have web pages heavier than that!"
If there's a new saying for 2020, it shouldn't be that "hindsight is 2020"... <g>
Also... each web page should come with a non-closable pop-up box that says "Would you like to download a free entire OS with your web page?", and offers the following "choices":
"[Yes] [Yes] [Cancel (Yes, Do It Anyway!)]". <g>
So what is in these huge downloads? Layers upon layers of virtual machines?
I remember once reading that IBM was going to implement an XML parser in assembler and people were like "Why? If speed is needed then you shouldn't use XML anyway." I thought that concern was invalid because these days XML ( or JSON ) is really non-negotiable in many scenarios.
One idea that I've been thinking about lately is some kind of neural-network-enabled compiler and/or optimizer. I have heard that in the javascript world they have something called "tree shaking", where build tooling analyzes which dependencies don't seem to be necessary and removes them, checking against the test suite that nothing broke. I'm thinking: why not train an LSTM to take in http requests and generate the http response? Of course, sometimes the request would lead to some sql, which you could then execute and feed the results back into the LSTM until it output an http response. Then try using a smaller network, until something like your registration flow, or a simple content management system, was just a bunch of floating point numbers in some matrices saved off to disk.
Why? With responses generated according to what? Are you really just suggesting using neural networks in the compiler's optimiser?
> Then try using a smaller network until something like your registration flow, or a simple content management system was just a bunch of floating point numbers in some matrices saved off to disk.
Why? What's the advantage over just building software?
Imagine how bad it would be if not!
I can’t remember who it was, but the idea always stuck with me.
Anyway, I agree that we should test our applications with the same hardware (and internet speed) of our average user. Very few people use a computer as good as a software engineer’s. :)
Since then, I've never worked on a team so dedicated to performance. Maybe because that was a team working on a product with over 100 million installs that had all major features already developed; everybody else is too busy trying to figure out market fit.
But I believe that's not the problem here, or at least not as much as the business being very impatient and never respecting the programmers' objections that software quality suffers due to far too tight deadlines.
I still manage to crank some terrible solutions, though. :P
These kinds of specs are incredible for a high-end desktop machine from 15 years ago that would be good for almost anything: gaming, browsing, etc... What happened to software (Linux & Friends, Firefox, etc.) that renders hardware obsolete so quickly? Is it purposeful optimization that uses more RAM to benefit performance elsewhere, or is it truly this disenchantment?
[1] "In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely"
[1] https://en.wikipedia.org/wiki/Jerry_Pournelle#Pournelle%27s_...
For instance, I started noticing that a lot of the code I've written or worked with across many projects has a particular flavor to it: pieces that take some data, repackage it, and pass it on to different code that does essentially the same - all arranged in a structure that's supposed to reflect some shared, abstract understanding of the problem. I've started calling this type of code "bureaucracy", and I see it as something to be kept in check.
Try planning a route in an unfamiliar area with a UI this slow when you're standing outside with no place to sit and rest, and you need to click around on a bunch of stops just to see which buses go through a stop and where they're heading.
So yes, optimization is still important.
We replaced the glorious and easily iterated and expanded google maps app with a photograph of a public transport map, and we could get an answer of how to get from any A to any B within seconds of looking at the map without typing or searching or waiting for anything.
Which also shows that, sometimes, slow software is less than useless.
Even my e-book reader Linux port boots to UI in ~2s.
It really is just bloat and lack of care.
But on the other hand, a lot of web content does need to be faster. Gmail has somehow gotten so much slower to load over time. And every time I visit a newspaper/magazine website I am aghast at how bloated they are. Does that mean Node.js is inherently bad? No, but it does mean people should try to optimize noticeably terrible performance that actually degrades UX.
And as it's surrounded by numerous other "slightly inefficient, but efficient from some other perspective" interactions, overall efficiency dies a death of a thousand cuts.
When you actively start paying attention to it, and comparing it to little examples of what could be, you start noticing how utterly garbage everything is. Even at the base level, in things that have a massive userbase and are used not just once in a while but constantly, it's disgusting trash. I look around at the company I work at. They're all using it and probably not noticing at all, but the fucking file explorer in Windows is slow as fuck, as are countless other elements and interactions of it. The company's website is so simple in its content and functionality, and was made by a webdev agency, but it's a bloated mess that takes a while to get to... a logo that shows while it continues to load. The software my coworker wrote is small-scale, and he said I was wasting a lot of my time making some small action faster - not recognising that it's been used many thousands of times a day, every day, for more than 10 years now.
There's way too little moral panic.
In this day of "Agile" development, as long as something's working during UAT, that's all that's needed for sales and consumers.
Webdev, IME, is an example where the ecosystem has facilitated bloated websites. I've worked with developers who throw in any library they can for basic things, because they don't feel a need to optimise. The meme of using jQuery for everything when it came out has just been replaced by other frameworks. I find it often depends on developers who really want to work on something and take pride in it, versus those who just need something on their CV, or who got hired by following a few tutorials on the web without understanding what they wrote (which, to me, signifies a hiring problem at the company). During code reviews, I encourage leads to keep calling out hacky code, to the point where the developer will just start writing it properly the first time round. As developers, I feel we should be mindful not to create selfish software that hogs memory from other software or requires huge data downloads for mobile users (whenever doable). Possibly a naive ideal, but if it's a byproduct of developing fast software for my end users, I think that's a win-win.
The guy is most definitely at least a genius.
They are great examples for his overall point though. It probably would've been better just to leave out the genius bit and talk about them as folks proving it can be done.
For those who don't want to dig, one example was Ember.js, which has a dependency called "glimmer" that makes up ~95% of the code size. The author looked into glimmer and found that it included the entirety of the Encyclopaedia Britannica's "G" section, just to provide a definition of "glimmer" in its help menu.
And that wasn't even the most ridiculous example.
It's shameful that it's gotten this bad; but when you look at what's expected of people in the current climate, it makes sense that this would happen.
* Horrendously short deadlines for enterprise CRUD (and the "frameworks" that support it)
* REUSE REUSE REUSE THIS REFUSE (few seem to know how to read source code before installing the dependency)
* "Not paying me enough for that shit"
* "We can't rewrite, we put 20 years into this codebase"
* Even our languages are shit; JS (despite its usefulness) has undefined behavior as a feature.
* [among many others I'm sure you could think of]
It's toxic: corps incentivize lazy, quick work that won't hold up in the long run, but they're too stupid to realize that. Though I blame even more the sycophant who just silently nods and does the work without a sliver of conscience telling them "this is wrong". Civility has a lesser place in efficiency than it has now; you can't make a decent product without bashing a few skulls (figuratively, ofc).
Lastly, don't be afraid to reinvent the wheel if your wheel is better than mine.
That is, you can still write small and light software. It takes the same amount of time as it always did, not much changed.
The troubling part is that the proliferation of bloated software is steadily establishing a new status quo - the software is now _expected_ to be big and heavy in order to be proper. Doubly so for the enterprise-y software. Bloat is becoming a sign of maturity and robustness.
If you "simply blocked all ads" the people making the pages wouldn't have the income which they maintain the pages with.
How smart do you have to be to understand that?
>We haven’t seen new OS kernels in what, 25 years? It’s just too complex to simply rewrite by now. Browsers are so full of edge cases and historical precedents by now that nobody dares to write layout engine from scratch.
Well, there's Fuchsia, speaking of new kernels. And Mozilla is doing exactly that: they've written a new layout engine from scratch (Servo, plus a new language to write it in, Rust).
(I agree with the general sentiment of the post, but the examples are often shoddy)
Once upon a time, adverts were just cross-linked GIF images. No iframes, no Flash, no JavaScript, no cookies - just images. Easy for rendering engines to show, no need to invoke script interpreters or add boatloads of rubbish into the DOM. No real performance hit beyond downloading the image initially.
I would quite happily return to that world.
That's how we get deliberately slow pages, articles split over multiple page loads, and image slideshows (because time spent on your site is time not spent on competing websites).
On the other hand we have pages like HN, paid for(?) using other means and built to be perfectly usable and fast. Or some CSEs where revenue comes from affiliates and CPC fees instead of ads, so they try to keep things fast too. Then we have news sites with paywalls and hopefully some day better engineered UI to read those news.
People should just vote with their pockets for a better user experience, IMO.
I agree with regards to ads specifically; that's why I don't use an ad blocker.
But the problem is so much greater than ads. Ads aren't what's slowing down the new gmail, or Slack.
All systems - biological, physical, metaphysical - are built on layers that, once deep enough, are pretty well cemented in. Hindsight is 20/20, and though we know things would be better if the foundations were different, they won't actually change until the energy gained exceeds the energy cost of uprooting everything to make the change.
Just saying this problem isn't exclusive to software, but also laryngeal nerves in giraffes, x86, and Esperanto.
An increasing number of providers (for example databases) charge per server, per core, or another hardware-usage metric. The more hardware, the more revenue for them as they make about 90% margin for every new machine. There is a high incentive to get users to need more machines on their "managed cloud".
Vendors could try to improve their software so it requires half the CPU. But why bother, since that would halve their revenue? It makes more sense to focus on horizontal scalability than on per-core efficiency, so users keep adding machines to their cluster over time.
If software is running at 1% of maximum performance, as the article suggests, then improving that to just 2% could cut hardware costs in half. But I think none of the existing vendors will ever make that move, as it conflicts with their own interests.
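Back-of-envelope on that claim (all numbers here are made up for illustration): if software extracts only 1% of peak hardware throughput, raising that to 2% halves the fleet needed for the same load.

```python
# If software uses only 1% of peak hardware throughput, doubling that
# to 2% halves the number of machines for the same workload.
peak_ops_per_machine = 1_000_000    # hypothetical per-machine peak
total_load = 10_000_000_000         # hypothetical workload to serve

def machines_needed(efficiency):
    return total_load / (peak_ops_per_machine * efficiency)

print(machines_needed(0.01))  # 1,000,000 machines at 1% efficiency
print(machines_needed(0.02))  # 500,000 at 2% - a 50% cost cut
```

Which is exactly why a vendor billed per machine has no reason to chase that second percentage point.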
> I can comfortably play games, watch 4K videos, but not scroll web pages? How is that ok?
IMO the comparison should be buildings <-> fine-tuned libraries (i.e. video decoding algorithms), modern applications <-> cities.
Go to any city center in Europe. Urban planning a century ago was much more elegant and elaborate, taking the city as a whole into consideration. Nowadays developers and investors often ignore important aspects, such as surrounding buildings and infrastructure, making cities inefficient for the people who actually live there.
Any system with plenty of resources tends to become inefficient. It's just that Moore's law allows for a pretty damn extreme level of inefficiency.
The problem of modern urban design is that of unrestricted freedom for developers. They build mostly whatever they like, however they like, the city be damned, as long as they make a profit off their construction. What's lacking here is care - personal care about the city, and centralized care in the form of a city authority that can tell them to either submit to the constraints of more holistic planning, or take their business elsewhere.
Eh, no. Just a castle/wall, shops along the main streets, a market in the middle, and expansion proceeding chaotically from there.
Since current practices are so inefficient, users end up paying for this out of pocket as hardware expenses. Buy a $1000 smartphone, to get the same experience as last year's.
A different stack (no need to reinvent the universe) could be branded as brutally efficient: slim hardware and strict engineering practices providing a much better experience at a fraction of the price (1/5 is possible).
I believe nobody is doing that yet since there are two main barriers:
1- Risk that there's no market (I think this might be proven unfounded, given the current trend in price hikes)
2- The capital investment necessary to get started (but this can also be solved, given the obvious appetite for next-big-thing money being poured left and right, without anything catching on yet)
The argument is incomplete.
The correct question (to maintain the analogy) is:
"Would you buy a car that eats 1000 liters per 100 kilometers, if that didn't affect you at all (you still get to where you want to be fast enough), and the time to manufacture it and the cost to buy it were much lower than would be possible with a more efficient car that used 10 liters per 100 km?"
The answer to which would be yes. Software that does a once-a-day task in 1 second vs 0.2 seconds doesn't cost us money (and even the environmental impact is small).
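A quick check of that once-a-day arithmetic, using the numbers from the sentence above:

```python
# 1 s vs 0.2 s, once a day: the yearly difference is a few minutes.
slow_ms, fast_ms = 1000, 200        # per-run cost in milliseconds
runs_per_year = 365                 # once a day

saved_s = (slow_ms - fast_ms) * runs_per_year / 1000
print(saved_s)          # 292.0 seconds per year
print(saved_s / 60)     # roughly 4.9 minutes per year
```

Five minutes a year, per user, is the whole stake in this scenario - which is the point.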
Using a real example: at a certain company my wife worked for, there was a task that - every couple of days - would be done repeatedly by multiple office workers for several hours. I was visiting the place once, and since I had to wait for my wife, I was asked to give them a hand with that task. Growing frustrated, I very quickly located a hidden option for batch processing they didn't know anything about. It solved the entire task in a couple of minutes. By finding that option, I freed several man-days a month for that company - time that can be used for other tasks (or even goofing off).
This example sticks with me because, these days, a lot of work is done in front of computers, and every inefficiency there removes productivity from the economy. The way I see it, if you're developing software and there is a possibility that it will end up being used by someone in their job, you owe it to them to make it efficient; by ignoring efficiency, you'll be robbing such people of their life and mental health, and their employers of potential profit.
I can deal with a 1 second delay to run a script. I can deal with a 20 second delay to launch a large program. I resent dealing with hundreds of 1 second lags every day for tasks which didn't used to have any lags ten years ago.
The culture of "developer time is most important", makes overall system performance someone else's problem, because "my program is fast enough when I measure it off the wall clock". But who's responsible to fix overall system performance, and how can they fix it? I think a lot of people would just upgrade their RAM, CPU or IO to solve the issue (create more petrol stations), rather than asking vendors to change the programming language, or to be more conservative on RAM.
And because there's costs to switching language stacks, people will stick to writing in the language they are comfortable in, so critical business systems get written in slow languages.
>That is not engineering. That’s just lazy programming
I don't believe that developers/programmers are all lazy. There are a lot that want to do a good job optimizing their code and making sure it performs well and is future proof as much as possible. I believe that budget limits and pressure from deadlines set by non-technical people forces even the good programmers to cut corners in order to deliver.
Somebody needs to write this software. Efficient, maintainable, and debugged are not the low-energy state.
That means it either needs to come from business, or from open-source hobbyists. Business doesn't see a competitive advantage in it -- even Apple, which has traditionally cared more about UX than anybody, just has to be better than their #2 competitor (and price of entry to "desktop OS" or "smartphone OS" is high so that list is short, and not changing on any relevant timescale). And the open-source world has never delivered well on the end-user experience side of things.
The sad truth is that users would rather have new software for $0 (paid by ads, or media subscriptions, or whatever) than pay what it truly costs to develop software.
My main hopes today are that the end of Moore's Law will force companies' hands, that government will step in to regulate minimal quality, or that workers will organize so they can stand for quality behind a CBA. These all seem rather unlikely at this juncture. The number of programmers in it for the paycheck far outweighs the number who care about simplicity.
Software is going to get much worse before it gets better.
It seems he does not realise that this is a satire piece, and he seems to completely buy into that view of the world instead of seeing things in a more nuanced way.
Software, in a nutshell, is programmable transistors. Each CPU instruction is in effect just a convenient way to design a specific electronic circuit. Even the trivial act of printing Hello World involves an astonishing amount of complexity when you take into account all the protocols, APIs, driver code, kernel code, fonts, rendering, and graphics that get executed in between. If you showed a computer printing Hello World on screen to someone from the 1920s who knew how to build electronic circuits and a primitive "display", they could estimate the amount of work required to do that. Nothing has changed from the 1920s to the 2020s in terms of the complexity needed to enable a simple Hello World. A relatively simple program will easily involve 30,000 low-level components working together toward a goal in an intricate dance. Now think of large codebases with millions of lines of code... This is why software is hard, software is complex, software is messy, and software is magic.
The exception would be games and embedded software, but even there, there are certain degrees of laziness. For instance, games are very CPU/memory/GPU efficient, but they're almost always ridiculous when it comes to disk space usage. There's no reason your average AAA game needs to take up 50GB, other than that the things which could address it aren't worth the fuss. (I'm thinking of common demo-scene tricks: procedural textures/data/everything, aggressive compression schemes, reusing assets, etc.)
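As a toy illustration of the procedural-texture idea: a few lines of code can stand in for the kilobytes (or megabytes) of image data that would otherwise ship on disk. The pattern below is made up; the principle - compute it, don't store it - is what demos rely on.

```python
import math

# Generate a small grayscale texture procedurally instead of shipping
# it as image data: ~10 lines of code vs width*height bytes on disk.
def plasma_texture(width, height):
    return [
        [int(127 + 127 * math.sin(x / 7.0) * math.cos(y / 5.0))
         for x in range(width)]
        for y in range(height)
    ]

tex = plasma_texture(256, 256)
# Every sample is a valid 8-bit value, produced from zero stored bytes.
assert all(0 <= v <= 255 for row in tex for v in row)
```

A 256x256 grayscale image stored raw is 64KB; the function above is under 200 bytes of source.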
For that matter, one could for example take the Windows NT 4.0 source code, add in drivers for the necessary hardware, fix boot code, linking, etc. to be compatible with late-model computers, spruce up the UI with better font rendering, antialiasing, 24-bit color wallpaper, even OpenGL rendering - and in the end, you'd have something just as functional as Win 7/10 at 1/4 of the bloat.
This sort of thing would be technically very easy to do. It's much easier than the status quo of continually reinventing the wheel. So why, oh why, is there this overpowering desire to continually throw out good code and replace it with heavier, more bloated junk, which doesn't really offer any real increase in functionality?
But something crucial is missing from this manifesto. As the author says, the problems don't exist because we can't solve them, but because no one ever takes an interest in solving them. We're all engineers, so why don't we just do some engineering and fix this bullshit? Well, because it wouldn't make any money.
Perplexingly and contradictorily, the profit motive drives both innovation and stasis, both growth and sprawl, and both efficiency and inefficiency. The problem is both technical and social, and probably so will be the solution.
But you don't have to go to those extremes to have good software performance.
Whenever I read posts like Software Disenchantment, I find myself agreeing with that philosophy. In other words, it’s probably by design. Of course this doesn’t account for the enormous waste of time and money that occurs during software development, but that doesn’t really affect my feelings on the matter.
Software today is in the snake oil era. "Secure", "privacy-friendly", "robust"...
We need the government to create the FDA of software, to protect the consumers against potentially harmful software, marketed using false advertisement.
We also need job applicants to be protected against predatory companies that hire people that want to do right by the customer, but are forced to produce rushed, poor quality stuff.
What frustrates me above all else is the trend of compile-to-JavaScript languages. IMO interpreted languages are great. Of course, some performance was sacrificed to get there, but I think that was a fair trade-off, because saving the developer from having to build the project is a HUGE advantage when developing (at least for my particular development flow)... So when I see people throwing away that massive advantage by adding an unnecessary compile step just to get slightly better syntax or static typing (e.g. CoffeeScript or TypeScript), I find it deeply disturbing. Static typing can be a useful feature, but is it worth adding a compile step? Not by a long shot.
And the idea of transpiling a language into an interpreted language is just ridiculous in principle. We had an army of very smart people who invested a huge amount of time and effort into making efficient interpreters for certain languages but all that work is thrown away as soon as you add a build step.
And the stunning thing is that it's actually possible (easy even) to create excellent software with clean code without a build step (I've done it many times) but these simple, clean approaches are never popular. People want to use the complex approach that introduces a ton of new problems, delays and compatibility issues.
The problem is that when your software is built on top of a framework and/or uses X different web APIs etc then you often run into issues where a part of the system that you don't have control over causes performance issues and you don't have the expertise/time to profile it in order to fix it. So I think what's causing problems is that software has become a lot more about putting together frameworks, libraries and reusable components and when faced with such a complex system a programmer will often give up and say "there! it's as fast as I can make it without rewriting everything from scratch".
Therefore, the issue seems to be that programmers are building on top of other systems that they don't know enough about to use efficiently. The author does mention this issue in his article as well, but in a slightly derogatory fashion blaming programmers for bringing in dependencies they don't need.
I think if everybody had the time and ability to write everything from scratch like Jonathan Blow is doing with Jai then yes, things would be more efficient. It is far easier to profile and debug code you've written yourself. However, seeing how this isn't feasible for most projects, I think more focus should be put on better documentation of frameworks and libraries.
I think we need a guild. We need licensed software engineers.
Not every programmer needs to be one, just like not every engineer needs to be licensed, but there needs to be a licensed engineer on every team. And, of course, sometimes there doesn't need to be. But I sure wish there was the option.
And hell, bring back apprenticeships and mentoring with the guild. There is so much we could learn from the physical science engineering disciplines
However, the idea of working closely with Senior Engineers and learning from them is certainly something that I vehemently agree with. I've been fortunate to have had that opportunity.
As self-driving cars enable us to fit far more cars on the road before causing the same level of congestion, people will start taking increasingly long car rides for decreasingly valuable reasons, until the roads are just as intolerable as they were before the innovation occurred.
Replace "self driving cars" with "faster hardware / more memory".
http://www.muppetlabs.com/~breadbox/software/tiny/teensy.htm...
Honestly things were way more sucky years ago. Your Windows computer crashing was just normal in the 90's. Rebooting and reinstalling par for the course. Getting Linux to install with drivers working seemed impossible unless you carefully chose hardware.
Or fewer features. Talk to an actual professional who uses spreadsheets all day long about switching from Excel to Google Sheets. The infantilization of UIs, and the "oh, they'll never miss it" attitude, is infuriating.
but:
- economic incentives do not align with common sense
- landscape is fragmented, continually evolving and unstable
- we are all posers. We all have opinions and morals, and look at others and criticise, but when it comes to our own work we are just like everyone else. We need money to live, so we just go with the flow.
The whole point of Babel was to allow us to use the latest JavaScript syntax so that we wouldn't have to update our own source code when the new syntax finally became broadly supported by browsers and other engines.
IMO, the Babel project is a failure because:
- Babel itself is always being upgraded to support newer ECMAScript syntax, so people still need to upgrade their own code whether they use Babel or not. The only benefit is that using Babel allows you to use these features before other people.
- Instead of just worrying about how JavaScript syntax changes affect your code, with Babel, you also need to worry about how Babel upgrades will affect your code. The babel plugin dependencies often change and break over time (even if you don't change your own code) and you always have to support both ecosystems.
So when you consider the big picture, Babel doesn't save you from having to upgrade your own code (as per its original promise). When you evaluate the pros and cons, the cons greatly outnumber the pros:
Pros:
- You get to use the newest language features before other people.
Cons:
- You need to maintain your code for compatibility with two ecosystems instead of just one. Keeping up with both ECMAScript + Babel is a lot of work. I would even argue that staying up to date with Babel is more work because dependencies keep changing underneath your project.
- It forces you to use a build step so you lose the benefits/iteration speed that an interpreted language brings to your development flow.
- Adds a lot of bloat and unnecessary, hard-to-describe dependencies to your project which can open up security vulnerabilities and make your code more opaque and brittle.
An explanation of "why" does not explain "why this is acceptable".
I support the idealism of the article, but this quote is very accurate. Nobody wants to pay for quality software. Not your users, not your stakeholders, not even you!
And that's because the price of quality isn't just a little higher. It's not $1 a year. It's exponentially more expensive. And some of those paid efforts will still end up slow and crappy, because of something else the article doesn't acknowledge:
Writing fast, efficient, simple, correct, full featured software is really really hard.
So not only is it expensive, it's also just plain difficult. Meaning it's not just about money and resources, but also time - time to experiment and fail.
If I think back to my engineering school days, the definition of "engineering" for my classmates in civil and electrical engineering was to look up well defined procedures and calculations from a book and apply them. No deep understanding required to design a bridge that didn't fall down or a circuit that didn't overheat.
What's the equivalent for software? Design patterns were a bust. SICP is for cultists. It's a huge void. There is hardly any such discipline as "software engineering" yet.
Well, where's your word processor, buddy? Try writing one that achieves those goals - and offers what people want today, including syntax highlighting, linting, auto-completion, etc. - and come back to us...
Emacs has been able to do that for 25 years, if not more.
Also, you're conflating syntax highlighting/auto-completion with the time to render an update to an input. You don't need the popup telling you that, across 17 source files and 50k lines, "TextComplete()" was found to be a valid auto-completion and should be turned blue, just to draw an "x" on the screen in response to a keystroke.
There's no reference for this in the article, and it caught my attention - anyone have any idea what the author is talking about here? Never heard of this before
But the best explanation for why this problem persists even in teams of proficient engineers that I have seen comes from the 2005 GDC Keynote from John Carmack [1].
> I can remember clearly in previous projects thinking [...] I've done a good job on all of this, but wouldn't it be nice if I could sit back and really clean up the code, perfect the interfaces, and, you know, just do a wonderful, nice craftsman job [...] interestingly, this project I've had the time to basically do that, and I've come to the conclusion that it sort of sucks [...] there's a level of craftsman satisfaction that you get from trying to do what you do at an extreme level of quality, and one thing that I found is that's not really my primary motivation [...] that's not what's really providing the value to your end users [...] you can sit back and do a really nice polishing job, and it's kind of nice, but it's not the point of maximum leverage, I found.
So, as others have mentioned here, there is a threshold where extra performance provides less value to the project than, say, an extra feature.
It seems that in some cases we have crossed that threshold and some software has become comically bloated. I attribute the reason why these are not solved to the same reasoning. Refactoring an existing project to reduce the bloat would take too much time and effort from one single developer, that could be used somewhere else, although in the end everyone would benefit from it. So you are better off adding stuff to the dumpster fire and moving on.
It's sort of a tragedy of the commons [2] of software performance.
Modern text editors have to deal with proportional fonts, right-to-left writing, anti-aliasing, and a big bag of Unicode-related issues (Arabic is going to break every assumption you have about language and text editing).
Fantastic read on this subject from a few months ago:
You and me both buddy! 11 years here, you are not alone in this!
What helped me take my mind off this nagging feeling was bringing in a tool that is like a razor-sharp, brand-new scalpel. That was Nim. It's readable, small, fast, and spits out tiny statically compiled binaries.
Find your scalpel and slice and dice. Back to basics. Purity.
https://docs.gitlab.com/ee/administration/operations/unicorn...
because I've had to deal with that a bit. N.B. the last sentence here:
One other thing that stands out in the log snippet above, taken from GitLab.com, is that ‘worker 4’ was serving requests for only 23 seconds. This is a normal value for our current GitLab.com setup and traffic.
For example, how many authors on Medium proclaim themselves "Senior Software Engineers", but when you dig you find they've got maybe 5 years of experience doing web development, with no CS or engineering education - maybe something like a 36-hour "Web Development Bootcamp". Do people really not understand the definition of an engineer anymore?
From there they progress into the deeper parts of software, and create the atrocities to be found in the npm registry, which become dependencies of dependencies of dependencies that result in nightmares every time one needs to navigate to a website.
If it were possible to see the background and education of the numerous critics here, what would we find? If I (the developer as described by the OP) am surrounded by people like me, and the world is filled with people who think like me and create things at a similar cognitive level as my peers, would I not misjudge the collective level of quality I perceive to be acceptable? Smells like confirmation bias to me - maybe a few other biases too. Dunning-Kruger, anyone?
In their defense, the marketing campaigns created by large corporations to tip the supply-demand balance (the cost of employment) in their favor have, I think, been a big part of the problem. First the programmers, then the "Data Scientists", etc. - think of the amount of disappointment and student debt being created as these people eventually realise they've been sold something they're not suited for!
If we cannot critically look at our industry and admit our flaws, we cannot move forward as a collective.
I jumped straight from Android 2 to Android 8 development and was surprised myself, so did some investigation and have half an answer. The author is actually wrong here, on both looking different and additional functionality. However, the bloat is still far larger than it needs to be.
All the bloat comes from the AppCompat modules, which all the docs recommend to the point of it apparently being required if you don't know better.
AppCompat is for both supporting differing APIs and creating a consistent look and feel across different Android versions. Each Android version has its own visual design, which Google decided was a bad thing, opting to use AppCompat so the most recent designs (and in some regards design functionality like coloring the selection handles) were used in older versions of Android.
To do this, however, it includes a crap-ton of images. The build scripts are supposed to remove the unused ones, but even with maximum trimming enabled they can only remove somewhere around 10%. There are hardcoded inclusion rules for some of the AppCompat Java code that no one's found a way to override, which in turn reference the images - so they get kept as well, even if your app never uses them.
As for differing APIs, notifications have changed massively over the years. The interface is so different you do actually need the AppCompat subset for notifications to target different Android versions (and that can be used separately from the rest of AppCompat), but there also have been a huge number of new features added to notifications - such as delay settings, shortcuts, icons, even full-on fancy designs, that didn't exist early on.
I'm calling this only half an answer because there's no apparent reason for some of the notification API changes, and the build scripts/AppCompat can certainly be significantly improved to remove more cruft. I have a sneaking suspicion that it's not done because this is low-hanging fruit for handing over signing keys to Google for their "optimized" builds...
I am a dependency skeptic. I think that you need them to do big stuff, but should probably avoid them for small stuff.
High-quality dependencies can have a drastic impact on the quality of your software, but so can low-quality dependencies.
I think we are at the tail-end of a "wild west" of dependencies.
When the dust settles, there will be a few really good, usable and stable dependencies, and a charnel pit, filled with the corpses of all the crap dependencies, and, unfortunately, the software that depended on them.
The only way out of this is to rethink the web. Which is a hard one to tackle.
Boot used to be a small number of seconds. Now (on the rare occasions I'm forced to actually boot/reboot) I start the machine and bugger off to make coffee while it does its thing. I don't know what takes so fucking long, but it's in the range of 'several minutes'.
Starting apps, likewise. I just started up an infrequently used picture-editing app a little while ago... upwards of a full minute of 'loading this crap', 'loading this other crap', etc.
And let's not mention Atom (an Electron app unless I've misunderstood something) -- so laggy for some things that should be near-instant that I'm developing an active hate.
Alright: get the hell off my lawn now!
I also see the problem. I hate bloated websites. I hate all those little unnecessary fancy CSS animations, I use uMatrix and regularly have to figure out which of those blahblahcdn.com domains (subdomains are whitelisted) needs to be enabled to make it work.
But even those webapps can be made to be "fast enough". Developers just have to be very careful about the design from the beginning and try to use as few dependencies as possible. Prefer simplicity and speed over fancy effects and unnecessary features no one asked for.
That's... not what Google Play Services is, like, at all. https://developers.google.com/android/guides/overview
because we are going to make editors using JAVASCRIPT, which was never meant to be used this way
Also - resources that are essentially free to use (customer/users CPU cycles, storage and bandwidth) will be consumed.
We as consumers are paying the bills for it all in different ways (electricity, new gadgets, cloud costs, etc.).
It depresses me to no end too, but I am not surprised in the least.
There were tiny groups trying to go frugal and solid. Remember suckless? I forget the other names; there's also Alan Kay's VPRI project with OMeta.
Maybe we should make a frugalconf. Everything at 25fps on an RPi Zero.
Of course they’re going to accept that delay, even if adding a few million numbers together should really take less than a millisecond.
Not everything needs to be super efficient. Most things are tuned for production cost and time. Efficient code isn't going anywhere. Relax guys.
> Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms and also process input, recalculate the world and dynamically load/unload resources. How come?
Text is very complicated. Does your 42-year-old Emacs support Unicode? And not just accents, but whole different scripts?
See https://news.ycombinator.com/item?id=21105625 for some discussion and a good link about the complexities of rendering text.
No one's forcing you to use a library. But if you do, they come with tradeoffs.
OK, things are slow and buggy. But we've got lots more things thanks to all the productivity we've gained from using libraries, etc. That means we collectively solve more problems for more people.
Purism is a nice idea, but ultimately probably not worth the effort until things become so bad that it is - at which point it becomes a differentiator. I mean, I don't care if a web page is twice the size of Windows 95, because my computer is way faster than a 486.
a) Lack of fundamental understanding of how computers work
b) Abstraction away from the bits, flops, shifts and pops
c) Quick sort
d) Electron.js
e) Ruby hipsters
f) Magical cloud computing
g) Software companies that know the cost of everything and the value of nothing
h) All of the above
All module folders should be at root level.
Need version 15 and version 16? Yep, different folders under the root: modulename_version, e.g. somelib_0.1.15 alongside somelib_0.1.16.
It would allow clear identification of old versions and much less duplication. Less bloat. No way to have duplicate copies of the same library.
I feel like this is a feature, not a bug of software dev. The fact that you can push updates out so instantaneously allows you to work incredibly incrementally. You can make barely functional, inefficient things (shortcut hacks) just to make sure that your product is something that people actually use before focusing on optimization. If users are willing to put in the extra work to "refresh" the page, then certainly there's some real problem you are solving.
Gas car technology matured, gas got more expensive, and cars got more efficient. Moore's Law isn't going to go on forever. We are hitting limits in battery life. Software will tend to get more efficient over time.
> The demand upon a resource tends to expand to match the supply of the resource (If the price is zero).
Have cheap hardware, software will expand to use more of it.
Parkinson's Law, generalized.
> I hope I’m not alone at this. I hope there are people out there who want to do the same. I’d appreciate if we at least start talking about how absurdly bad our current situation in the software industry is. And then we maybe figure out how to get out.
Okay, I'll respond, especially on the last part "how to get out".
For the problems and struggles in the OP, I've seen not all of them but too many and sympathize. Mostly though, I don't have those problems, and the main reason is in the simple, old advice in the KISS Principle where KISS abbreviates Keep it Simple Silly although the last S does not always abbreviate silly.
In particular my startup is a Web site and seems to avoid all of the problems in the OP. Some details:
(1) Bloated Unreliable Infrastructure?
My Web site is based on Microsoft's .NET with ASP.NET for Web pages and ADO.NET for access to SQL Server (relational database). The version of .NET I'm using is some 4.x. So far I've seen essentially no significant revisions or bugs.
For the software for my Web pages I just used one of the languages that comes with .NET. I wanted to select between two, the .NET version of C# and the .NET version of Visual Basic. As far as I can tell, both languages are plenty good ways to get to the .NET classes and make use of Microsoft's managed code, e.g., garbage collection, and their CLR (common language runtime) code. And IIRC there is a source code translator that will convert either language to the other, a point which suggests that the two languages are really deeply equivalent.
I've written some C code off and on for 20+ years; mostly I remember the remark in the Kernighan and Ritchie book on C that the language has an "idiosyncratic syntax" or some such -- I agree. I never could understand the full generality of a declaration of a function -- IIRC there is some code in the book that helps with parsing such declarations. I do remember that
i = ++j+++++k++
gets past some compilers, though strictly it isn't valid standard C (the tokenizer's "maximal munch" rule pairs the pluses up so that ++ ends up applied to a non-lvalue); I don't want any such code in my startup, and indeed my old tests showed that two compilers gave different results.
I find Visual Basic to have a less "idiosyncratic syntax" and a more traditional syntax closer to the original Basic and then Fortran, Algol, PL/I, Pascal, etc. So, my 100,000 lines of typing are in the .NET version of Visual Basic (VB).
For my Web site, part of the code is for a server process for some of the applied math computing. The file of the VB source code is 478,396 bytes long (the source code is awash in comments), and the EXE version is 94,720 bytes long. As far as I can tell, the code loads and runs right away. Looks nicely small and fast and not bloated or slow to me.
(2) A bloated IDE (integrated development environment).
I have no problems at all with IDEs. The reason is simple: I don't use one.
Instead of an IDE, I typed all my code, all 100,000 lines, into my favorite text editor, KEdit. It has a macro language, KEXX, a version of REXX, and in that language I've typed about 200 little macros. Some of those macros let KEdit be plenty good enough for typing in software.
E.g., I have about 4000 Web pages of .NET documentation from Microsoft's MSDN site. Many of the comments in my VB source code refer to some one of those pages by having the tree name on my computer of the HTML file; then a simple command displays the Web page. When reading code and checking the relevant documentation, that little tool works fine.
After all, VB source code and HTML code are just simple text; so are my Web site log files, the KEXX code, Rexx language scripting code, all the documentation I write either in the code or in external files (just simple text or TeX language input), etc. So, a good general purpose text editor can do well. And, "Look, Ma: I get to use the same spell checker for all such text!" The spell checker? ASPELL with the TeX distribution I use. It's terrific, really smart, blindingly fast, runs in just a console window.
For KEdit, it seems to load and run right away. I just looked and saw that what appears to be the main EXE file, KEDITW32.exe, is 1,074,456 bytes long -- not so bloated.
(3) Windows 10 Home Edition Reliability.
For a move, I got an HP laptop; it came with Windows 10 Home Edition. I leave it running 24 x 7. It hasn't quit in months. It appears that now the Microsoft updates get applied without stopping the programs I usually have running, e.g., KEdit, the video player VLC, Firefox, etc.
Using carefully selected options for ROBOCOPY, I do full and incremental backups of my files. I keep the ROBOCOPY log output; that output shows the data rate of the backup, and I have not seen that to grow slower over time. The disk in that laptop is rotating, and I've never done a de-fragmentation. So, I can't complain about performance growing slower from bloat, disk fragmentation, etc.
(4) Windows 7 64 bit Professional Server.
For a first Web server, I plugged together a mid-tower case with an AMD FX-8350 processor, 64 bit addressing, 8 cores, 4.0 GHz standard clock speed and installed from a legal CD and authentication code Windows 7 64 bit Professional SP1. As I left that pair running 24 x 7, occasionally it would stop with a memory error of some kind. I installed an update and never again saw any reliability problem in months of 24 x 7 operation.
Since then I looked into Windows 7 64 bit updates and concluded that (i) there was a big roll-up of about 2016 or some such; (ii) since then there have been updates and fixes monthly and cumulative since the big roll-up, and (iii) the updates for Windows 7 64 bits and Windows Server 2008 are the same.
I can believe that Windows Server 2008 long ran and still runs some of the most important computing in the world. So if my Windows 7 64 bit Professional has the same updates as Windows Server 2008, maybe for use as a server my Windows 7 installation will be about the most reliable major operating system in computing so far. Fine with me.
So, I am not screaming bloody murder about operating system or software reliability.
(5) Smart Phone Bloat and Reliability.
I have no problems with smartphones if only because I have no smartphone and don't want one: When smartphones first came out, I saw the display as way too small and the keyboard as just absurd. Heck, in my desktop computing I have a quite good keyboard but would like a better one and, of course, would like a larger screen -- no way do I want to retreat to an absurd keyboard and a tiny screen.
Next I guessed that there would be problems in security, bloat, reliability, system management, documentation, and cost. Maybe history has confirmed some of these guesses!
For a recent move, I got a $15 cell phone and did use it a few times. Then I junked it.
My phone is just what I want -- a land line touch tone desk set with a phone message function from my provider. Works fine. So, I have a phone with lower cost and no problems with keyboard, screen, security, bloat, or documentation. Ah, old Bell Tel built some solid hardware!
(6) Web Site Speed, Reliability, and Bloat.
My Web site apparently has few or no problems with speed, ....
Why? The site is simple, just some very standard HTML code sent from now old and apparently rock solid ASP.NET.
Fast? The largest Web page sends just 400,000 bits (about 50 KB).
The HTML used is so old that it should look fine on any device anywhere in the world with a Web browser up to date as of, say, 10 years ago.
The key to all of this? The KISS Principle.
YMMV!
Hmmm. Let's think about this.
I piled standout quotes below.
I think a big takeaway from the intersection of Bret Victor, Alan Kay, Jim Hollan and the ink&switch folks and your work is that the right dynamic interface can be the "place we live in" on the computer.
Victor shows a history of interactive direct manipulation interfaces, live environments where explorations of models or the creation of art go hand in hand with everything else related to that task: data input, explicit (programmatic) requirements and the visual output.
Hollan and ink&switch show the environment (ZUIs, canvas) can contain everything for doing work, the code alongside any manipulation of the viewport that can be conceived. Tools infinitely more advanced than Microsoft OneNote and designed 40 years ago.
From what I know about your work, I see another take on the environment I want to live in on the computer. I don't understand why I would want to lose power by stepping away from my language/interpreter/compiler/repl into a GUI or some portal when I can bring whatever is nice about GUIs or portals into my dynamic computing environment. I very much want a personal DSL or set of DSLs for what I do on the computer, and I want to be able to hook into anything, à la the middle mouse button in Plan 9.
The superior alternative to walled gardens and this absurd world of bloat and 'feature loss' (for lack of a better term for software engineering's enthusiastic rejection of history) seems to be known, and facets of it advocated by you and these others. It seems clear that "using the computer" needs to return to "programming the computer" and that to achieve that we need to fundamentally change "programming the computer" to be a more communicative activity, to foster a better relationship between the computer and the user.
Where is this work being done now? VPRI shut down 2 years ago, Dynamicland seems to be on hiatus? I am inspired most these days by indie developers who write their own tools and build wild looking knowledge engines or what they sometimes call "trackers."[1] And of course the histories and papers put forward by the above and their predecessors. And I play with my own, building an environment where I can write, draw, code, execute and interact with it all. I see no existing product which approaches what I want.
> Everyone is busy building stuff for right now, today, rarely for tomorrow.
> Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs.
> You need to occasionally throw stuff away and replace it with better stuff.
> Business won’t care. Neither will users. They have only learned to expect what we can provide.
> There’s no competition either. Everybody is building the same slow, bloated, unreliable products.
> The only thing required is not building on top of a huge pile of crap that modern toolchain is.
> I want something to believe in, a worthy end goal, a future better than what we have today, and I want a community of engineers who share that vision.
All of this bloated 'shitware' today is the result of it having been written by people who a) have no deeper understanding of what the computer is actually doing; also known as typical Python/Java/etc/etc/etc programmers, and/or b) simply not giving a damn about conservation of resources--as further evidenced by all of the other extremely wasteful and destructive habits they hold in their personal lives, and in their societies in general.
After all, this is the same civilization that's burning through increasingly vast quantities of oil at an astounding rate, despite the fact that previously existing abundant and cheap oil is nearly depleted, with no possibility of replenishment or replacement. So is it any surprise that foolish developers also burn through CPU and memory with reckless abandon?
Really, the problems we face aren't just in software; they're more about the foundations of our entire Western 'civilization.' Such problems generally tend to be rather intractable, in the historical view.
I'm working to construct, in my own computing life, something of a 'personal oasis', which is increasingly removed and estranged from all of the horrible things I see Other People out there having to suffer in their personal computing lives, thanks to talentless 'developers' who Just Don't Fucking Care. Some of these pricks actually have the audacity to call themselves 'engineers', even.
Efficiency is a selling point that most users don't care much about in most markets. There are efficient browsers out there, but everyone uses Chrome because those browsers are inferior to Chrome in many other ways - ways that are more important to the average browser user.
If there's a market for a more efficient software solution, go make it and get rich. Otherwise, I'm getting sick of the complaining.
The least competent programmers are the ones writing slow code. The least competent programmers are the ones working at the top of the stack.