I disagree. Our baseline for software has increased dramatically. If you don't care much about the added functionality or value of the new software, use Mosaic or Netscape 4.0 to browse the web.
There are obvious improvements in browsers which you are so used to that you forgot them: tabs, the ability to zoom, process-level isolation for websites, support for newer protocols, CSS, Unicode support, font rendering, things I'm probably not aware of, a built-in debugger and inspector in the browser, and so on.
Again, if you think that software hasn't advanced much, simply use the software from 10 or 20 years ago, and see if you can stand the experience.
I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to startup and use than any of the versions that came afterwards based on .NET. The built-in effects are a bit outdated, but it still runs most Photoshop compatible plugins.
And I still run Windows 2000 in an isolated offline VM for some of my work. Even emulated, it's blistering fast on modern day hardware. The entire OS runs great in less RAM than a Blogger tab (assuming the article's numbers about 500MB - 750MB RAM are correct).
There is some excellent and efficient new software being made (Sublime Text and Alfred spring to mind), but please, don't give me another Electron-based 'desktop' app.
Operating systems themselves take advantage of this abundance of memory by keeping things in memory for longer. I remember that my beefy 2GB-RAM computer from 10 years ago still paged processes and data out to disk when I had Photoshop CS and Firefox 2 side by side, but now that I have 32GB of RAM - and have had for the past 2+ years - I cannot recall experiencing any disk thrashing due to paging since then.
> I still use Jasc Paint Shop Pro 7 whenever I can. It's faster to startup and use than any of the versions that came afterwards based on .NET. The built-in effects are a bit outdated, but it still runs most Photoshop compatible plugins.
> And I still run Windows 2000 in an isolated offline VM for some of my work. Even emulated, it's blistering fast on modern day hardware. The entire OS runs great in less RAM than a Blogger tab (assuming the article's numbers about 500MB - 750MB RAM are correct).
> There is some excellent and efficient new software being made (Sublime Text and Alfred spring to mind), but please, don't give me another Electron-based 'desktop' app.
Ha, I too keep Paint Shop Pro 7 around for the reasons you mentioned.
And don't get me started on how text-centric websites used to load faster over a 56k connection than I can now get a JavaScript-abusing news site to load and render a simple article over fiber.
The problem is that we're not actually running applications on the web platform. We're running them on advertising platforms.
I'm pretty sure that roughly 99% of my browser CPU usage goes towards loading and running ads. And browsers are optimized for that task, which has consequences even when not running ads.
We have a business model and payments processing crisis, not a technology problem.
Now they've taken web apps and packaged them up to run their own web server and browser locally with even more abstraction layers that chew through your system resources.
What a strange world.
We had decent video, CSS, page updates, notifications, Unicode support and the like 10 years ago. We haven't gained that much since then. But page load times took a 5x hit.
Yeah some UI is nicer, and we do have better completion, real time is snappier and all.
But hell.
We lost most pagination, pages feel slow, there are so many of them, and sharing information went down the toilet in exchange for big echo chambers.
The only things that are clearly better are my hardware and bandwidth, both on mobile and computer, and professional services, whose quality went up a lot.
The web itself, the medium, feels more sluggish than it should given the powerhouse of infrastructure I have to access it.
If you look at where the bloat went, it's mostly three things:
- Images ballooned in size by a megabyte (without adding many extra requests)
Most likely culprits are retina class graphics and "hero graphic" designs.
- JavaScript sizes and the number of requests doubled
Corresponds to a shift from jQuery plus a few manually added plugins to npm/webpack pipelines and a massive number of (indirect) dependencies.
- video
Now that Flash has disappeared, it has been replaced by auto-playing video, which is even worse as far as page bloat is concerned.
All the actually useful bits of the web, text, images, etc would still work. I bet you could still shop on Amazon with one of those old browsers.
The power is being sucked up to run the 10MB+ of JS, Flash and other crap that ads add to every page.
Thanks to Google Tag Manager, we don't even get a say anymore. It's out of our hands. We can put hours into optimising animations and UX, but it's all in vain once the marketing companies get their hands on the GTM account. With enterprise-level clients changing marketing provider nearly every quarter, I'm sure many of the snippets aren't even being used anymore; they sit there because the new account manager is too afraid to remove them and no one truly understands what gets delivered to the customer's browser.
We will spend immense time architecting performant backend solutions and infrastructure, then decide that tens of thousands of completely disjointed lines of JavaScript - which we've never seen unminified and have no idea who wrote - are probably fine to deliver straight to the browser. Don't worry, caching will solve the network problem, and everyone has quad-core smartphones now, so it's probably okay.
Weird, the screen keeps flickering and scrolling isn't smooth. Oh well, probably a browser bug.
If I didn't have to trade files with others I could quite happily use Microsoft Office 97 in lieu of whatever the new thing is called.
The issue with web browsers is only slightly more complicated. I'd love to go back to a world where web pages didn't try to be computer programs, but that's obviously not going to happen for a while.
I could go back even further and use Microsoft Works.
But people complaining about the poor use that today's software makes of today's hardware are usually not talking about games.
The best games I played recently are all indie stuff that I could have played on a much older machine.
But I think the popularity of "retro" aesthetics and mechanics signals that progress in games is not at all linear.
Yeah, but that's not a consequence of better software!
My goto example is Blender. It has a footprint of 300MB, starts within 1-2 seconds and uses 70MB on startup.
Compare this to a program like 3ds Max and you will ask yourself where all your resources went.
I think today most software loads everything on startup, while programs like Blender use a modular approach.
But perhaps that's only because we had more time to figure the requirements out.
(For comparison, I ran Netscape 4 on a Pentium 100 with 64 megs of RAM.)
Try 1987's Ventura Publisher. That's 30 years ago. You could publish a whole magazine or a 1000-page book on a modest (<4MB) PC with a 286 processor. It has a great GUI, excellent usability and, dare I say, isn't slow at all.
On the IDE side, the Lisp environments on the Lisp machines of the early 80s, or the Smalltalk environment on the early-80s Xerox computers, have nothing to envy in the modern IDEs commonly used for languages like Java.
Old versions of Visual Studio, vim, emacs all come to mind.
If I can have the web from that era to go with it, I'm happy.
Wouldn't your users appreciate more features rather than optimisations most of them aren't going to notice? For the same block of time today compared to decades ago, you're going to create an app that uses more memory and CPU but has a lot more features. Developer time isn't free and needs to be justified.
I used to write DOS apps for 66MHz PCs and you'd spend an enormous amount of time optimising memory and CPU usage. This was similar for the initial batch of Android phones as well as you didn't get a lot of resources (e.g. loading a large image into memory would crash the app). Now I can create more features in way less time and rarely have to think about optimisations unless dealing with a lot of data.
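To put a number on the image-crash point: a decoded bitmap is far larger than the compressed file on disk. A quick sketch (the 8-megapixel resolution and 32-bit RGBA format are illustrative assumptions, not from the comment above):

```python
def decoded_size_mb(width, height, bytes_per_pixel=4):
    """RAM needed for a fully decoded bitmap (here 32-bit RGBA)."""
    return width * height * bytes_per_pixel / 1024 ** 2

# An 8-megapixel photo, decoded for display:
print(f"{decoded_size_mb(3264, 2448):.1f} MB")  # about 30 MB
```

Early Android devices capped each app's heap at a few tens of MB, so a single decoded photo really could sink the whole process unless you downsampled while decoding.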
I think expecting every software developer to release software that uses the absolute minimum amount of CPU and memory possible is completely unrealistic. The people commenting that developers don't know how to write optimised code anymore have different priorities to a typical business. I know how to make low level optimisations and have even resorted to assembly in the past but I'm only going to these lengths when it's absolutely required.
For commercial projects, it doesn't make any business sense to optimise more than is needed even if that makes software developers squirm.
Allowing video content, using images and a JS framework is alright. Not making sure the first load is under 1MB is, however, unprofessional.
I get that some sites do need big pages: video tubes, big SPAs, etc. But most sites are not YouTube or Facebook. If your blog post takes 3MB to display text, it's doing it wrong.
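If you want to check a page against that kind of budget, the main document's weight is easy to measure from the standard library (this counts only the HTML itself, not scripts, images or other subresources; the URL is whatever page you care about):

```python
import urllib.request

def page_bytes(url):
    """Return the size in bytes of the response body for one URL."""
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

# e.g. page_bytes("https://example.com/") - a text-centric blog post's
# HTML alone should come nowhere near 3MB.
```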
> If your blog post takes 3MB to display text, it's doing it wrong.
I agree that if it's not much effort and it makes a big difference (e.g. to mobile users), you should make the optimisation.
I'm more talking about people saying that instead of having one Electron app to support multiple platforms, a developer should be writing several native apps. The latter is a huge amount of development time to save a few hundred MB of RAM that most users probably wouldn't notice anyway. Nice to have, but it doesn't make business sense most of the time.
I think what operating systems should do is allow users to set per-app/per-website quotas, and use sensible defaults.
Developers should get the message that no, we can't use all the resources available on a given system just for our particular app.
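For what it's worth, the mechanism half-exists already: POSIX rlimits can cap a single process; it's the friendly per-app UI that's missing. A minimal sketch (Linux-specific behaviour; the 1 GiB cap is an arbitrary example):

```python
import resource

def cap_address_space(limit_bytes):
    """Soft-cap this process's address space via RLIMIT_AS."""
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))

def try_alloc(n_bytes):
    """Attempt one big allocation; report whether the OS allowed it."""
    try:
        bytearray(n_bytes)
        return True
    except MemoryError:
        return False

cap_address_space(1 << 30)   # quota: 1 GiB
print(try_alloc(1 << 20))    # a modest 1 MiB allocation still works
print(try_alloc(4 << 30))    # 4 GiB is refused under the quota
```

An OS-level quota feature would presumably wrap something like this (or cgroups) in sensible defaults and a consent prompt.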
This sounds sort of like what classic Macs did: there were some settings in the application's "Get Info" dialog to adjust the "current size" of the application (along with a note as to what the "preferred size" was); if there wasn't enough free memory the program would refuse to start.
In practice, this is a terrible idea: it generally turned into "well, this program isn't working right, what if I fiddle with its memory allocation?" and did nothing to actually help the user. (To be fair, the setting was necessary as a result of technical limitations, but I don't see it working out any better if it were implemented intentionally.)
Commercial developers will use as much memory as they can get away with; OS vendors would disable the quota system (or set the quota to 100% of whatever RAM the device has) since they don't want apps to "break" on their laptop/tablet/phone and work on their competitors'.
I can't see how this would work. What would the sensible default be for a computer game? A text editor? An IDE? A video editor? Setting their own quotas is way over common users' heads as well. How would you avoid "This app wants to use more resources than the default. Allow?" popups with low defaults?
In any case, macOS, which I'm running as my personal system, probably ain't gonna get that feature anytime soon.
Tech Debt is rewarded.
Doing something for the first time, almost by definition, means one does not really know what one is doing and is going to do it somewhat wrong.
Hiring less skilled labor (cheap coder camps, for example) to implement buzzword-bingo solutions gets you to a place where all the software contains large chunks of its substance coming from people doing it for the first time... and not getting it 100% right.
As we never go back to fix the tech debt we end up building stills for the borked and less than great. When that structure topples over we start over with a new bingo sheet listing the hot new technologies that will fix our problems this time round for sure.
I'd think that a good fraction of the current language expansion is that the older languages are too complex and filled with brokenness. Green fields allow one to reimplement printf, feel great about it, and get rewarded as a luminary in the smaller pond.
.....
oh... and well, the cynic in me would argue planned obsolescence at the gross level. No one buys the new stuff unless there's new stuff.
BTW is "stills" a typo for something? (shims?)
The Mythical Man-Month has a chapter on this titled "Plan to Throw One Away". Brooks argues that the program/OS/whatever you build the first time around should be considered a prototype. Then you reflect on what problems you encountered, what went well, and so on, and use those insights to guide you when you start over.
It seems like such an obvious idea, but Brooks wrote that almost 50 years ago, and it seems like only very few people listened. Primarily, I guess, because much software is already written under highly optimistic schedules - telling management and/or customers that you want to throw the thing out and start over is not going to make you popular.
I think the biggest culprits are abstraction layers. You use a framework that uses JS that uses a VM in a browser that runs on an OS. The time when showing text was done by the application writing a few bytes into VRAM is long gone. Each layer has its internal buffers and plenty of features that you don't use but are still there because they are part of the API.
Another one is laziness: why bother fitting in 1MB of RAM when we have 1GB available? Multiplied by the number of layers, this becomes significant.
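That per-layer slack is visible even within a single runtime. A sketch comparing a boxed Python list of integers with a packed array of the same values:

```python
import sys
from array import array

n = 100_000
boxed = list(range(n))           # every element is a full Python object
packed = array('q', range(n))    # raw 8-byte machine integers

# getsizeof(list) counts only the pointer table, so add the int objects
# (a slight overcount: small ints below 257 are shared singletons).
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)
packed_bytes = sys.getsizeof(packed)

print(boxed_bytes / packed_bytes)  # the boxed form is several times larger
```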
Related to laziness is the lack of compromise. A good example is idTech 5 game engine (Rage, Doom 2016). Each texture is unique, there is no tiling, even for large patches of dirt. As expected, it leads to huge sizes just to lift a few artistic constraints. But we can do it so why not?
Another one is the reliance on static or packaged libraries. As I said, software now uses many layers, and effort was made so that common parts are not duplicated, for example by using system-wide dynamic libraries. Now these libraries are often packaged with the app, which alleviates compatibility issues but increases memory and storage usage.
There are other factors such as an increase in screen density (larger images), 64-bit architectures that make pointers twice as big as their 32-bit counterparts, etc...
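The pointer-size factor is directly checkable: `struct.calcsize('P')` reports the native pointer width, and the cost compounds across every pointer-heavy structure:

```python
import struct

ptr = struct.calcsize('P')   # 8 on a 64-bit build, 4 on a 32-bit build
print(f"{ptr} bytes per pointer")

# Rough extra cost of 64-bit pointers for a million-pointer structure:
extra_mb = 1_000_000 * (8 - 4) / 1024 ** 2
print(f"{extra_mb:.1f} MB extra")   # roughly 3.8 MB per million pointers
```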
Related: I just went to a news site that had this absolutely gorgeous, readable font called "TiemposHeadline" for the headlines. That font was loaded through the @font-face interface of CSS, so that's probably somewhere in the ballpark of 1MB.
That I can so casually navigate to find the name of the font and how it's getting loaded is due to devtools, which is tens of megs of storage space plus whatever chunk of memory it's eating.
I think examples like Doom are misleading. Those are essentially hand-compressed diamonds of software. Some of the resource-saving techniques used there are bona fide legends. It should come as no surprise that an increase in availability of RAM and storage space causes the software to accordion out to fill the available resources in return for ease-of-development.
That said, I'm all for software minimalism movements as long as the functionality and usability remains roughly the same.
The rationale behind megatexture is that storage capacity increases exponentially but our perception doesn't. There is a limit to what our eyes can see. In fact, for his future engines, John Carmack wanted to go even further and specify entire volumes in the same way (sparse voxel octrees).
And sure, the way megatexture is implemented is really clever, and yes, it is for a good reason, but it doesn't change the fact that it produces some of the biggest games on the market (Doom is 78GB).
When I said no compromise, it is no compromise for the artists. The whole point of megatexture is to free them from having to deal with some engine limitation. They don't have to be clever and find ways to hide the fact that everything is made of tiles, they just draw. And yes, this is a good thing, but a good thing that costs many gigabytes.
Yeah, it probably doesn't really help the situation when you build several layers of abstraction in an interpreted language that runs in a virtual space on a virtual machine using another abstraction layer running on virtual cores that go through complex OS layers that somehow eventually possibly map to real hardware.
As a developer, you rarely care about memory usage; as a web developer, you have limited influence on CPU usage. And since most managers care only about getting the project done on time and within the budget, this is what most developers concentrate on.
I think that is the crux of the issue succinctly put.
Software that comes out of nonprofits or the free software movement is arguably better built and treats the user better.
Software makes the world more complex faster than we can understand it, so even though we have more knowledge we understand less about the world.
We used to know how cars work. We used to know how phones work. Now we don't, and never will again.
The implications are unsettling.
Imagine a world populated entirely by IOT devices. Imagine, for a moment, starting with a blank slate and trying to make sense of these devices using the methods of Science. They are so complex and their behavior governed by so much software that it'd be impossible to make a local model of how the device actually worked. You simply would not be able to predict when the damn thing would even blink its lights. When the world gets to this point...One would have to understand how software worked, in many different programming languages; kernels, scripts, databases, IO, compilers, instruction sets, microarchitecture, circuits, transistors, then firmware, storage...it'd be impossible to reverse engineer.
I mean how many times has someone called you a Wizard when you've fixed some random piece of tech or gotten something working for them?
"Yeah nah but when you turn on input A, input B turns off so we know how B works."
Philip K. Dick wrote a short story on this, some 60 years ago: https://en.wikipedia.org/wiki/Pay_for_the_Printer
AOT to native code via NGEN, or JITed on load.
The only .NET variant from Microsoft that wasn't compiled to native code before execution, was .NET Micro Framework.
Now, .NET and the JVM might be a problem since those VMs tend to be resource hogs (after all, both use GC methods that allocate tons of memory, whereas something like Python or even classic VB uses reference counting - and even then, there are languages that don't use reference counting but some other method of GC and are still fast). But I don't think you should put all interpreted languages in the same box.
Also, you should not put all GC languages in the same box, as many do allow for AOT compilation to native code and support value types and GC-free memory allocation as well.
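The reference-counting behaviour mentioned above is observable in CPython itself: counts move deterministically as references are created and dropped, which is why most objects can be freed immediately rather than waiting on a tracing collector:

```python
import sys

obj = []
base = sys.getrefcount(obj)   # the call itself holds one temporary reference

alias = obj                   # a second reference to the same list
assert sys.getrefcount(obj) == base + 1

del alias                     # the count drops again, immediately
assert sys.getrefcount(obj) == base
print("refcount tracked deterministically")
```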
It is cheaper to develop a .NET app than a C app. Cheaper in development and in maintenance.
It is cheaper to not care about efficient data management or indexed data structures.
What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.
I think this is true, but I disagree that it's inherently true. Slapping a UI together with C and GTK is pretty straightforward, for instance; here is a todo list where I did just that (https://gitlab.com/flukus/gtk-todo/blob/master/main.c). It's not a big example, but building the UI and wiring the events was only 40 lines of code; it's the most straightforward way I've ever built a UI. More complicated things like displaying data from a database are harder, but I think this comes down to the libraries/tooling; the .NET community has invested much more time in improving these things than the C community.
> What we're losing in efficiency, we gain in code readability, maintainability, safety, time to market, etc.
I don't think we've exhausted our options to have both. We can build things like DSLs that transform data definitions into fast and safe C code, for instance. Imagine if instead of something like dapper/EF doing runtime reflection, we could build equivalent tools that are just as easy to use but do the work at compile time. Or we could do it via Rust's kickass compile-time metaprogramming.
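The trade-off can be sketched in toy form: reflection resolves the field name on every access, while a generator pays that cost once and hands back a specialized function (the names here are made up for illustration, not any real library's API):

```python
# Reflection: the field name is looked up on every single call.
def get_reflect(obj, field):
    return getattr(obj, field)

# Generation: build a specialized accessor once, then call it directly.
def make_getter(field):
    namespace = {}
    exec(f"def getter(obj): return obj.{field}", namespace)
    return namespace["getter"]

class Row:
    def __init__(self, name):
        self.name = name

get_name = make_getter("name")           # one-time "compile" step
print(get_name(Row("ada")))              # ada
print(get_reflect(Row("ada"), "name"))   # ada, via reflection each time
```

A real compile-time version (Rust macros, C# source generators) would move the `make_getter` step out of runtime entirely.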
https://www.devexpress.com/Products/NET/Controls/WPF/
Also are you sure your C app will have a high score on clang and gcc sanitizers?
However, if you pick .NET vs Delphi, you will see it is easier to develop UI apps (and apps in general) in Delphi than in .NET.
Even today.
The exception is web apps, and web apps are another big problem, one that "normalizes" bad architecture, bad languages and bad ways to solve everything.
But I'd definitely be interested in any studies that have tried to measure these over long time periods.
In fact, I think there is a sort of casual indifference to the first two that frequently borders on criminal neglect. Why bother with them when the "time to market" driver of methodology selects for the most easily replaceable code?
Safety is also debatable and mostly accidental. Most of the languages that are fast and "easy" to develop in rest on a core of C or C++, and are really only as safe as that code. Safer, because there may be fewer foot guns, but not necessarily "safe."
In the last 5-10 years, there has been almost no increase in requirements. People can use low-power devices like Chromebooks because hardware has gotten better/cheaper while software requirements haven't kept up. My system from 10 years ago has 4GB of RAM - that's still acceptable in a laptop, to say nothing of a phone.
If you're going to expand the time horizon beyond that, other things need to be considered. There's some "bloat" when people decide they want pretty interfaces and high-res graphics, but that's not a fair comparison. It's the price you pay for a huge 4K monitor or a retina phone. Asset sizes are different from software.
I won't dispute that the trend is upward with the hardware that software needs, but this only makes sense. Developer time is expensive, and optimization is hard. I just think that hardware has far outpaced the needs of software.
In the case of front-end development also "Developer time is paid by the company while hardware is paid by the users."
This is basically a nicer way to put the "lazy developers" point from the article, but I think that's actually important.
The problem is that this seems to create all sorts of anti-patterns where things are optimized for developer laziness at the expense of efficiency. E.g., adding a framework with layers of JavaScript abstraction to a page that shows some text - after all, the resources are there, and it's not like they could be used by something else, right?
We as a society have voted with our wallets that yes, we really really want a process that is efficient on creating more features within the same amount of developer-time instead of a process that creates more computationally efficient features.
The increased hardware capacity has been appropriately used - we wanted a way to develop more software features faster, and better hardware has allowed us to use techniques and abstraction layers that allow us to do that, but would be infeasible earlier because of performance problems.
It's not an anti-pattern that occurred accidentally, it accurately reflects our goals, needs and desires. We intentionally made a series of choices that yes, we'd like a 3% improvement in the speed&convenience of shipping shiny stuff at the cost of halving the speed and doubling memory usage, as long as the speed and resource usage is sufficient in the end.
And if we get an order of magnitude better hardware after a few years, that's how we'll use that hardware in most cases. If we gain more headroom for computational efficiency, then we've once again gained a resource that's worthless as-is (because speed/resource usage was already sufficiently good enough) but can be traded off for something that's actually valuable to us (e.g. faster development of shinier features).
Sites cause the UI to hang for seconds at a time. Switching between tabs can take an age. Browse to some technology review site and you sit and wait for 10 seconds while the JS slowly loads a "please sign up for our newsletter" popover.
We wanted multi-tasking OSes, so that we could start one program without having to exit the previous one first. That made the OS a lot bigger.
Eventually, we got web browsers. Then Netscape added caching, and browsers got faster and less frustrating, but also bigger. And then they added multiple tabs, and that was more convenient, but it took more memory.
And they kept adding media types... and unicode support... and...
We used to write code in text editors. Well, a good IDE takes a lot more space, but it's easier to use.
In short: We kept finding things that the computer could do for us, so that we wouldn't have to do them ourselves, or do it more conveniently, or do something at all that we couldn't do before. The price of that was that we're dragging around the code to do all those things. By now, it's a rather large amount to drag around...
For example IDEs: Visual Studio in 2017 is certainly better than Visual Studio in 1997, but do those advancements really justify the exponential growth in hardware requirements?
How'd we get so little usable functionality increase for such a massive increase in size/complexity?
Fundamentally, modern systems are much more usable - in every sense of the word. Modern IDEs are more accessible to screen readers, more localizable to foreign languages including things like right-to-left languages, do more automatic syntax highlighting, error checking, test running, and other things that save developer cycles, and on and on. Each of these makes the program considerably more efficient.
Is it? 97 might be a bit extreme, but the other day I opened an old project which was still on VS2010, and I was struck by how much faster 2010 was while still having nearly every VS feature that I wanted. They're slowly porting VS to .NET and paying a huge performance penalty for it.
For example, the web might be filled with redundant and bloated software, but the real problem is that the browser has become the nexus of virtualization, isolation, security, etc. for almost everyone, from the casual user to hardcore admins, and for every piece of software from frameworks/utilities to full-blown SAPs. It's like we have all reached a common understanding of what comprises a "good" application, but then we lazily decided to just implement these things inside another app. I mean, webassembly is great and all, but is it wise?
I don't think it's about IPC or RAM or (n+1) framework layers that each include "efficient list functions". I think it's about the incremental, deliberate, and fallacy-laden decisions that assign more value to "new" than to "improved".
So just loading from disk, decompressing, putting it in RAM, then moving it around and applying visual effects is probably half of the reason everything is slow. Those "bloat" graphs should also mention screen resolution and color depth.
I might have done the same on my 2000-era machine as I do today (browse the web, listen to music, program, maybe edit a picture or two), but I'll be damned if I had to do all this in Windows 98 at a resolution of 800x600 again!
We could waste some words here on how display resolution didn't keep up, due to Windows being crap and people being bamboozled into thinking 1366x768 is "HD" or "HD Ready". 800x600 vs 1366x768 is only double the pixels and barely more vertical.
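The raw framebuffer arithmetic, for the curious (assuming uncompressed buffers; 1 byte per pixel for 8-bit indexed colour, 4 bytes for 32-bit colour):

```python
def framebuffer_mb(width, height, bytes_per_pixel):
    """Size of one uncompressed framebuffer, in MiB."""
    return width * height * bytes_per_pixel / 1024 ** 2

print(1366 * 768 / (800 * 600))        # ~2.2x the pixels, as noted above
print(framebuffer_mb(800, 600, 1))     # ~0.46 MiB at 8-bit colour
print(framebuffer_mb(1366, 768, 4))    # ~4.0 MiB at 32-bit colour
print(framebuffer_mb(3840, 2160, 4))   # ~31.6 MiB for 4K at 32-bit
```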
In any case, glad I was hosted on Google infrastructure but embarrassed by the bloated, default Blogger template.
Interested in suggestions for simple, lightweight alternative.
Medium, Square, WordPress, etc. all seem to suffer from similar insane bloat.
Edit: There was kind of a nod, but more about the Blogger editor than the reader view.
https://en.wikipedia.org/wiki/The_Mythical_Man-Month
Note NASA controlled the moon missions with an IBM 360/95 that had something like 5 MB of RAM, 1GB of disk, and about 6 million instructions per second.
Today an internet-controlled light switch runs Linux and has vastly larger specifications. Connecting to WiFi is more complex than sending astronauts to the moon!
And an army of technicians available around the clock to keep it working. Whereas your IoT light 'just works' and isn't expected to require any support at all.
As for the high complexity of IoT things, I don't think the extra complexity helps reliability, security, etc.
Maybe a lot of what those computers were doing was just raw computation, much like a DSP, to control trajectories, and nothing more. Something like a big calculator, with some networking capability to receive and send a few sets of instructions.
NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
0 979612 178064 108084 S 0,0 1,1 0:01.38 chromium-browse
0 3612044 175208 128552 S 0,0 1,1 0:01.83 chromium-browse
0 1372444 92132 67604 S 0,0 0,6 0:00.27 chromium-browse
0 1380328 90492 58860 S 0,0 0,6 0:00.62 chromium-browse
0 457928 87596 75252 S 0,0 0,5 0:00.67 chromium-browse
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 9309 59.0 4.6 2020900 282568 pts/2 Sl 09:46 0:01 /usr/lib/firefox/firefox
user 9369 8.0 1.7 1897024 104800 pts/2 Sl 09:46 0:00 /usr/lib/firefox/firefox -contentproc -...
However, when I look at the "available" column in free(1), it looks much better. Only a 170 MiB increase when Firefox is started:
$ free
total used free shared buff/cache available
Mem: 6115260 598900 4324616 41748 1191744 5205960
Swap: 0 0 0
$ free
total used free shared buff/cache available
Mem: 6115260 781304 4116456 68908 1217500 5032360
Swap: 0 0 0
PID COMMAND %CPU TIME #TH #WQ #PORT MEM PURG
44688 Safari Techn 0.0 00:03.41 10 3 311 51M 2648K
Safari (original) uses ~300-400MB with 20-100 tabs open. I think it uses a different way of storing tabs which are not on your screen at that moment. (I didn't check Safari (original) for nothing-open-and-idle RAM usage, simply because I didn't want to have to reload all my tabs, so I booted Safari (Technology Preview) to quickly test it.)
The article is focused on machine efficiency. Human efficiency also matters. If you want to save cycles and bytes then use assembly language. Geez, we had the assembler vs fortran argument back in the 70's. Probably even in the 60's but I'm not that old. Guess what? High level languages won.
Next.
Hey, Notepad is much smaller, leaner, and faster than Word! Okay. But Word has a lot of capabilities that Notepad does not. So is it "bloat" or is it "features" ? Guess what, Windows is a lot bigger and slower than DOS.
Imagine this argument from a business person:
If I can get to market six months sooner than my competitor for only three times the amount of RAM and two times the amount of CPU horsepower -- IT IS WORTH IT! These items cost very little. You can't buy back the market advantage later. So of course I'll use exotic high-level languages with GC. I'm not trying to optimize machine cycles and bytes, I'm trying to optimize dollars.
This is a ridiculous straw man. It is completely possible to write efficient software in high level languages, and no one is suggesting people extensively use assembly language. Actually, in many cases it is very difficult to write assembly that beats what's generated by an optimizing compiler anyway.
Dev time is the most expensive resource.
> The Blogger tab I have open to write this post is currently using over 500MB of RAM in Chrome.
If that is so, why post it and have it use a similar amount of RAM on other people's machines? If they know sooo much about software, why even use Blogger, a site whose heyday was 15 years ago?
At a big company you could argue that requirements writers need to be technical, but once you've done a startup you'd know that you're in an army, not in a fixed building that needs to be architected once. The customer and your enemies are always on the move, and you have to keep moving onto new terrain as the money and your business move there. Build with a code of conduct for your developers, and allow the code base to evolve with modules or some other approach.
It makes hardware cheaper for everyone, even nerds who use terminal and lightweight WMs ;-)
I understand this is edge-case material, so such ideas need not apply, but it seems like a fairly easy idea to implement, and one of those "oh, that's really nice" features customers stumble across.
One cost of the increasing breadth in the industry is that if we want to have a lot more software then obviously it can't all be written by the same relatively small pool of expert programmers who wrote software in the early days. With more software being written by people with less skill and experience, many wasteful practices can creep in. That can include technical flaws, like using grossly inefficient data structures and algorithms. It can also include poor work practices, such as not habitually profiling before and after (possibly) optimising or lacking awareness of the cost of taking on technical debt.
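The profiling habit mentioned above is cheap to adopt. As a minimal sketch, here is how a before-and-after measurement might look with Python's standard-library profiler (the `build_report` function is a made-up example, not anything from the thread):

```python
import cProfile
import io
import pstats

def build_report(n):
    # Naive string building in a loop; fine for a demo, but the kind of
    # habit that profiling would flag before you bother "optimising" it.
    out = ""
    for i in range(n):
        out += f"line {i}\n"
    return out

def profiled(fn, *args):
    """Run fn under cProfile; return (result, human-readable report)."""
    prof = cProfile.Profile()
    prof.enable()
    result = fn(*args)
    prof.disable()
    buf = io.StringIO()
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(3)
    return result, buf.getvalue()

result, report = profiled(build_report, 1000)
```

Running the same harness before and after a change gives you numbers instead of a hunch, which is the whole point of the "profile before and after" practice.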
Another cost of the increasing diversity in the software world is that to keep all that different software working together, we see ever more generalisations and layers of abstractions. We build whole networking stacks and hardware abstraction libraries, which in turn work with standardised protocols and architectures, when in days gone by we might have just invented a one-off communications protocol or coded directly to some specific hardware device's registers or memory-mapped data or ioctl interfaces.
There is surely an element of deliberately trading off run-time efficiency against ease of development, because we can afford to given more powerful hardware and because the consequences of easier development can be useful things like software being available earlier. However, just going by my own experience, I suspect this might be less of a contributory factor than the ones above, where the trade-off is more of an accident than a conscious decision.
But I do agree that software could be a bit faster nowadays.
I'm currently on a Mac and a PC. The Mac's CPU is at maybe 5% when I'm doing most things.
The PC is at 1%.
I'm using half the PC's memory and 3/4 of the Mac's.
These are not up-to-date, high-memory, or high-performance machines.
Have a look at your own machine. Surely for most of us it's the same.
And that memory usage is mostly in one application - Chrome. The only bloat that hurts a bit now is web page bloat. And on a good connection this isn't an issue either.
It's also different on phones where application size and page size seems to matter more.
I have a 5 minute mp3 that takes more space than my first hard drive had and some icons on my desktop that take more space than my first computer had RAM.
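That claim is plausible under reasonable assumptions. The bitrate and drive size below are illustrative guesses, not the commenter's actual numbers:

```python
# A 5-minute MP3 at 320 kbit/s versus a typical mid-80s 10 MB hard drive.
bitrate_kbps = 320                   # assumed encoding bitrate
seconds = 5 * 60
mp3_bytes = bitrate_kbps * 1000 // 8 * seconds   # kilobits/s -> bytes/s -> bytes
mp3_mb = mp3_bytes / 1_000_000       # 12 MB

old_drive_mb = 10                    # assumed capacity of the "first hard drive"
mp3_bigger = mp3_mb > old_drive_mb   # True under these assumptions
```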
Whether that will continue to hold I don't know, mobile has certainly pushed certain parts back towards caring about efficiency (though more because it impacts battery life).
If you remove a constraint people stop caring about that constraint.
The old school geek in me laments it sometimes but spending twice as long on developing something to save half the memory when the development time costs thousands of dollars and twice as much RAM costs a couple of hundred seems..unwise.
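The trade-off described above is easy to put numbers on. All the figures here are illustrative assumptions, not real project data:

```python
# Assumed figures: developer time vs. doubling a machine's RAM.
dev_cost_per_week = 3000        # fully loaded weekly cost, USD (assumption)
extra_weeks_to_optimise = 2     # doubling dev time on a two-week task
ram_upgrade_cost = 200          # cost to double one machine's RAM (assumption)

optimise_cost = dev_cost_per_week * extra_weeks_to_optimise   # 6000
hardware_cost = ram_upgrade_cost                              # 200

# For one machine, buying RAM wins by a wide margin. But the balance
# flips once the software ships to many machines:
break_even_machines = optimise_cost / ram_upgrade_cost        # 30 machines
```

Which also explains the sibling comment about phones: once the same bloat lands on millions of devices the user can't upgrade, the economics reverse.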
Has it? Try using a low-end phone with something like 8GB of internal storage; mobile apps are ridiculously slow and bloated. It's to the point where I haven't looked in the Play Store for years because I simply don't have enough room on my phone. That means the dev community has screwed itself over with wastefulness.
In the past, computational complexity was lowered by arbitrary size limits. e.g. if you had a O(n^2) algorithm you might cap n at 10 and now you have a O(1) algorithm. Job done.
Now, computational complexity is lowered by aggressive use of indexing, so you might lower your O(n^2) algorithm by putting a hash table in somewhere, and now you have an O(n) algorithm. Job also done.
The practice of putting arbitrary size limits on everything has almost died out as a result.
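Both tricks the parent describes can be sketched in a few lines; the duplicate-finding task here is just an illustration:

```python
def has_duplicate_quadratic(xs):
    # O(n^2): compare every pair of items.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

MAX_ITEMS = 10

def has_duplicate_capped(xs):
    # Old-school fix: cap n, so the quadratic loop is bounded by a
    # constant and the whole thing is "O(1)". Job done.
    if len(xs) > MAX_ITEMS:
        raise ValueError(f"at most {MAX_ITEMS} items supported")
    return has_duplicate_quadratic(xs)

def has_duplicate_indexed(xs):
    # Modern fix: a hash set turns the pairwise scan into O(n),
    # with no arbitrary size limit. Job also done.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```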
There are also a few graphs to make the author feel like he is a scientific researcher writing a paper instead of what he is actually doing, which is posting a question that, quite frankly, could with little extra thought fit in a tweet.
It's a serious problem with no clear solution.
It seems to me that new languages prioritize quick iteration over efficient machine operation. The easier a language is to write and interpret, the faster an outfit can churn out an application. Exponential growth in computing power has been sufficient to absorb these collective "shortcuts". Thus, it is not being taken advantage of properly.
The CS/engineering fields have become more of a gold rush than thoughtful trades. Boot camp management seeks profit, and trainees seek to quickly fill high-paying jobs. It all culminates in this situation where code doesn't need to be clever and thought out - just created A.S.A.P. to handle whatever trending niche market or "low-hanging fruit" there is. The work of these products gets handled server-side, where electricity for cooling is a fundamental expense.
From a fundamental level though, my hunch would be how modern development takes modularization / abstraction to a type of extreme. Imagine a popular Node.js module and how many dependencies it has and how many dependencies its dependencies have.
It's not hard to imagine a lot more computing power is required to handle this. But that's ok to decision makers, computing power is cheap. Saving developers time by using modularized developments brings more cost/profit benefits, like what Dan said.
PS: the link on Visual Studio. Oh wow, what fond nostalgic memories it brings me :)
Users are showing "a preference for faster smaller software" - the author of that blog is one of them, and I'm another. But even the best software has to be passed to marketers before it can reach your hands. There are some small, efficient programs out there, but they're overlooked because they don't pay.
So, why are you using Blogger instead of emacs/vi/notepad to write a static HTML page?
Apparently the author seems to think that all that bloat DOES give him something, no?
Have people really forgotten their computing history so soon?
Let's roll back the clock. Windows 95 ran for a total of 10 hours before blue screening. Windows ME ran for -2 minutes before blue screening and deleting your dog.
Roll back further. IBM was writing software not for you. Not for your neighbor. They were writing software for wealthy businesses. Bespoke software. Software and hardware that cost more than you make in a lifetime.
Software, today, represents responses to those two historical artifacts.
1) At some point software became complex enough that we discovered something we didn't know before ... programmers are really bad at memory management. Concurrently, we also realized that memory management is really important. Without it, applications and operating systems crash.
And yes, this point was hit roughly around Windows 95. You really couldn't use Windows 95 for more than a day without something crashing.
So the programming ecosystem responded. Slowly and surely we invented solutions. Garbage collected languages and languages without manual memory management. Java, .NET, Python, etc. Frameworks, layers of abstractions, etc.
Now fast forward to today. I'm absolutely shocked when an app crashes these days. Even games have become more stable. I see on average maybe 1 or 2 crashes in any particular game, through my _entire_ playthroughs. And usually, the crashes are hardware bugs. I haven't seen a Firefox crash in ... months.
This is leaps and bounds better. Our solutions worked.
The caveat, of course, is that these new tools use more memory and more CPU. They have to. But they solved the problem they were built to solve.
2) In the "good old days" software was bespoke. It was sold strictly B2B. For a good long while after that it remained a niche profession. Does no one remember just how expensive software and hardware used to be? And people scoff at $600 phones...
But software exploded. Now everyone has a computer and software is as ubiquitous as water.
With that explosion came two things. Software got cheaper. A _lot_ cheaper. And software filled every niche imaginable.
When software was bespoke, you could get the best of the best to work on it. Picassos and Platos. But those days are long gone. Picasso isn't going to make Snapchat clones.
We needed a way to allow mere mortals to develop software. So we created solutions: Java, JavaScript, Python, .NET, Ruby, etc. They all sought to make programming easier and broaden the pool of people capable of writing software.
And just like before, these tools worked. Software is cheap and plentiful.
We can bemoan the fact that Slack isn't a work of Picasso. But who wants to pay $1 million per seat for Slack? Instead, Slack is free in exchange for the sacrifice of 4GB of RAM.
The lesson here is twofold. Software today is better than it ever was, and it will continue to get better. We've learned a lot and we've solved a lot of problems. Battery constraints are forcing the next evolution. I would never have dreamed of a world where my laptop lives for 10 hours off battery, but here we are. I can't wait to see what the next decade holds!
As mentioned elsewhere, web apps frequently "crash" (ie. fatal JS error) and have strange behavior.
And those bugs are often platform/browser/version-specific so very difficult to fix.
I still have the same experience with Win10. Hardly a week without a BSOD. On the same machine, Linux flies and flies.
> Even games have become more stable. ... I haven't seen a Firefox crash in ... months.
Yet Firefox and most AAA games are written in C++.
C++ is a very different language from what it was.
But it's buggy as anything. Selection is unreliable, redraws go wrong. It's lacking lots of tools (align, size group to smallest/largest etc) that are standard in most vector tools. It can't save undo history between sessions.
I guess that's better than my Omnigraffle workflow of 5 years ago? It's certainly cheaper.
Of course, I did not have much of a clue what I was doing back then.
Although some have said that Windows 2000 was more stable, I found the fusion of 95/98 into the NT kernel made it less so.
"The first 90% took 2 weeks to finish. The second 90% also took 2 weeks to finish (and now you're 99% done). The next 90% also takes two weeks to finish (99.9% done)..." Reapplied to another resource: "The first 90% consumes 1GB of RAM. To solve the next 90% of the problem takes 1GB of additional RAM..."
If you continue this trend, the problems solved in the incremental steps may be used fractionally less often, but they are probably also more complex and required greater resource investment. Our software does a lot more, but the later-developed parts are usually used less often and are more complex. Talking to the one and only ship headed to the moon when you don't particularly care who hears you is less difficult than securely purchasing things online over a WiFi connection. At the user-experience level it's just "thing A talks to thing B", but the latter case has also had to solve n-th 90% issues of congestion and security and handshakes and...
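Taken literally, this rule says each extra "nine" of the problem costs another fixed chunk of resources, so coverage grows only logarithmically in RAM. A quick sketch (the 1GB-per-step figure is the comment's hypothetical, not a measurement):

```python
def coverage_after(steps):
    """Fraction of the problem solved after `steps` increments,
    each solving 90% of whatever remains."""
    return 1 - 0.1 ** steps

def ram_needed(steps, gb_per_step=1):
    # Linear cost per step, per the comment's hypothetical.
    return steps * gb_per_step

# 1 GB -> 90% solved, 2 GB -> 99%, 3 GB -> 99.9%:
# linear resource growth, sharply diminishing returns.
```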
That being said, we rarely go back and see what in the earlier iterations are now based on false assumptions. So there probably is a fair amount of accumulated cruft with no clear detector for what is cruft and what is essential.
For those interested in exploring a minimalist approach, it's worth checking out http://suckless.org/philosophy and https://handmade.network/manifesto
"And somehow our software is still clunky and slow."
It is? I haven't really noticed. It seems to me that my 9 year old desktop still runs most modern software reasonably well. My new laptop runs it much more quickly than machines from 15-20 years ago.
Granted, in an abstract sense, CPU usage and memory consumption have grown a bit, but the actual user experience is better.
Also, it seems to me that optimisation on the web is done for speed at the expense of memory usage.
Put some control back in the user's hands, and curb run-away bad behaviour from the apps.
Mate, you picked /blogger/ as your preferred blogging platform. Blogger can't even deliver a page title without Javascript.
There's a whole world of much higher quality software out there, it may be that you've chosen not to use it.
This dude should try switching his blog to Pelican[1], it might be something of a revelation.
After Apple stopped supporting 32 bit x86 Macs years ago, I decided to put Windows 7 on my old 2006 era Core Duo 1.66ghz Mac Mini with 1.25GB of RAM. My parents still use it occasionally. It can still run an updated version of Chrome and Office - not at the same time of course - and it isn't painful.
My Plex Server is a 2008 era Core 2 Duo 2.66Ghz Dell business laptop with 4GB of RAM.
You might see individual applications do the same things, and consume more resources due to the layers and layers of abstractions, BUT be cheaper to build. As a consequence, many, many, many more applications being built, for cheaper, reaching more people, "eating the world".
He shouldn't ask "when" though but who made it go off rails (sic!) and why.
Even though features might be added in a linear fashion (and I think that's not true either - the teams that build large applications have grown too), the complexity of the whole system might scale as the square of the number of features, or exponentially. That is: if Word 2017 has 10 times the number of features of Word 6.0, we should not be surprised to see CPU and RAM requirements be 100 or 1000 times higher.
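The "square of the number of features" intuition usually comes from pairwise interactions: n features can interact in n(n-1)/2 ways, each a potential special case to handle. A quick check of the 10x example:

```python
def pairwise_interactions(n_features):
    # Every unordered pair of features is a potential interaction
    # (feature A must behave sensibly in the presence of feature B).
    return n_features * (n_features - 1) // 2

# 10 features  -> 45 pairs
# 100 features -> 4950 pairs: 10x the features, 110x the pairs,
# i.e. roughly the "100x" the parent comment suggests.
```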
Finally, just like a memory manager in an OS makes sure all memory is actually used, software should be using the computers' resources. If an average computer now is 4x3Ghz then a foreground application such as a photo editor should have features that at least some times puts that hardware to good use. Otherwise there was no point in having that hardware to begin with. As software developers we should aim to scale our software to use the available hardware. We should not just let twice as fast hardware run our software twice as fast.
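Putting idle cores to work, as suggested above, is often only a few lines with the standard library. A sketch of fanning a toy per-pixel transform out across processes (the "photo" here is just a list of ints, and the filter is invented for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def brighten(pixel):
    # Toy per-pixel transform standing in for a real photo filter.
    return min(pixel + 40, 255)

def brighten_image(pixels, workers=4):
    # Spread the work across `workers` processes, roughly one per core.
    # chunksize batches pixels to keep inter-process overhead low.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, pixels, chunksize=1024))

if __name__ == "__main__":
    # Guard is required so child processes don't re-run this block
    # on platforms that use the "spawn" start method.
    image = list(range(256)) * 100
    assert brighten_image(image) == [brighten(p) for p in image]
```

For a real filter the per-pixel work would dwarf the process overhead; for something this trivial, the sequential version may well win.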
I don't know whether the "square of the number of feature" claim is true, but we have certainly made our system huge for no reason—besides perhaps making it more convenient for programmers.
I go to http://artelabonline.com/home/ every once in a while—which will crash if many people click on this because there are only 128MB of RAM—which I built in 2009. With the worse PHP one can think of, no CDN, etc., it's lightning fast. Websites nowadays are over 10MB, and most of that crap are JavaScript libraries and frameworks. Most apps I use daily are Electron apps, which contain a whole copy of a browser even though I already have one installed, and routinely take up 700MB of RAM to show me an app that is actually a web page.
I believe that the problem is that programmers are doing things for themselves, and not users—which is eventually who will use the product. Electron is a great example. That, mixed with this idea that a little control panel for a client who has to check 10-20 orders for his store should be built with the same framework used by the most visited website in the world.
Well I was being perhaps a bit deliberately controversial
> for no reason—besides perhaps making it more convenient for programmers
That's a massive and excellent reason to make a system consume more resources. In fact I think it's probably the main reason programs do! If new feature X can be done in 1 man-week and consume Y resources, it's entirely possible that it can be done in such a way that it consumes just 1/4 of those resources. That might take 10 man-weeks (and/or much better devs). So you don't, because users generally aren't willing to pay for that; only a few very niche products do this (game engines, embedded, ...). Basically, the economics of adding feature X were such that if it couldn't use a huge amount of resources, the buyer couldn't afford it. So it uses a lot of resources, because the buyer wanted the feature.
> I believe that the problem is that programmers are doing things for themselves, and not users—which is eventually who will use the product. Electron is a great example.
This is partly true. Electron (and similar) is an excellent example of the economics above. I also can't believe how someone can write a chat client that uses 1GB of RAM in any universe. But the economics were such that JS developers, unfortunately, were easy to come by, and a browser engine with DOM (of all things) was the best way to get a cross-platform UI running with these developers. So the arrival of Slack was really just like any other feature. Someone wanted a cross-platform shiny group chat application, and they wanted it now and not in 10 years, and they wanted it to cost reasonably little. The answer to that, unfortunately, was "ok, but it'll cost you two CPU cores and a gig of RAM".
Was it just for the developers? Well, partly. But indirectly it's for the users, who weren't going to PAY for C++ devs to write slick and lean native versions of this software.
Bottom line: every user has a limited amount of money and a limited amount of computer resources. When given a choice, my experience is that users are much more willing to pay with more resources and less money, than vice versa. The important thing to remember is that the two are connected - a program that takes less resources is more expensive.
The unfortunate thing is that Blogger started out being fairly light and clean -- at least it was back when I migrated to it.
At least static site generators are coming back into fashion. I've always considered them a better solution for media/news/blog type sites.
The browser is the big one, you're running an operating system inside of another.
The IDE not being able to target a true cross platform binary has hampered us.
The mouse has made developers lazy and let them put interfaces in clunky places.
(contradicting myself) Not using the mouse for coding anymore (see the downfall of the IDE)
And OS and browser vendors should have allowed binaries to run at a higher ring level, i.e. run something similar to DOS inside a browser. I would much rather cross-compile to the major architectures than code in HTML/CSS/JS.
Take this laptop: spinning platter drive and 4GB of RAM. I'd be happy if it could do more with the same hardware, and I think that's the core of what we're talking about.
The StarCraft analogy is far removed from real-life user and business cases, all of which pay for the hardware with real money and time.
//The Engineer
It would be great to walk the walk when you post about this kind of topic, by using a simple HTML web page with just the images you need to aid in presenting your argument.
Though I get to produce programs when it makes sense, I spend a lot more time writing prose and communicating with others inside and outside the immediate engineering team in which I work. I also spend a lot of time chasing down problems in existing software; each new failure provides an opportunity to improve our overall practice. Mistakes can and will always happen; negligence must not.
To suggest that there isn't, or cannot be software engineering is maddeningly self-defeating.
Source: half-assed personal opinion based on 10+ years' experience