Of course, if you're touching the same code over and over again, that's probably a sign you're not solving the problem well. Solving problems with such finality that they don't have to be reopened again and again is the aim of good software engineering. That is true.
But to get there, you need to periodically add a major new capability, and thus a major new assumption, to your architecture. If you're not making these kinds of sweeping changes every now and then, it's almost certain that you're not actually doing the deep refactors that resolve longstanding challenges in lower-energy ways.
Resistance to these kinds of sweeping changes amongst professional developers and their ambassadors at major platform companies is the reason why Windows, iOS, Oracle, and the like have stable interfaces and smaller audiences. On the web you don't need permission to write a whole new JavaScript framework, so people can take those first big steps, even though they are quite painful for the developer.
Lisp I Programmer's Manual, 1960, Page 67
> ... that immediately after all the triplets have been
> evaluated the state of the memory as it stands is read out
> onto tape 8 as the new "base" image for the memory. ...
http://history.siam.org/sup/Fox_1960_LISP.pdf
https://news.ycombinator.com/item?id=13076098
I quote it in full, here:
OK, if you promise to stay off my lawn, I'll explain the history behind undump. Back in the 70's, the big CS departments typically had DEC 36-bit mainframes (PDP-10, PDP-20) running the Tops10/Tops20/Tenex/Waits/Sail family of operating systems. These are the machines Knuth used to write all of TeX, McCarthy LISP, and Stallman and Steele EMACS. Not Unix; Linus hadn't touched a computer yet.
Executable program files were not much more than memory images; to run a program, the OS pretty much just mapped the executable image into your address space and jumped to the start. But when the program stopped, your entire state was still there, sitting in your address space. If the program had stopped due to a crash of some sort, or if it had been in an infinite loop and you had hit control-C to interrupt it, the program was still sitting there, even though you were staring at the command prompt. And the OS had a basic debugging capability built-in, so you could simply start snooping around at the memory state of the halted program. You could continue a suspended program, or you could even restart it without the OS having to reload it from disk. It was kind of a work-space model.
Translating into Linux-ish, it's as if you always used control-Z instead of control-C, and the exit() system call also behaved like control-Z; and gdb was a builtin function of the shell that you could invoke no matter how your program happened to have been paused, and it worked on the current paused process rather than a core file (which didn't exist).
The OS also had a built-in command to allow you to SAVE the current memory image back into a new executable file. There wasn't much to this command, either, since executables weren't much more than a memory image to begin with. So, the equivalent of dump/undump was really just built into the OS, and wasn't considered any big deal or super-special feature. Of course, all language runtimes knew all about this, so they were always written to understand as a matter of course that they had to be able to deal with it properly. It pretty much came naturally if you were used to that environment, and wasn't a burden.
Thus, when TeX (and I presume the various Lisp and Emacs and etc. that were birthed on these machines) were designed, it was completely expected that they'd work this way. Cycles were expensive, as was IO; so in TeX's case, for example, it took many seconds to read in the basic macro package and standard set of font metric files and to preprocess the hyphenation patterns into their data structure. By doing a SAVE of the resulting preloaded executable once during installation, everyone then saved these many seconds each time they ran TeX. But when TeX was ported over to Unix (and then Linux), it came as a bit of a surprise that the model was different, and that there was no convenient, predefined way to get this functionality, and that the runtimes weren't typically set up to make it easy to do. The undump stuff was created to deal with it, but it was never pretty, since it was bolted on. And many of us from those days wonder why there's still no good solution in the *nix world when there are still plenty of programs that take too damn long to start up.
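The "translating into Linux-ish" analogy in the quote above can be made concrete with Unix job control. Here's a minimal Python sketch (the function name and the use of `sleep` as a stand-in program are mine, purely illustrative): a SIGSTOP-ed process keeps its entire memory image resident, you could attach `gdb -p <pid>` to the live, paused state, and SIGCONT resumes it without any reload from disk.

```python
import signal
import subprocess
import time

def stopped_state(argv):
    """Start argv, SIGSTOP it, and return its `ps` state field.

    A stopped process shows state 'T': its whole memory image is still
    resident, and you could attach `gdb -p <pid>` to poke at it live,
    then resume it with SIGCONT -- no reload from disk.
    """
    proc = subprocess.Popen(argv)
    proc.send_signal(signal.SIGSTOP)      # like ^Z: frozen but resident
    time.sleep(0.1)                       # give the kernel a moment to stop it
    state = subprocess.run(
        ["ps", "-o", "stat=", "-p", str(proc.pid)],
        capture_output=True, text=True,
    ).stdout.strip()
    proc.send_signal(signal.SIGCONT)      # resumes exactly where it stopped
    proc.terminate()                      # clean up the demo process
    proc.wait()
    return state
```

The gap the quote describes remains: job control preserves a paused process, but nothing standard turns that paused image back into an executable the way SAVE/undump did.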
vi had a much simpler structure and much more limited feature set. vim is obviously a much more capable/flexible editor and has scripting support, but is still predominantly written in C.
https://itunes.apple.com/ca/podcast/the-changelog/id34162326...
Compiling takes a few more seconds to page in. Ssh, a few more. Everything on my laptop slows to a crawl as they fight for RAM, with Atom taking up way more than it should.
I know the answer from the Atom people, "buy more RAM, it's cheap", but then my browser people tell me the same thing. So do my interface people, and my kernel people, and by the time I say "okay" to all of them, I'm out of RAM again.
Applications need to learn they aren't the only thing running. For some reason, my machine seems to be getting slower and slower no matter how much I upgrade.
Same here. Start-up time is important when the average user is hitting a web app or application but for developers? We open something once and then keep it open pretty much all day.
Unless I'm an edge case, I'd suspect start-up time is mostly meaningless to developers with long-running tools.
At the same time, we always have tons of tools open at once, so we need as much memory as possible, because once things hit swap, performance degrades terribly.
I probably spend more time hitting backspace when typing 'atom' than they saved in startup time.
I'll take the license.
I've been feeling this way for years, and I've mostly ascribed it to my faulty perception. I figured that if things simply aren't getting faster (i.e., they're not changing at all), I'm probably just imagining them getting slower.
But then I start wondering why aren't things getting faster...
I'd say buy some more RAM, it IS cheap and you never have too much.
What a compliment to the VSCode team that after all this work, Atom still doesn't seem to match its startup time or, more importantly, its perceived performance while editing.
It's an interesting comparison, since both teams are bound by similar constraints and are building cross-platform apps.
In the bad old days I once worked for a MS competitor where there were often complaints of unfair competition. Most often around how knowledge of closed sourced OS internals allowed optimization insights unavailable to others.
Not all MS devs are great for sure, but I'm inferring two things here. The VSC team is pretty damn good, and that IP and institutional knowledge from decades of investment in dev tools probably helps a bit.
Speed is only one of the issues; I'm actually quite happy with Atom in that regard. VSCode has a much more restricted API and a more robust extension system. With Atom, I always felt like extensions started interfering with each other, and with the 'vanilla' experience. OTOH, I was exploring a few ideas, such as inline rendering of comments in Markdown, and that's really only possible in Atom.
In other words, unlike the Atom team, VSCode didn't have to re-make mistakes Microsoft learned from 15 years ago...
And their change logs are great, thankfully.
I tend to work chaotically. I dive into a project and tackle whatever the problem of the day is. The result is a sprawling mess of reference code pulled up in different editor windows, documentation and google results spewing out over 4 browser windows and 30 tabs, and just as many terminals managing VCS, compilation, tests, etc.
It's not that I'm a messy coder or anything; it's just that when I'm focused on a problem then my concern is about that problem, not about the growing heap of reference material. The problem is particularly pronounced when working on web applications, where I have to handle multiple code bases at once.
Once I'm done, I'll become horrified by the state of my desktop and proceed to close everything.
The next coding session starts fresh.
So startup time is actually important to me. I have to say I'm annoyed by VSCode's startup time. It starts up to a state where I can hit the menu and start opening things very quickly, but isn't completely finished for another couple seconds. Atom's in a very similar boat.
I'm glad to see progress being made here.
For anyone curious, I made a quick gif comparing startup times on my machine for Sublime Text 3 (Build 3129), Atom (1.16.0), Atom Beta (1.17.0-beta2, the one mentioned here), VSCode (1.11.2), and VSCode Insiders (1.12)
https://media.giphy.com/media/3ohzdTHkfj5ISAAPq8/source.gif
I should mention: my ST3 is heavily customized (28 plugins), while Atom and VSCode are completely stock.
Just a lot "snappier". Startup is not much of a problem for me, since I don't open ordinary files in VSCode or Atom by default (I use gedit for that).
Note that this hasn't shipped on stable yet, but is available on 1.17 beta.
The article mostly lists the various problems and associated optimizations. Concise & nice read.
Work from the CRIU crew started getting upstreamed almost exactly four years ago, breaking some initial resistance to the tech needed for CRIU:
https://mobile.twitter.com/__criu__/status/58727373960931328...
https://criu.org/History
The difference in pitch is interesting: CRIU is a Swiss Army knife of a tool, whereas Snappy Start and V8 snapshots are targeted and marketed largely at fast "initialization" concerns.
The Emacs dumper dispute https://lwn.net/Articles/707615/
There are a few more issues; Emanuel Quimper summarized them in https://equimper.github.io/2017/02/25/why-i-moved-away-from-..., explaining in detail why he moved from Atom to VSCode.
A month ago there was an interesting submission by Tristan Hume in favor of Sublime Text 3, mainly because of its incredible responsiveness, comparing Vim, Spacemacs, Atom, and Sublime Text: https://news.ycombinator.com/item?id=13928752 I highly recommend it.
My workflow now looks like this: VSCode+Plugins replaces my zsh+tmux+vim toolchain when running on AC. On battery, zsh+tmux+vim provides the VSCode+Plugins functionality with less beautiful gfx but unmatched battery life.
The zsh+tmux+vim toolchain is heavily customized, though: https://github.com/rscircus/dotfiles
90% of the "innovation" in application programming is just an exercise in combinatorial virtualization.
(That said, it's not all bad. The browser has had a remarkable and wonderful effect on GUI application architecture that probably wouldn't have happened elsewhere.)
For example, starting Atom cold on my medium-sized project (by running `atom .` in the base dir) takes 6 seconds. If I close all windows but leave Atom running, do something else for a while, and then run that command again, it takes 3 seconds.
(That's still a disturbingly long time... but it's quick enough once launched. I like Atom enough that I can live with it for now.)
Edit: just tried the beta. It's a little quicker: 4.5ish seconds and 2 seconds. Still OK for long-term coding but too slow to be $EDITOR for things like `git commit`.
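Rough numbers like the ones above can be collected with a small script. A sketch, with the caveat that timing a flag like `--version` (an assumption on my part; check each editor's CLI) only measures process spin-up, not time until the window is usable:

```python
import statistics
import subprocess
import time

def median_startup(argv, runs=5):
    """Median wall-clock time, in seconds, for argv to run to completion.

    For editors, pass a flag that makes them exit immediately (e.g.
    `--version`); that gives a crude proxy for cold-start cost, not the
    full time until the editor is ready for input.
    """
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(argv, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=True)
        samples.append(time.monotonic() - start)
    return statistics.median(samples)

# e.g. compare median_startup(["atom", "--version"])
#           vs. median_startup(["subl", "--version"])
```

Taking the median over several runs smooths out page-cache effects like the warm-start difference described above.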
V8 Caching Mode, V8 caching strategy for CacheStorage
https://v8project.blogspot.ca/2015/07/code-caching.html
Edit: Not the same as snapshots mentioned in the article
Most of the time is spent doing fine collisions with shapes coming out of an R-tree to build up a connection graph.
For the vast majority of designs, it's fast. For very dense, imported designs, it can get slow. (No two tools keep data in the same form, so translating leads to inefficiencies.)
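The broad-phase/narrow-phase pattern described above can be sketched without a real R-tree; a plain bounding-box prefilter stands in for the tree here (an R-tree would answer the same overlap queries in logarithmic time), and every name and the trivial "fine" test below are illustrative, not from any actual tool:

```python
from itertools import combinations

def bbox(poly):
    """Axis-aligned bounding box of a polygon given as (x, y) vertices."""
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return (min(xs), min(ys), max(xs), max(ys))

def boxes_overlap(a, b):
    """Cheap broad-phase test: do two bounding boxes intersect?"""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def fine_collision(p, q):
    """Stand-in for the expensive exact shape test; here: any shared vertex."""
    return bool(set(p) & set(q))

def connection_graph(shapes):
    """Build an adjacency map of shapes that actually touch.

    The broad phase prunes pairs via bounding boxes; the fine (exact)
    collision test runs only on the survivors, which is why dense
    designs -- where few pairs get pruned -- slow down.
    """
    boxes = {name: bbox(poly) for name, poly in shapes.items()}
    graph = {name: set() for name in shapes}
    for a, b in combinations(shapes, 2):
        if boxes_overlap(boxes[a], boxes[b]) and fine_collision(shapes[a], shapes[b]):
            graph[a].add(b)
            graph[b].add(a)
    return graph
```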
Maybe now we can have actual source code protection?
That's what it really comes down to. From what I can see, it's mostly an emotional response. I'd be surprised if code/binary obfuscation is a net win in general.
I've had to disabuse developers of this idea. One distributed binaries that weren't quite valid but ran on the CLR, though not on Mono. A 10-line script was enough to remove the invalid sequences. What did the developer gain in this case? An extra build step, undoubtedly more than one bug, and in the end, no "protection".
Better than relying on memory snapshots.
Really I just want my licensing code to not be in a text file so people can't just load up notepad to evade it.
ls ~/.atom/packages | wc -l # minus 1 for the README
Have you checked Timecop? (cmd-shift-p, "timecop") Might be a few you can do without. Cool article regardless.
If many people believe this is the case, why not? All articles get some common types of responses based on the topic, this is just one topic/response combo that you happen to disagree with.
>It's a trade off: performance for a cross platform JavaScript app development.
It's an unnecessary tradeoff.
If a single developer can create Sublime Text from scratch for Windows, OS X, and Linux, then surely GitHub or Microsoft (for VSCode) could create a cross-platform set of native UI components in C or C++, wrap them, and have the rest of the development (plugins, etc.) happen in JS (to keep the familiar language, easy access to npm modules, etc.).
This seems destined to go the way of emacs. This is always what happens when an idealistic perspective wins out over a practical one in a development team.