That "killer" course had a semester-long, multi-phase project that was a kind of symbolic calculator (with sets and operations on them). It was maybe a couple hundred lines of LISP . . . so I wrote a LISP interpreter in Pascal (with a garbage collector, natch), then wrote the meat of the project in LISP, embedded in the Pascal source. Was the only person in a class of 300 or so who completed the project that semester, but the professors were not amused with the approach I took. Can't imagine why :-)
I've never used a functional language in production, but I've stolen LISP techniques and used them in quite a few products. I never felt frustrated by the fact that I couldn't use LISP or Scheme or whatever in <product> and had to use C/C++/Pascal/Java/C# instead, but have seen it happen in other engineers (notably when the Apple Newton switched from Dylan -- an object-oriented variant of Scheme -- to C++; there were a bunch of forlorn looking ex-Scheme hackers wandering the hallways, clutching copies of the C++ Annotated Reference Manual and trying not to cry).
Those were the days.
Lisp can't help you if you're too smug for your own good.
Disclaimer: I'm a Lisper.
You can love Lisp, Smalltalk, and all those beautiful languages, but you should never stick to a single language. NEVER. Go to another, look at its strengths, and if it's a bit weak in some areas, try to use the nice techniques you learned elsewhere to make it better.
You can use different languages to solve any problem. Of course it would be nice to use the nicest languages, but in some contexts they're not the right tool, and in others you have to consider outside factors, like how many people will maintain that software. Do all of them know how to use the powers of those nice languages? Also, do you think it would be easier to rotate people onto that project with those languages rather than with more common ones?
Everyone can learn to use some tool, but experience using others may help you to discover better ways to use new tools.
....I hope to one day have my career destroyed as badly as this.
The feeling I got when learning scheme was that of liberation. People have that epiphany all the time, but with different languages and techniques. For me it was scheme.
And of course, I could never be bothered with syntax. If something becomes too verbose in scheme, I write a macro to simplify it. The only thing I need to know is whether something is a function or a macro. After 3 years of writing python on and off, I still manage to mess up syntax unless I think very hard before writing.
I am however not a programmer, and of average intelligence. In fact I am on the very top of the bell curve, looking down on the rest of you.
List comprehensions, to me, are what makes Python a productive (or the most productive) prototyping language. It allows me to think and program in mathematical relations without much fuss. Add to that nice sets and dicts (needed for asymptotic efficiency), and I can easily forgive it that it is built on an everything-is-an-object paradigm (there are few things I detest more than OOP).
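As an illustration of that style (the data here is invented), dict and set comprehensions let you express relational queries almost directly as set algebra:

```python
# Who shares courses with "alice"? Set intersection inside a dict comprehension.
enrolled = {
    "alice": {"math", "physics"},
    "bob":   {"math", "chemistry"},
    "carol": {"physics", "biology"},
}
shared = {s: courses & enrolled["alice"]
          for s, courses in enrolled.items() if s != "alice"}
# shared == {"bob": {"math"}, "carol": {"physics"}}
```

The sets give you asymptotically efficient membership and intersection, and the comprehension reads like the mathematical relation it encodes.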
I've never truly understood why you need the meta-capabilities of LISP. I much more need a syntax that does not hide what happens (procedure call, list indexing or at least "indexing"). Macros amount to code generation. Code generation is mostly bad, it typically means the problem wasn't thought through, and that there is a lack of clearly defined building blocks. So far I've only ever generated a few C structs and enums, but I'm not sure it was an entirely good idea. It was super easy to generate from Python, anyway.
* Multi-paradigm programming (functional programming in the immutable sense is not dominant, the Lisp OOP system is top class, mutable state can be everywhere)
* Rich looping mechanisms that don't require tail call contortions
* Explicit typing that leads to optimized code and compile time warnings! (https://news.ycombinator.com/item?id=13389287)
* Warnings at compile time (not run time!) about things like undefined functions/vars/wrong args/unused vars (https://news.ycombinator.com/item?id=14780381)
* A pretty good set of libraries et al that (once you set up quicklisp) are just a function call away from trying out (https://notabug.org/CodyReichert/awesome-cl)
* (edit: one more since I like it a lot) Out of the box the Lisp system contains features you have to get from IDEs in other languages like breakpoints, tracing, inspection, code location questions like "who calls foo"... of course working with that system via emacs or something is nicer but it's basically all there in the base system (http://malisper.me/debugging-lisp-part-1-recompilation/)
"Lisp" has two meanings: (1) ANSI Common Lisp; and (2) the family of programming languages to which ANSI Common Lisp belongs, and which was called "Lisp" many years before ANSI Common Lisp was conceived; that family includes many other languages with quite a bit of variety, including Scheme, Clojure, INTERLISP, T, kernel, picolisp, *Lisp, (the early versions of) Dylan, and many others.
Paul Graham and Robert Tappan Morris, two of the founders of YCombinator, love Lisp/Scheme. They got rich by selling their company Viaweb to Yahoo. You guessed it: Viaweb was written in LISP and Paul Graham believes that this was their secret weapon:
> http://www.paulgraham.com/avg.html
Also the software that drives Hacker News is written in Arc, a Lisp dialect that was invented by Paul Graham.
So Hacker News and YCombinator are indeed at least historically very attached to Lisp/Scheme.
EDIT: From the years being here, and having played with LISP myself, it's a language that appeals to a certain type of programmer, who claim to find it very productive compared to other languages.
And everyone else hates it.
So if you pick a Lisp language, you're severely limiting your hiring pool.
One is macros. Being able to transform code before it is run using the language's built-in data structures provides a solution when the language just doesn't have the abstraction you need. Used properly, this can be invaluable. Used improperly, of course, it can make a mess.
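For comparison, here is a rough Python analogue of that code-as-data transformation, using the standard `ast` module — a sketch only, and nowhere near as graceful as a Lisp macro operating on s-expressions:

```python
import ast

# Rewrite code before it runs: double every integer literal in the source.
class DoubleInts(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

tree = ast.parse("result = 1 + 2")
tree = ast.fix_missing_locations(DoubleInts().visit(tree))
ns = {}
exec(compile(tree, "<transformed>", "exec"), ns)
ns["result"]  # 6, because 1 -> 2 and 2 -> 4
```

In Lisp the transformed program and the transformer share one notation (lists), which is exactly the graceful part Python's `ast` machinery lacks.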
The other is REPL-driven development. Most languages have a REPL, but they don't really embrace it the way Lisp does. I find this frustrating. Why is my editor on my PC not talking to the app running on my Android phone and letting me see its state and make changes in real time?
All my opinion, of course: The time (cost) required to create such easily available introspection is too high for the comparatively small gains. REPLs in modern languages (when even implemented) are just so often a completely separate mode of operation.
The fundamental structures underlying Lisp are quite solid. I like the distinction Phil Wadler makes: Lisp was discovered while languages like Python were invented. Everything in Lisp fits well together. Other languages in this vein are Haskell, SML, OCaml, etc. You can spot these languages because all of their features are usually built from the primitives of the language. An invented language forces the inventor to try and remember how a new feature may interact with all of the others... and sometimes it doesn't work out.
Also Perl 6 is written in itself.
They still haven't caught up. I haven't found any other language that gives at least 3 of these features:
- Fully interactive development -- I can recompile/redefine a function while the code is running
- Fully interactive OOP development -- I can recompile/redefine a class while the code is running
- OOP system based on multiple dispatch/multimethods
- true metaprogramming done using the same language's syntax, not a special, cumbersome lib.
- execution speed on par with the JVM and sometimes approaching C speed.
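On the multiple-dispatch point: here's a toy dispatcher in Python sketching what CLOS generic functions give you natively (all names invented; Python's built-in `functools.singledispatch` dispatches on the first argument only):

```python
# A minimal multimethod table keyed on the types of ALL arguments.
_methods = {}

def defmethod(*types):
    """Register the decorated function under the given argument types."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    # Dispatch on the runtime types of both arguments.
    return _methods[(type(a), type(b))](a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Ship, Asteroid)
def _(ship, rock):
    return "ship crashes into asteroid"

@defmethod(Ship, Ship)
def _(a, b):
    return "ships bounce off each other"

collide(Ship(), Asteroid())  # "ship crashes into asteroid"
```

CLOS also handles inheritance, method combination, and redefinition of live classes, none of which this toy attempts.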
Reaching for a "safe and familiar" procedural/imperative language is like most people reaching for a claw hammer when they wish to hammer and nail something in. To most people, it's the same, they are just hitting something.
To a pro, they know when to reach for a claw hammer, a sledgehammer, a mallet, a ball-peen, a club hammer, or a cross or straight peen.
If you have a text file that you just need to slice and dice, the tool to use is awk/sed/tr/cut, or Perl. Got rules? Prolog. Got massive array data? APL. Got multiple massive array data sets that you wish to combine and separate in various ways? SQL. Surely, you can use Python for all those things. But should you?
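For instance, a typical slice-and-dice of comma-separated text is a one-liner in awk (the data here is made up):

```shell
# Print the first field of every record whose second field exceeds 26.
printf 'alice,30\nbob,25\n' | awk -F, '$2 > 26 { print $1 }'
# prints: alice
```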
Arguably, yes, most especially when several of them apply to the same process, rather than to separate pieces of data.
Lisp is quite practical for solving day-to-day problems, as long as you are a knowledgeable Lisper.
I don't believe PG's claim that Lisp made a significant difference in productivity for them.
Just consider that, while being a very beautiful language, Scheme is different from Common Lisp.
If you are faced with doing a production system, you'll feel the benefits of Common Lisp, particularly now in 2018 where the tooling and libs are much better.
Scheme can do everything that CL can do IF you add a ton of extensions and libraries and stick to a particular Scheme implementation. So you would have a "custom" platform. Whereas with plain-vanilla Common Lisp you would have all these features in a totally standardized way.
At ILC 2002 former Lisp giant now Python advocate Peter Norvig was for some reason allowed to give the keynote address like Martin Luther leading Easter Sunday mass at the Vatican and pitching Protestantism because in his talk Peter bravely repeated his claim that Python is a Lisp.
When he finished Peter took questions and to my surprise called first on the rumpled old guy who had wandered in just before the talk began and eased himself into a chair just across the aisle from me and a few rows up.
This guy had wild white hair and a scraggly white beard and looked hopelessly lost as if he had gotten separated from the tour group and wandered in mostly to rest his feet and just a little to see what we were all up to. My first thought was that he would be terribly disappointed by our bizarre topic and my second thought was that he would be about the right age, Stanford is just down the road, I think he is still at Stanford -- could it be?
"Yes, John?" Peter said.
I won't pretend to remember Lisp inventor John McCarthy's exact words which is odd because there were only about ten but he simply asked if Python could gracefully manipulate Python code as data.
"No, John, it can't," said Peter and nothing more, graciously assenting to the professor's critique, and McCarthy said no more though Peter waited a moment to see if he would and in the silence a thousand words were said.
0: http://smuglispweeny.blogspot.com/2008/02/ooh-ooh-my-turn-wh...
If you also look at the pseudocode or the Python code, it's not very Lispy. Python is more or less used at the level of an object-oriented BASIC.
Ultimately, what I really love are expression oriented languages, and the ALGOL/C and APL families have alternatives with a more succinct and expressive syntax (Perl, K).
One might also argue that Scheme is a poor man's Forth.
If you work in a complex or rapidly evolving domain, I cannot think of a better choice than Lisp. If you are doing a lot of text processing or writing CRUD apps, then you will probably not get much benefit.
or if you are creating a big system that needs to work reliably and be able to be corrected (patched) while it's operating
If we assume most of us don't really know what we're doing, that totally explains language preferences. We don't choose the best tools, we choose the best tooling. And best tooling is the one which we can comfortably fit in our minds, so advanced concepts are mostly ignored and the hype wheel constantly turns. To put it shortly - smart people write better and better tools, so the rest can handle bigger and bigger projects writing mediocre code.
Of course, as every programmer, I live in constant fear that I am part of the plumbing crowd waiting to be exposed.
You can get an awful lot done by "plumbing". Entire businesses like SAP are built on it. It can also be mission critical; at SpaceX, is the literal plumbing of hydraulic fluid and fuel flow unimportant? No.
I'm trying to understand the industry, as it appears (at least to me) to be different from what I thought was true. I believe this is important if we're going to do better, and there are plenty of metrics showing we should do better (percent of projects failing, percent of projects exceeding budget and time, percent of projects becoming unmaintainable).
If you could prove that only a handful of people are capable of actually developing software projects past the stage of 'piggy-backing' on libraries, that would probably distinctly change the way we develop software. Maybe we could prevent death marches better. Maybe we could improve our working environments so nobody has to crunch or have a depressing spaghetti-code maintenance job.
It doesn't mean in any way that 'plumbers' should/would be treated worse. If anything, I would expect the opposite.
Is the distinction between an aerospace engineer and aircraft mechanic "elitist"? Which would you prefer to have designed the next aircraft you fly in?
Plumbing is really what the vast majority of us do. With varying levels of skill, we glue various pre-written libraries and packages together with a bit of business logic, solving problems that a million others have solved before.
Don't get me wrong - I run a very small programming shop and I make judgment calls every day about whether to borrow or build. My operating system, hardware drivers, compiler, dependency manager, email server (etc.) I borrow these because it seems obviously practical and I have an appreciation for the complexity underneath (although I have some unkind things to say about hosting tiny apps on full-blown linux virts, the waste is unbelievable). I use Unity for client side development for games, which is probably the decision I'm least happy about, but I simply don't have the bandwidth to deal with the monstrous complexity of client-side programming (especially in a gaming context).
Frameworks are generally bloated monstrosities that conceal performance problems behind walls of configuration and impenetrable cruft that has developed over decades of trying to make the "out of the box" experience configurable while pleasing myriad experts. They do more than one thing relatively badly, and the engineers who work with them often haven't developed the ability to deep dive into them to solve real scaling problems.
You don't get simplicity, you never get zero-touch, and your learning when working with a framework often doesn't generalize, so you're basically renting a knowledge black-hole rather than paying down a mortgage on a knowledge library.
Anyway, that's my two cents on why I think having solid fundamentals is important, at least in my line of work.
Maintaining distinctions like this is necessary to have the right people do the right jobs. Some developers like being puzzled by hard problems and will get bored writing adapters for Java classes or connecting A to B in some set of Javascript frameworks. Others are motivated by seeing their high level design realized and don't like having to give too much thought about the lower abstraction layers. Giving these people the wrong jobs is a waste of time and money.
I don't think it is. Someone that is slapping together libraries from npm, but has no idea how to debug with gdb or use strace/ktrace/dtrace etc to diagnose problems with the resulting system, or does not have the skills to fix bugs or add new features to the "plumbing" - that person is not an actual software developer. There is a huge gap in skills, knowledge, and as a consequence the capabilities of these two camps of people.
I think he meant fiddling with pre-made solutions hoping they will work, instead of having principled ways to craft things.
If SpaceX were to "plumb" as he meant it, they would use off-the-shelf modules and ideas for quick results. Instead they actually designed their rockets mostly from scratch (very rare in the space industry, where it's often a rule not to stray from what's proven to work).
I know I'm supposed to think that that is bad, but I can't come up with why it would be.
> Entire businesses like SAP
Companies like SAP mostly waste people’s money by extracting huge sums from municipalities.
And it seems ridiculous at times to even throw around the title of “Software Engineer” when the field has no standards of certification or regulation like other engineers. The only distinction from the programmer and engineer is the engineer makes architectural decisions, and the larger the scale the more accurate the title. “Plumbers” are cheap and no one cares if you fire them.
Perhaps, but you are the person who prepended 'mere' in front of plumbing.
You are correct - however, it doesn't mean that this statement is necessarily wrong.
Usually when I see a statement that reeks of elitism, I immediately assume a lower probability of it being true, because elitism in a statement correlates with falsehood -- but it's worth remembering that this correlation is not absolute, and sometimes (although rarely), elitism is indeed deserved and true. And in this particular case, in my personal professional experience, the elitism is absolutely deserved. There are lots and lots of software engineers out there who can't or just won't fit more complex ideas into their minds.
And yet, you are of course, correct: these developers can, and will, do good work. They can have different skills, like knowing the users, or creating beautiful animations, or having a great game design sense. These developers don't need to be looked down upon. And yet it would be very useful for all of us to acknowledge that these developers think in a different way and require different tools - they're not "worse" than "real" engineers, but they are different.
However, the market doesn't care about that. The market prefers short-term gains over long-term gains. Perhaps we can blame Wall Street? Due to the demand for short-term gains, the ask of most developers is "how fast can you build this", not "how can you build this to be most efficient and cheapest over the lifetime of the product". To move quickly, developers must then employ abstractions layered upon abstractions.
The short term winners get lots of money for demonstrating wins quickly. The losers conform to keep up.
I agree with this. For example, Sun was sold for $5.6B in 2009. [1] While Skype was sold for $8.5B in 2011 [2].
Sun had Solaris, Java, SPARC, and MySQL. Skype was a chat tool.
Even today many popular databases find it hard to get billion-dollar valuations, while multiple social companies have done it.
The market doesn't care about core CS. It cares about monetary gains.
[1] https://en.wikipedia.org/wiki/Sun_acquisition_by_Oracle [2] https://en.wikipedia.org/wiki/Skype
Most jobs in software development are about creating business value. Either as an end-user product that your customers are going to use, or as internal tooling that will help the business have more access to data, streamline processes, increase efficiency, etc.
This "no true Scotsman" approach to software development is actually quite funny after a while of being a professional. It's a huge industry; there are terrible companies, terrible managers, terrible product managers, etc., but there are also great ones. You can go work for the good ones if your current gig is mostly being pushed around by unrealistic expectations from your stakeholders.
The demand is not for short-term gains; how can you justify that it's going to take a year to build your perfectly architected software if the business really needs it done in 3 months, and you assess that it's quite doable if you decide on some constraints? Your job as a professional engineer is to find ways to do your work as well as possible given constraints, to design something that can be improved over time, to communicate with stakeholders and, given your area of expertise (software), give valuable input to business decisions so they can be as good as possible at that moment.
Seeing software engineering as some grandiose goal by itself is quite wrong, software exists MOSTLY to fulfil business purposes.
It's not about "conforming"; there is software that is fun to work on, that poses intellectual challenges by itself, but that really has no way to be justified on a business level.
This defensiveness against "business" is part of a mindset that should be broken among engineers, we should embrace we are part of a much larger process, not that we are the golden nugget and the top of the crop at a company. Our work is to enable others' work.
I couldn't answer this question accurately. I can't even give remotely accurate time estimates for projects larger than "build a CLI tool to do this one thing," much less give an informed estimate about the tool's TCO. I feel like I'm just floating down the river, incrementally improving upon the stuff that we've already built and that nothing we planned to accomplish ever is.
I'm sure lots of people here have executed their Grand Vision for a project. But I'm also certain many of us never have.
When he says "...knowing Lisp destroyed my programming career" he just means that at some point in time he switched to other things. There was no "destruction". It was "a pivot"-- to use HN lingo.
I think anyone who makes programming (let alone programming in a particular tool/language) the absolute focus of their career is in for a major disappointment. The OP was NOT crushed by his realization. He just moved on and appears to have been very successful regardless. Not a big deal unless one is obsessed with Lisp.
The sales pitch is clear: don't become a better programmer, get a better toolkit.
I have been quite fortunate to have come into computers before this cloud of marketing madness overtook us. I got to watch the layers roll out one-by-one.
Honestly I have no idea how I would learn programming if I had to do it again today. It's just too much of a byzantine mess. Hell if I would start with the top layers, though. I'd rather know how to write simple code in BASIC than have a grasp of setting up a CRUD website using WhizBang 4.7. When you learn programming, well, you should be learning how to program. Not how to breeze through stuff. Breezing through stuff is great when you already know what's going on -- but by the time you already know what's going on, it's unlikely you'll need the layer or framework. (Sadly, the most likely scenario is that you've invented yet another framework and are busy evangelizing it to others.)
This guy's story strikes me as poignant and indicative of where the industry is. They don't care if you can solve people's problems. They care if you're one of the cool kids. It's nerd signaling.
What is a "real programmer", anyway? Is it knowing how a CPU works? Managing memory? If you rely on the garbage collector, do you really know what you're doing? If you write a Rails app without fully understanding HTTP, are you just plumbing?
Does it matter?
The reason we build tools and abstractions is to allow us to accomplish higher-level tasks without worrying about lower-level ones. Does it help if you understand lower level ones? Sure. But for millions of apps and websites that aren't Facebook or Hacker News or Firefox, the only problems are high level.
It's the goal of software to provide more and more features. The problem is, we usually achieve those features by abusing abstractions we are using. Once the problem becomes apparent, somebody writes a library, which allows us to write a couple of times more crappy code before everything collapses. Rinse and repeat.
Since we are unable, at scale, to choose the right abstractions when necessary, we quite often get into situations where it's actually preferable to rewrite everything using lessons learned. Rewriting comes with its own set of new problems, and the cycle is complete.
As a result, we are fundamentally unable to reuse already-written code at scale, hampering development and keeping the whole industry in a constant early stage.
The business doesn't care about your toolset, probably. What they care about is solving a problem.
I've met no small number of very gifted, very creative developers that had this same mindset -- didn't give a crap about the Cool New Language or pure CS, but really DID get excited about building connections and features within existing systems to solve business problems.
These two visions of development are in tension. Few folks can get jobs writing Haskell or Lisp, but there are LOTS of jobs for .NET developers.
Neither path necessarily makes a better developer, though.
That said, I found something weird: sometimes, with a few adequate libraries (Guava, for instance), I enjoy doing some Java. It's verbose, way more than Lisp, Clojure, or Kotlin (not even mentioning Mr. Haskell, of course). But I find a little pleasure in writing code there. It's manual, it requires doing a lot of things by hand, and even though it's slower and more "work", it's another kind of stimulation, which I think is one large factor in people writing code in subpar languages. They're just happy doing things and solving things their own way.
Some say that Lisp and Haskell can't be mainstream because their power is best suited to complex problems, and I think that hints at my previous point. People who need more than the mainstream have the brainpower and desire to solve non-mainstream problems.
ps: about the plumbing thing, you might have heard that MIT switched to Python exactly for that reason. I was and still am stumped that the people who brought us SICP decided that plumbing was the way to go.
There is plenty of software that is not about taking pre-existing packages and gluing them together, but rather involves construction from the ground up. Building a CRUD app using an existing framework, and an existing REST API with little to no custom business logic, is plumbing.
Using sci-kit for your ML app is plumbing. Writing your own novel ML/deep learning algorithm, shoving the data in and out of GPUs is not plumbing.
Is doing front end not actual software development? What does that even mean?
Are you considered a real software engineer if you can write code in a certain language? .. Or does it mean you are really good at O(1) problems?
I think we should all agree that software development is a team effort, it requires vast knowledge and skills that different individuals bring to the table.
Doing CSS adds just as much value as writing the backend API.
Here is a good analogy. For an aircraft to function, pilots, aircraft mechanists, aerospace engineers all come together. Does that mean one is doing "plumbing"?
Certainly not, they all add value.
In today's environment, 'pure programming' is a small fraction of what needs to get done.
As a long-time programmer, the most leverage I have had is when my work connected to some business objective and produced a result. My engineering background, which emphasized a "problem-solving attitude" helps.
I think the average developer can do this too if they'd just get out of the programming echo chambers and trust their gut.
How true. 50% of the jobs I see are actually semi-skilled, overpaid jobs that won't last. In many cases you configure somebody else's product and manage it at best; it won't last. It is already happening.
Well, the cheapest, safest assumption is that your skill level is about average and the majority of the programmers you'll meet are going to have the same kind of skills as you.
Yes, well done abstractions are how we keep large projects manageable. No one can learn all of them at once. Neither those writing "plumbing" nor those doing libraries.
I didn't write Auster[0] for me, I wrote it because other people needed a tool and it fit the parameters. I'm not writing Modern[1] for me, I'm writing it because there's a hole in the ecosystem that somebody has to solve, and I'm a someone.
Probably the best advice is to learn as much as possible, but focus on basics. I feel people often mistake knowledge for skill, and that harms them in the long run.
If you learn a concept, you can use it in anything you create. If you learn a framework, you will have to learn a new one in 5 years. Focus on transferable skills.
Hunter-gatherers on capitalist democracy, Capitalist democracy on hunter-gatherers, China on India, India on China, US on India, India on Iran, Iran on Israel...
You get the picture.
Perhaps what it really means is "communications failure: exception thrown in cultural assumptions".
Now I use Lisp nearly everywhere in a form of a small .NET runtime module.
Lisp is excellent at templating tasks. Just for comparison: StringTemplate for .NET is a whopping 400 kB of compiled binary code while my implementation of Lisp is just 35 kB (!). Sure enough, Lisp does the very same thing as StringTemplate, but in just 1/10 of the code. There is more: Lisp is Turing-complete while StringTemplate isn't. I can add a .NET function to the Lisp environment and then call it in my template. I cannot do that with StringTemplate. I repeat, this is a 35 kB Lisp David vs a 400 kB StringTemplate Goliath.
Isn't Lisp beautiful?
But wait, there is more. Lisp is excellent for natural language processing because you can intersperse a human text with Lisp. Suppose the first % character enters into Lisp mode and subsequent % character exits back to text. The example of Lisp NLP would then look like:
"Hello %name%. It is time to do some math. Assume A + B = %(+ a b)%. What is the value of A when B is %b% ?"
You then fill Lisp environment with desirable values: env["name"] = "David";
env["a"] = 15;
env["b"] = 85;
Evaluating the aforementioned template will produce the following result: "Hello David. It is time to do some math. Assume A + B = 100. What is the value of A when B is 85 ?"
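The same idea can be sketched in a few lines of Python, with `eval` standing in for the embedded Lisp evaluator (so the expression syntax inside the markers here is Python's, not Lisp's):

```python
import re

def render(template, env):
    # Text between paired % markers is evaluated against the environment.
    return re.sub(r"%(.*?)%",
                  lambda m: str(eval(m.group(1), {}, env)),
                  template)

env = {"name": "David", "a": 15, "b": 85}
render("Hello %name%. Assume A + B = %a + b%. What is A when B is %b% ?", env)
# 'Hello David. Assume A + B = 100. What is A when B is 85 ?'
```

A real embedded Lisp gives you the same thing but with a sandboxed, user-extensible evaluator instead of Python's unsafe `eval`.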
With that tool at hand, you can build a user-configurable quiz system in no time. Or maybe a configurable Virtual Assistant. Or a chat bot. Or anything else, you name it. Lisp is small as the Universe before the Big Bang. Nevertheless it provides nearly endless possibilities when you have a need, a fit and imagination for them.
"David has %n% apple%(when (> n 1) "s")%."
You can move that rule to a function, etc. You can tweak it for the language you use in an endless number of ways.

Are more details about this magic thing available?
This example shows generation (with some very light extraction: munging a copyright header from existing files to stick into a generated file):
http://www.kylheku.com/cgit/txr/tree/gencadr.txr
gencadr.txr generates all those caar, cadr, caddr, ... functions down to five levels deep, in C. Plus it generates the Lisp defplace definitions for all of them so they are assignable, as in (set (caddar x) new-value).
That part could have been done in Lisp rather than textual generation, but I said what the heck.
Generated results:
http://www.kylheku.com/cgit/txr/tree/cadr.c
http://www.kylheku.com/cgit/txr/tree/share/txr/stdlib/cadr.t...
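The generation idea is easy to sketch in Python (illustrative only -- the names, signatures, and output format below are invented, not TXR's actual code):

```python
from itertools import product

def gen_cxr(depth):
    """Emit C definitions for caar, cadr, ..., up to the given nesting depth."""
    lines = []
    for n in range(2, depth + 1):
        for combo in product("ad", repeat=n):
            name = "c" + "".join(combo) + "r"
            # The innermost accessor applies first, so build calls right-to-left.
            body = "x"
            for op in reversed(combo):
                body = ("car(%s)" if op == "a" else "cdr(%s)") % body
            lines.append("val %s(val x) { return %s; }" % (name, body))
    return lines

gen_cxr(2)[0]  # 'val caar(val x) { return car(car(x)); }'
```

Five levels deep, that's 2^2 + 2^3 + 2^4 + 2^5 = 60 functions -- exactly the kind of mechanical repetition worth generating.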
Well...C# String interpolation is.
Having said that, C# string interpolation is not user-configurable, and you cannot realistically use it to generate, say, 300 lines of highly sophisticated text that is subject to frequent modification during product development.
One of the popular tasks for a larger scale templating is email generation. Text templates, HTML templates, for signup, for email confirmation, for password reset, for subscription renewal, for ... you name it. As you can see, it quickly goes wild. Not sure one would prefer string.Format() or C# string interpolation to do all of that.
I would take this as the thing to remember from this: when you are too angry at something, you can't learn it. I have seen this emotional refusal to learn a new, inferior thing (or read comments that amounted to the same) many times already.
It is something to be aware of and avoided.
I realize not everyone will have that option, but I strongly believe being strongly opinionated about what I will work with has helped me in the long run by ensuring I've actually worked on things where I could feel motivated to deliver, and letting me focus my time on getting better at the technologies I do enjoy working with.
Of course that requires ensuring you build skills in areas where you can find jobs, and that you avoid jobs where you don't get to pick the technologies you're prepared to work with.
In school I hated Java, as everyone among my peers was supposed to. Java was for lesser programmers. But there was an interesting Java job available, and after using it, I liked coding in Java better than the languages I knew before (mainly C and C++).
I came to like Java pretty fast, but learning to like JavaScript took more effort. IMO, it was very much worth the pain. The more pain, the more the learning process is worth: it hurts because you are learning something fundamental and acquiring new habits you were missing.
I've been playing "avoid those jobs" for a while now and it's getting tiresome. There is a HUGE prejudice against people who never drank the OOP Kool-Aid.
If anything, remember that Lisp, Java, Brainfuck and hundreds of other langs are all Turing-complete and therefore equivalent to one another when it comes to computability.
This is true in a very literal sense. Emotions appear to have a very significant effect on how well we can memorize and recall things:
The new thing sucks. Or am I having a knee-jerk reaction to something unfamiliar? Maybe I'm the asshole.
The new thing falls short of the old thing. Or does it have different goals from the old thing? Maybe I'm the asshole.
The people who decided to put the new thing in my path are out of touch and don't understand what I do or how the old thing helped. Or have they seen something that I don't see yet? Maybe I'm the asshole.
Sometimes I'm the asshole. Never entertaining the possibility that one is the asshole is what makes one the asshole.
So, the secret is to be alone in a room, so you can swear freely. Walk around when angry and complain to yourself. But always go back and continue using it, and eventually that phase will pass. After that you will build a new set of habits that work around the disadvantages and exploit the advantages.
I experience a similar thing with Elixir at times. Even as a fervent "right tool for the job" guy, it's hard to hold in one hand something that elegantly solves just about everything you've encountered in your career... and not be able to use it.
Between Elixir and Python, there's not much of a need for anything else on a server these days.
Fortunately for me, my rage at Fortran came after I mastered it.
Since then functional programming has picked up a lot of steam again, and I'm pretty sure having been a Lisper for 30 years gets you good jobs. If not in Lisp, then in Haskell, F#, Clojure, or whatever...
This is the flip-side to age-discrimination - it's much less of a problem in my industry. Which, as a 37-year-old I find comforting. But when you're stuck working for 50-something pathologically risk-averse architects who built their careers on the over-hyped tech of the previous century and refuse to listen to new ideas, it's quite frustrating. I'm basically waiting for the previous generation to die/retire so I can have the chance to build software the way I think it should be built.
I may just leave the field altogether and hack on Lisp in my spare time :)
And by the time you'll be able to do that you'll be the 50-something who keeps using functional programming while the 37-year-olds want to use dilithium crystal programming :p
Defense/aerospace is pathologically risk-averse. It's not just your set of architects.
If you want them to listen to new ideas, propose it on something that is not mission-critical. Prove it can work there, and prove the benefits. After you've done that (more than once), they may be willing to at least listen.
I have used Common Lisp for low-level concurrency and IO code, and as a target language for transpilers, and the most indispensable construct for those applications is GOTO. The only other substitute is a good compiler with guaranteed tail-call optimization. Some people love to point out how you don't need Lisp because "modern" dynamic programming languages have borrowed this feature or that. Ok, but none of those languages (can anyone provide an example? I cannot think of any) has GOTO or guaranteed TCO. If you want to do systems programming in a dynamic programming language, the choice still comes down to either Common Lisp or Scheme.
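To illustrate what the lack of GOTO and guaranteed TCO costs in a typical dynamic language, here is the usual workaround in Python, a trampoline (a sketch of the general technique, not a claim about how any particular CL or Scheme implementation works):

```python
# A deeply recursive tail call blows Python's stack because, like most
# dynamic languages, Python guarantees neither GOTO nor tail-call
# optimization. The standard workaround is a trampoline: the function
# returns a thunk instead of calling itself, and a loop drives it.
def trampoline(fn, *args):
    result = fn(*args)
    while callable(result):      # keep bouncing until a real value comes back
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)   # tail call, deferred as a thunk

# Far deeper than Python's default recursion limit (~1000 frames):
print(trampoline(countdown, 100_000))  # done
```

With guaranteed TCO (Scheme) or tagbody/go (Common Lisp) you simply write the loop as a tail call or a jump; the trampoline is the price other dynamic languages pay.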
If you want to be a Lisp programmer then you pretty much have to be good at other things, too, or you may starve waiting for that next Lisp job to come along.
Of course, you can always just write your own products in Lisp.
1. Many JavaScript devs were focused on 'getting job done' in their current workplaces, where they were using jQuery-related stuff mostly.
2. After some time they want to change job.
3. They realize that the js market has changed (or maybe they were aware of it, but they just didn't care before/didn't have time to take a closer look before).
4. They try to keep up and learn tons of stuff to be employable as js/front-end devs (npm/webpack/js frameworks).
It's just a personal opinion. I was at that point in my life before too (and realized it was the personal cause of my JS fatigue).
Perl, my first and only love.
https://www.amazon.com/Building-High-Integrity-Applications-...
"With a secret weapon like Lisp in my arsenal in 1986 I could blow my competition out of the water with one hand tied behind my back and holding a martini in the other."
A present-day JavaScript practitioner will have his left hand busy trying to figure out what's this week's fashionable way to pass around some data in this month's fashionable framework and his right hand busy trying to make npm and Webpack work, leaving no time for martinis or blowing anyone out of the water.
If JS devs were not able to focus on bringing value, they wouldn't be able to hold jobs, build products, or found profitable companies. If JS were so impractical, there would not be so many people choosing it over various alternatives for building GUIs, games, websites, WebGL/WebVR, back-end services, and so on.
If there is a warning in this story, it is a warning to people looking down on rising languages and frameworks. Those are the people at risk of becoming irrelevant because of a mix of smugness, gatekeeping, and ivory-tower syndrome.
This attitude is exactly what sank Ron Garret when C++ and Java happened. It also prevented him from seeing (before the very end of his programming career) that the competition was actually doing just fine with the new tech. You and the grandparent totally missed the point of this story.
PS: You don't make npm "work"
Although this low barrier to entry sometimes leads users of the language to bad results, I especially like JavaScript because of its welcoming nature. One can even argue that it democratises the software-creation process on Earth by allowing anyone to quickly start tinkering with it and see the results immediately, most likely in a web environment.
Also, although it definitely has some quirks and may frustrate beginners, it allows an experienced developer to be expressive and build products quickly, and may allow its users to enjoy their martinis.
Very true. Working with JavaScript these days feels like jumping from one hoop to another while rarely getting any work done.
In my view, users of other languages might benefit from this piece as much as JS devs. This behavior toward JS devs causes unnecessary hostility.
In addition, JavaScript may actually be "too big to fail", and in this sense it is in a very different state than Common Lisp was. What I mean by that is that it is "the language" used in browsers and has the largest package registry (npm).
I am not trying to attack, just curious why you think it specifically applies to JS devs.
I faced two problems after K:
1. The whole software engineering world starts to look like a ridiculous house of cards when you're used to seeing implementations of fast, efficient in-memory databases in ~20 lines, or an npm-style package manager[1] in less than ~400 lines (comments included). Modern software engineering practices enable and encourage solutions that are inefficient in many respects. Maybe it cannot be better at a large enough scale, but it affects and infects the smaller scales as well. It is extremely frustrating.
2. Getting actual work done requires interfacing with the world, which isn't quite up to the k/q/APL standards of terseness and efficiency; it is mostly "accidental complexity"[2]. So if you have to have a web presentation layer, or work on import/export, those parts dominate your effort due to accidental complexity that cannot be removed. Which is doubly frustrating.
[1] https://github.com/yang-guo/qp
[2] https://www.quora.com/What-are-essential-and-accidental-comp...
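For readers who haven't seen K or q: a toy Python sketch of the columnar "a table is a dict of parallel lists" style being described. This is nothing like real kdb+, just the shape of the idea, and the `where` helper is invented for illustration:

```python
# A "table" is a dict of parallel column lists; a query is a couple
# of comprehensions. Toy illustration only; real K is far terser.
trades = {
    "sym":   ["AAPL", "GOOG", "AAPL", "MSFT"],
    "price": [150.0,  2700.0, 155.0,  300.0],
    "qty":   [10,     2,      5,      7],
}

def where(table, pred):
    """Return a new table keeping the rows whose values satisfy pred."""
    n = len(next(iter(table.values())))
    keep = [i for i in range(n)
            if pred({c: v[i] for c, v in table.items()})]
    return {c: [v[i] for i in keep] for c, v in table.items()}

aapl = where(trades, lambda row: row["sym"] == "AAPL")
print(aapl["price"])                                           # [150.0, 155.0]
print(sum(p * q for p, q in zip(aapl["price"], aapl["qty"])))  # 2275.0
```

The entire "database" is about fifteen lines; that terseness is what makes conventional stacks look like a house of cards from the K side of the fence.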
HN discussion (2010): https://news.ycombinator.com/item?id=1041500
http://www.softwarepreservation.org/projects/apl/Papers/Elem...
I'm not sure what you mean. C? C++? Java? Perl? Perhaps Python?
You can't write C++ or Java like you did 20 years ago (of course, you could but you shouldn't). Both languages and their whole ecosystems have evolved tremendously in 2 decades.
The C and Perl styles, OTOH, have changed less. A programmer of those languages from 20 years ago who was transported to the present day would have far fewer problems fitting in than a C++ or Java programmer would.
Perl was launched in 1987.
Python was launched in 1991.
Java and Javascript were launched in 1995.
C# was launched in 2000.
So the newest out of the popular languages is already 18 years old.
You don't need Fortran or Cobol or Lisp to have been able to do this... It's already almost 60 years since that time.
C++ doesn't qualify, to my mind, because the language has had many iterations and also because, as the post's author points out, writing C++ in the early 90s was not always a pleasant experience.
Python and even Perl are good contenders, as they have both been relatively stable languages. I'm not sure Java counts - Java 10 is quite different to Java 1.0, not just in terms of language features but often paradigms too (think 5's generics and 8's lambdas).
As for C# - I've never done .NET, so I'll have to take you on your word.
I can only imagine how it would be for anyone using Lisp. There is so much power there that going to something like Java would feel downright suffocating.
Going from Perl to Python, in my case, largely felt like trying to run a marathon in chains.
It’s not a magic bullet. But what you learn can be used in other languages, especially things like Map, Filter, and Reduce.
I totally recommend experimenting with Clojure or Haskell and implementing a project with a pure approach & immutable binding, or as much as you can.
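A quick sketch of that advice in Python: map, filter, and reduce used in the pure, non-mutating style the comment recommends practicing in Clojure or Haskell (the numbers are made up):

```python
from functools import reduce

# The Lisp trio carried into Python: transform, select, fold,
# all without mutating the input list.
orders = [12.5, 3.0, 47.0, 8.25]

with_tax   = list(map(lambda x: x * 1.2, orders))             # transform each element
large_only = list(filter(lambda x: x > 10, with_tax))         # keep what matters
total      = reduce(lambda acc, x: acc + x, large_only, 0.0)  # fold to one value

print(large_only)  # [15.0, 56.4]
print(total)       # 71.4
```

Once this pipeline shape is a habit, it transfers almost verbatim to JavaScript, Java streams, C# LINQ, and so on.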
If Java is a tuba, then maybe Python is a trumpet. They're two very different instruments, but they share a lot of the same underlying intuitions. In fact, I would say that C-style languages are probably equivalent to the collection of brass instruments. There are major variations from trombones to trumpets to french-horns, but not as large as you'd think.
So if you've only played tuba, what happens when somebody hands you a violin? You may understand "music theory" (or computer science) pretty well, but actually sitting down to learn the violin will give you strange insights that aren't apparent on brass instruments. For instance, violins' strings are tuned perfect-fifths apart -- certain scale patterns naturally jump out at you. In the same way, let's compare Haskell/Idris/R to stringed-instruments. It's easy to switch between them when you realize that a viola is a cello is an upside-down bass-guitar. And at a higher level, learning violin and tuba together will make you better at BOTH instruments! As a bonus, playing music is more fun when you can antipate the habits and difficulties of your peers' instruments.
As a side-note, I think Lisp is either a piano or a theremin. I'm not sure why.
Anyway, I think learning each "family" of languages is a worthy aim for every engineer. Here are the groups that I personally hope to master over my lifetime:
- Hard : Assembly
- C-Like : C, Java, Python, JS, PHP, etc.
- Pure Fun : Haskell, Idris, OCaml, Elm, etc.
- Distributed : Erlang, Elixir
- OO : Smalltalk
- Lispy : Scheme, CL, Clojure
- Stringy : BASH, Perl, SNOBOL
- Code-Golfy : APL, J, K, AntLang
- Stacky : Forth, Joy
- Data-Driven : R, Mathematica, Wolfram Language
- Logic-Driven : Prolog
This list is long, but don't let it discourage you! Every family has plenty of magical ideas that make the next one easier to master. Best of luck on your journey!
Of course you are. Everyone is.
There are maybe 100 different language/style/technique boxes you could check. You might be able to check three. You look around here, and you see that, for every box, there is someone here who can check it. But no one person here can check more than 20 at most, maybe no more than 10.
Don't look at the entire site and feel inadequate. Don't even look at some of the people who have 40 years experience and feel inadequate.
But yes, as others said, look to learn things that are different from what you know.
MIPS (how a processor works),
C (how memory, pointers, and syscalls work),
Ruby (fully object-oriented),
Scheme (simplicity, composition, code-as-data),
Haskell (abstractions, side-effect management, immutability, types, FRP),
Erlang (distributed & concurrent programming, message-based communication, immutability),
Prolog (logic programming).
Each of these feels different and operates at a different level of thought and abstraction.
It's good to just get a feel for these languages, as it will give you a better perspective.
I tried a lot more, but these are the ones I can recommend for expanding horizons.
EDIT: formatting
Lisp is not a pure functional language, by the way. It's very multi-paradigm and has one of the best object and exception systems ever designed. Not only are all these excellent features available to you, they are available at the metalevel as well.
https://groups.google.com/d/msg/comp.lang.lisp/GMx6gjESVZw/-...
I have only minimal exposure to Lisp, Prolog, and ML in a programming languages concepts class in the mid 90s.
I really enjoyed using Prolog in the class and had an opportunity to use it professionally twice.
This is very true, especially now. With all the available tools, and libraries, and infrastructure tools, and ..., and ..., and ... the choice of a programming language hardly matters anymore [see caveats below]. What matters is how comfortable you are working with a particular language and set of technologies you chose, and if your choices do not impede you.
[caveats]
Of course, if you want to crunch numbers, or work in a memory/CPU-constrained system, or develop avionics software, then your choice of programming language matters. But let's be honest: the vast majority of us don't do any of those things, and you can whip up a geographically distributed, auto-scaling cat-meme delivery system in any language in a matter of hours.
No matter how powerful the technique, it's way too low-level for the modern world.
Syntax is way down the list of difficult things in the craft of programming, compared to properly understanding problem domains, internalising techniques for robustness, etc.
I think it matters, since languages like Clojure are everywhere, including at Google.
Thankfully, I took some AI courses, which came with LISP and I was so excited to use it everywhere.
Unfortunately, however, doing basic I/O, networking (this is around 2006) and testing was too complicated for me so I dropped it again in favor of C++ in grad school (performance was also an issue, we needed to crunch a lot of data fairly quickly).
I would love to go back again and give it a shot - any pointers on the best way to learn/start?
Peter Seibel's second book Coders at Work is also excellent.
Knuth (interviewed last) claimed that he hadn't really read it. And only one person interviewed did. Oh, also, there seemed to be a general disdain for IDEs.
It is trivial today. Many good libraries for networking, web servers, JSON, serialization, etc, etc. Many excellent Common Lisp implementations as well, some of them being very fast.
>any pointers on the best way to learn/start?
Read Practical Common Lisp, install "Portacle" (the Portable Common Lisp Environment) and be happy!
Even today, I feel like a lot of people could learn a lot from LISP and S-Expressions, both of which are powerful constructs.
(Note: I'm not saying you should use LISP productively, it's something you should learn for the heck of it, not to actually use it)
Dense, and a very complete introduction to learning a Lisp. The one HUGE benefit is that I see Racket becoming the most modern programming Lisp. Racket really looks like it has a future.
1) Concurrency and parallelism (without pulling all your hair out) - http://docs.racket-lang.org/reference/concurrency.html
2) Great Documentation http://docs.racket-lang.org/reference/index.html
3) It will make you a better programmer in other languages
Racket's biggest problem - It's fun to use so everyone has their own solutions for many common problems.
Well, I do, and I have to say that it can be done. It isn't a road often travelled and it results in people making remarks about your language choices, but as long as my software works, its development is fun for me and my co-coders do not complain, I don't think anything is wrong with my setup. :)
Disclaimer: Not a Lisper
An informed perspective: https://www.quora.com/Programming-is-all-about-managing-comp...
I feel this person's pain.
Perl 1987, Python 1991, Java 1995 (cf. Wikipedia "first appeared" info).
But yeah, the "coming along", i.e. gaining traction, may be said in that order.
Minor nit, a very level-headed assessment.
Would you please elaborate?
big truth...
Between "GC, full numeric tower, CLOS, incremental development" and especially the ease of metaprogramming in Lisp, you may eventually lose the ability to perform the very programming tasks you have automated away for yourself, whenever you lose access to your own macros.
The true Curse of Lisp would be that Lispers are constantly de-skilling themselves.
I also don't particularly miss worrying about malloc/free in my programs, nor having to write multi-precision floating point arithmetic routines.
Bollocks. You'd be surprised how many Lispers actually have great knowledge of low-level programming. In fact it's almost a "fashion" in the Lisp world.
But the world is far messier than that and if, like me, you come from a systems or real-time programming background then Lisp basically did not exist and you were using C-based languages (or even assembler) that were close to the metal so you could code interrupt handlers, asynchronous I/O drivers and the like. Obviously trying to tackle those kinds of problems from a Lisp-centric viewpoint would seem slightly crazy, so why bother?
Here is the source for how Unix interrupts are handled in SBCL https://github.com/sbcl/sbcl/blob/master/src/code/signal.lis...
And the manual for handling async threading in SBCL http://www.sbcl.org/manual/#Threading
From the most basic level of syntax, something like https://github.com/TeMPOraL/cl-wiringpi2/blob/master/example... doesn't seem that different to me than the C equivalent I've written for embedded devices.