If you believe that, you have never tried to teach a class of non-programmers.
Instead, programming keeps getting harder on new computers all the time, especially ones made by Apple.
Apple have made a lot of effort to bring programming to the common user: HyperCard, AppleScript, Automator. They were abandoned because no one was interested in using them.
Also, with Xcode/Instruments downloadable for free, easier to use, and more powerful than ever, I think it's a bit silly to slam Apple for this one. There might be an argument for slamming Microsoft, but VS Express isn't that bad IMO. There is definitely an argument for slamming *nix, but decentralized authority is inherently incompatible with the kind of reform that would make the *nix desktop programming experience more palatable to newcomers, and open source libraries are a boon for function discovery, so it's still a bit of a wash. The web as a development platform has inarguably improved by leaps and bounds over the past few years. And StackOverflow has dramatically smoothed the learning process for all of these.
I'm pretty sure the long-term trend is exactly the opposite of what the OP said: getting started with programming has never been easier, and the people who make programming environments have been working tirelessly, trying everything in their power (even moonshots) to make the process easier and more newcomer-friendly. None of the moonshots worked, but the more traditional efforts have paid off in spades.
No, you’re trying to teach them with the same kind of tool that I call ‘clumsy and unsophisticated’ in the article.
“Apple have made a lot of effort to bring programming to the common user: HyperCard, AppleScript, Automator. They were abandoned because no one was interested in using them.”
In many ways HyperCard was still too difficult, but it was a great tool. Many people did use it to make simple programs, and some became programmers from it. I’m not arguing that everyone should learn to program to the level of being able to create and sell apps; HyperCard is a good example in that sense.
AppleScript is a failure because it’s a terribly difficult language even for most working programmers. Its designers completely failed in that respect.
Automator is not programming.
Also, your whole premise is just wrong. There's been a ton of research showing that the hard thing about programming is not syntax, or tooling, but thinking in abstract terms (I'd link, but I don't have access to most academic publications these days :( ). You're not the first person to think "Hey, if only we could make this whole programming thing more accessible, everyone would do it!" And yet nobody has succeeded, or even come close. The best we've managed is to make it easier for an interested amateur to learn to program normally: we've got good documentation available on the Web, every PC can easily install a wide range of language development environments for free, and large numbers of frameworks let you avoid the heavy lifting in areas where you don't have the time / brainpower to become a subject matter expert. These are all positive developments, but the fact is that 99% of the population still can't even explain in plain English (or Spanish, or Chinese, or whatever it is they speak) to a developer what it is they want a system to do.
When was the last time you saw a spec written by a non-programmer trying to explain a business process? Did they even try to explain what happens in error cases? If they did, you were a lucky sort, because I've seen dozens of these documents, and none contained information on how to handle error conditions. They'll tell you to pull an employee out of the database by name, without stopping to think that several employees might share the same name, so you need some unique identifier. They'll tell you that an action needs to run on the first of the month, but they'll forget to tell you what to do when the first of the month is actually a public holiday. These are the problems that we, as programmers, are trained to deal with, and which most people just don't seem capable of handling. And "easy" tools aren't going to help them, because the problem isn't the tools, it's the abstract thinking.
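Both failure modes above are easy to make concrete. A minimal Python sketch (the employee records, names, and holiday date are all invented for illustration):

```python
from datetime import date, timedelta

# Two employees sharing a name: the "look it up by name" spec breaks here.
employees = [
    {"id": 1, "name": "Ana Diaz"},
    {"id": 2, "name": "Ana Diaz"},  # same name, different person
]

def find_employee(name):
    matches = [e for e in employees if e["name"] == name]
    if len(matches) != 1:
        raise ValueError(
            f"{len(matches)} employees named {name!r}; look up by id instead"
        )
    return matches[0]

def first_run_day(year, month, holidays=()):
    """'Run it on the first of the month' -- unless that's a weekend or holiday."""
    d = date(year, month, 1)
    while d.weekday() >= 5 or d in holidays:  # 5, 6 = Saturday, Sunday
        d += timedelta(days=1)
    return d

# Jan 1, 2017 was a Sunday and Jan 2 a holiday here, so the job runs on the 3rd.
print(first_run_day(2017, 1, holidays={date(2017, 1, 2)}))  # → 2017-01-03
```

The point isn't the code; it's that both branches exist only because a programmer stopped and asked "and what if...?"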
Despite the vitriol you've received, I think I understand what you're trying to say here. I think you want a higher-level programming language. And not merely a successor to the latest zeitgeist, but essentially what C was to punch tape. That is, binary might be fundamental to computation, but it isn't fundamental to the ergonomics of programming. So the question is: is it possible to somehow abstract programming to a higher level?
pg wrote that all other languages have evolved towards Lisp. I think that trend is a special case of a trend towards functional programming. So imho, whatever comes next (given there is a next) will have to (for lack of a better word) supersede lambda calculus.
Automator is not coding.
Personally, I reckon that you can teach programming to anyone who can learn a card game.
The difficulty in programming isn't the mechanical act of writing code; we have copy-paste for that. The problem is having a mental model of the program execution, and mapping it to the real-world problem. This is the difference between good and bad programmers, and it applies to the population at large.
Case in point: I'm working with some soon-to-be grads from a CS program. In Java, one of them instantiated a class and set an attribute on that instance. Elsewhere they instantiated a new instance and tried to read the value back. And they couldn't understand why the value wasn't set, Portal-style, in their new instance. This guy, who has been in a CS program for 4 years, also couldn't figure out why it made sense to make specific shapes (circle, square, triangle) children of an abstract Shape class. He could very easily add a search bar to an Android app, because he had seen a tutorial on how to do it - but making the bar do anything was a tremendous feat, because it wasn't just the mechanical repetition of some pattern.
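The instance mix-up described above is easy to reproduce in any object-oriented language; here is a minimal Python rendition (class and attribute names invented for illustration):

```python
class Config:
    def __init__(self):
        self.timeout = None  # every new instance starts from scratch

a = Config()
a.timeout = 30    # set an attribute on THIS instance

b = Config()      # a brand-new, independent object
print(b.timeout)  # → None, not 30: b never saw a's assignment
```

Nothing about the syntax is hard here; what's missing is the mental model that each instantiation creates a separate object.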
Even with experienced (>5 years) programmers, I've seen some really terrible debugging where it was clear the person didn't have a mental model of the code. They never really got any benefit from GDB, because when they looked at the internal state of the program they just shrugged and said "Yeah, looks right". And they owned the codebase. This was their code, and they couldn't reason about it.
After all that ranting, my point is: you can make it easier to write some program, but you can't make it (appreciably) easier to write the program that you need.
Is it possible that these students might have other types of intelligence?
I've got friends who are very strong visually and they thrive in visual environments such as MaxMSP and Quartz Composer. They make amazing interactive art pieces. Yet, they seem completely baffled by general purpose programming and symbolic logic. The thing is, you can talk to them and they can absolutely reason about things. They're not incapable of logic, just of a certain expression of it.
What worries me is that we've created a negative feedback loop. The tools we've created for computing are heavily dependent on symbolic logic. This attracts people who are gifted with symbolic logic. They in turn create more tools that work best for people who excel at symbolic logic. And so on and so forth.
I don't think this is an issue with people being incapable of creating mental models. A gymnast has an amazing mental model of their body and its relation to space and the things in that space. However, we have yet to create computing tools for interactive design that build on top of that mental model. Why is that? Because we engineers have some of the WORST mental models of how our bodies flow through space.
I think this is all mainly an issue with listening, understanding and compassion, traits that seem to be stunted in the software industry. Engineers seem to have a certain predilection to talk over people. We seem to always be waiting for the other person to stop talking.
And I'll take the poetic license one step further and say that our entire industry is incapable of listening. It is always "Hey, let's set up coding camps!" What about, "Hey, let's set up a symposium where us engineers listen to other people about how they live their lives and what they think"?
That's overly general: a lot of programmers may be socially inept, but that's neither necessary nor sufficient. Both of the examples above are pretty anti-social, but they also just don't have these skills. Likewise, I know some very good programmers who are also good at requirements gathering and collecting domain knowledge, which entails listening to domain experts and learning from them.
> The tools we've created for computing are heavily dependent on symbolic logic.
The whole notion of computers, for better or worse, depends on symbolic logic. You'll have a hell of a time building a microprocessor whose instruction set is paintings or dance moves. The best case is that we build an interface from symbolic logic to this new, more approachable paradigm.
> They make amazing interactive art pieces.
How much logic can you put in an interactive art project? Can it model anything in the real world? The biggest problem with other paradigms is density: if you want to program via something other than symbolic logic, get ready for incredible fatigue as you try to turn a 10K LOC program into a 500MP painting. Or an 18 hour long dance.
I guess I should clarify: I don't think non-programmers are lesser beings, or that they can't model anything in their heads. But for the sake of programming, the only thing that matters is if you can model a computer. If you have other types of intelligence, that's fine, just know you're going to have a tough go of it when it comes to understanding the code you write.
> What worries me is that we've created a negative feedback loop.
<pedantry> Wouldn't that be a positive feedback loop? </pedantry>
Try as I might, I am having a difficult time figuring out how to express the previous sentiment through pictures alone. Especially if I want to ensure that others interpret the intent of my expression correctly. Pictionary is a great example of that difficulty. And consider that the concepts expressed in Pictionary are calibrated for a visual medium.
Symbols are precise and concise, which are two good traits for expressing logic. The latter trait is desired, while the former is required.
Don't get me wrong, there are geometric proofs that are much simpler than their symbolic counterparts. However, in my experience that simplicity holds only on a case-by-case basis, and the majority of cases favour the symbolic form. Although I admit potential bias on this point.
When I think of documentation, pictures are a good way of giving an introduction to a code base. But when it comes down to expressing individual functions and interfaces, pictures can't beat code comments. And even code comments lose against simple enough functions.
If you can figure out a good way to express general purpose programming visually, more power to you. There is probably a good consumer market. But professionally, I believe it would be difficult to compete with symbolic programming. And not just because of inertia, but for the reasons stated above.
Finally, I don't think it is a matter of listening. I understand what it is like not to be able to express myself in a given way. But I also understand that I, like all people, have limitations. And I endeavour to work with people who complement, not just echo, my skill set.
It does make you wonder how many people would have made great programmers but gave up because they could not get past the step of learning how to code.
I look at coding and programming as different things.
Programming is the critical thinking aspect of developing software, and a skill that everyone should master, starting with children.
The act of writing code, and more specifically using frameworks designed with the intent of being coded against, is unnecessarily difficult.
I've always spent more time fighting with bad frameworks than implementing desired behavior.
Once I figure out the peculiarities of a compiler, I'm good to go. I can code fast. But, getting to that point takes a long time. It is especially difficult for people learning to program when they have to learn to code (edit: at the same time).
Software development is fundamentally broken and it frustrates me to no end.
That's pretty much the opposite of my point: programmers want everyone to think like programmers (per your definition). In my opinion, this kind of "critical" thinking is not necessarily a positive: there are other ways to think critically that don't map well to programming. This seems like the typical HN view that everyone should think like I do. Some people don't get math, or symbolic logic, and that's not stopping them being a value to society.
Looking at your profile, it seems like you have a horse in this race. Personally, Visual looks very slick, but I don't think visual representations are really appropriate for programming beyond toys (or maybe data modelling, but not procedural programming). It gets very hard to lay out a large program in 2D space, and the toolbox required to enumerate all the built-ins and libraries seems like a bit of a handicap compared to just typing them.
The trivium consists of grammar, dialectic, and rhetoric [1]. Grammar signifies knowing the language/jargon. E.g. knowing the parts of a car. Dialectic signifies logic and critical thinking. E.g. realizing that adding more oil won't fix the alternator. Rhetoric signifies application and expression. E.g. designing a safe, high-performance, cost-effective, and aesthetically-appealing vehicle.
Regarding your anecdote about not understanding the abstraction of shapes: it sounds to me like your friend is comfortable with grammar, but not dialectic.
According to this redditor, a major flaw in the education system is that it teaches "subjects" (grammar) while critical thinking (dialectic) is shafted (mostly limited to math class) [2]. But I repeatedly read stories about people who can program with nothing more than a basic understanding. I find this surprising, because computer science is mostly math, which falls under dialectic. Why is this? Disclaimer: I'm new to programming.
[1] http://en.wikipedia.org/wiki/Trivium
[2] http://www.scribd.com/doc/36325362/The-Lost-Tools-of-Learnin...
Most CS comes down to logical operations and the scientific method.
Problem -> Research -> Hypothesis -> Experiment -> Analyze -> Results -> Repeat until the problem is solved.
You can program if you can reason well enough.
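A minimal sketch of how that loop shows up in everyday programming: a git-bisect-style hunt for the first bad version, where each iteration is one pass through hypothesis, experiment, and analysis (the version range and the `is_bad` oracle are hypothetical):

```python
def first_bad(lo, hi, is_bad):
    """Find the first bad version in lo..hi, assuming good versions precede bad."""
    while lo < hi:
        mid = (lo + hi) // 2   # hypothesis: the breakage is at or before mid
        if is_bad(mid):        # experiment: test that version
            hi = mid           # analyze: the bug is at mid or earlier
        else:
            lo = mid + 1       # analyze: the bug came later
    return lo                  # repeat until the problem is pinned down

print(first_bad(1, 100, lambda v: v >= 42))  # → 42
```

The mechanics are trivial; the reasoning about where the bug can and cannot be is the actual skill.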
A lot of people make the mistake you mentioned, but the author makes a different, common mistake that is, in my opinion, even worse:
> Programming is easy, after all: all you need to understand is conditions and repetition
Programming can be boiled down to conditions and repetition as much as playing chess can be boiled down to moving the pieces around. That's why there's a difference between playing chess with your mates and being the world champion.
Or, for a different analogy, try boiling down the act of driving a car to pressing pedals and wiggling the steering wheel. Yes, driving your personal car might be almost as simple as that, but driving a bus full of people or a race car goes quite a bit beyond that.
What I'm trying to say is that there are different problem classes that require different skill levels. While it's perfectly okay to want to make it easier for people with lower skill levels to solve the problem classes within their reach -- I hope you will forgive me if I call that the low-hanging fruit -- the author's stated belief is an oversimplification that has repeatedly been proven incorrect.
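To make the chess analogy concrete: the article's claim is true in the narrowest sense, in that conditions and repetition really are the whole control-flow toolkit. A trivial Python sketch:

```python
evens = []
n = 0
while n < 5:        # repetition
    if n % 2 == 0:  # a condition
        evens.append(n)
    n += 1
print(evens)  # → [0, 2, 4]
```

But knowing these two constructs no more makes someone a programmer than knowing how the pieces move makes them a chess player; the hard part is deciding what to repeat and what to test.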
Computers that can be tinkered with still do exist, and it has never been easier to use them to program. Microsoft and Apple both provide free access to rich development environments. And it's easy to install many other kinds of programming environments onto such machines.
In fact, the combination of a web browser and JavaScript is perhaps the most ubiquitous programming environment, ever. And it is available on damned near everything.
> we need more professionalism in software engineering.
I completely agree. I think we need courses on The Mindset of Coding, teaching things like some of Bret Victor's principles, KISS, the UNIX philosophy, reverse engineering, …
Each time I open an article on HN describing the virtues of the latest programming language, I always see the word "powerful". Whether a language is Assembly or Python, the author can guarantee it's "powerful". I think it's devolved into a buzzword because programmers use it to mean opposite things. It's like how Orwell said two critics can describe the same painting as possessing both "a living quality" and "a peculiar deadliness" [1].
Like I said in another comment, programmers were able to abstract binary away from the tangibility of punch tape. But I think we can only abstract so much before we begin to hit a wall, beyond which we start to lose absolutely essential features. Once we reach that point, I think complexity is conserved.
So if we want to simplify one thing, the best we can often do is move the complexity elsewhere, like what a refrigerator does with heat. When programmers say a low-level language like C is simple and powerful, they mean that the implementation is simple, but the interface is complex [2]. But when programmers say that a high-level language like Python is simple and powerful, they mean that the implementation is complex, but the interface is simple.
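One way to watch the conservation at work without even leaving a single language: the same average computed at two levels of abstraction (toy data, purely illustrative):

```python
data = [3, 1, 4, 1, 5]

# "C-style" interface: simple machinery, complex surface -- you manage
# the index, the accumulator, and the empty case yourself.
total, i = 0, 0
while i < len(data):
    total += data[i]
    i += 1
low = total / len(data) if data else 0.0

# High-level interface: the complexity (iteration, bounds checking) has
# moved into the interpreter; it hasn't vanished.
high = sum(data) / len(data)

print(low, high)  # → 2.8 2.8
```

Same answer either way; the only question is who carries the complexity, the caller or the implementation.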
So I think the real question is, "Where is the complexity hiding?"
I don't know if we should treat all applications like domain specific languages, or if we should just hide complexity until needed, then provide a way for more advanced users to access lower abstraction layers, or what. Either way, I don't think this fairly strict but artificial distinction between using and programming is helpful or beneficial.
Most (and by that imprecise measure I mean more than 50% but less than 75%) of the people who own an Apple "computer" have no interest in programming anything. Nearly all iPad owners are not interested in writing iOS programs, and easily 99% of iPhone owners couldn't care less about writing code for them.
That's ok, but it means there continue to be opportunities to sell programmers cool gear. The downside is that programmers can't always get the price benefits that mass production brings for their tools.
I don't know many people who would modify or extend the software they use, even if it were really "easy" to do so. Also, I doubt such tasks could ever really be "easy", even if the arcane syntax of a C style language were not an obstacle.
I'm not seeing the problem. Regular people were forced to use DOS and Windows machines in the early days, and they struggled with their complexity. Now we have a class of computers that regular people and technical people alike enjoy using. That's not a win?
You also assume that programmability implies complexity, which is the argument I tried to refute …
They don't have the skills needed to do it themselves, aren't interested in acquiring the skills, but appreciate your end product and want to use it.
I think this is just fine. I don't really want to put in the incredible effort required to learn to play the piano well, so I'm glad other people have put in that effort and I'll listen to them play instead. Would I learn if someone made it a lot easier? Probably. But would that make me a musician? Could I make inspired changes and extensions to existing works? Or wholly original works? No way!
If you want an actual example of WTF programming, try doing something on the web that you did in Visual Basic in the '90s. The amount of infrastructure and the number of things you need to know how to program is daunting. It was still easier to program an app on NeXTSTEP at the time of the web's creation than it is on the web today.
I think we have room for all of it. Oversimplifying, as an example: I love how Linux is extremely flexible, Mac extremely easy, and Windows... well, let's say it's in the middle.
I believe sometimes someone gets it 'more right' than the others, achieving a balance that's right on the spot. But that doesn't invalidate the other approaches.
You've got to dance according to the music.
I can recall the old days of printing out on fan-fold paper and going through with a highlighter to debug a missing " or }.
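For contrast, the modern version of that hunt takes milliseconds. A sketch using Python's built-in `compile()` on a listing with the same kind of bug (the snippet is invented):

```python
source = 'print("hello'  # the classic missing closing quote

try:
    compile(source, "<listing>", "exec")
except SyntaxError as err:
    # No highlighter needed: the parser names the offending line for you.
    print(f"syntax error on line {err.lineno}")
```

The exact error message varies between Python versions, but the line number comes back instantly either way.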
Hmm. What are those tweets about anyway:
>Considering writing some software to use MTurk to rank my self-portraits.
It's kind of funny that, of the two examples given to demonstrate the utility of programming a computer, one is actually just asking people for their opinion.
Art is easy to the artist.
Writing is easy to the writer.
Music is easy to the musician.
Programming is unique, though, because the tools you use to do it can also be used to make the task in general much easier.
Let's not ignore the differences between humans and say programming is easy when, to a large number of people, it isn't.
http://arstechnica.com/information-technology/2012/09/is-it-...
People just think it "makes sense" to work directly with various fonts and sizes when writing on a page, but that's actually pretty unnatural. No one ever worked that way before - you'd bang out a draft on a typewriter and hand it to a typesetter who would take care of that part.
It took at least a decade for people to really accept it in earnest.