IMO you don't need to learn much math before programming to be a good programmer. Basic operations are good enough; if you need more, you learn as you go, since you'll never know in advance what you'll need. Lazy learning, like lazy variable loading.
The tests you are making are more like problem solving, which is closer to what programming is, than math. Of course basic math (+, -, /, *) is needed, but what really matters is knowing which operations to do. The operation itself is the easier part.
A better test would be to make a choice between approaches to solving a problem and explain the rationale behind it. That is what will define a good programmer.
Also, what better indicator do you want than programming itself? You'll never make a choice based on this.
Also, it's one thing to figure out how to solve "here are 3 consecutive integers with a sum of 69. What are they?" and another to solve "here are X consecutive integers with a sum of Y. What are they?". Same with "Adriana’s age is 1/3rd of her dad’s age. If her dad is 36 years old, how old is Adriana?" vs "W’s age is X of Y’s age. If Y is Z years old, how old is W?". The complexity increases a lot.
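The jump in difficulty described here is basically the jump from arithmetic to writing a parameterized abstraction. A minimal Python sketch of the general version (the function name and interface are my own, not from the article's test):

```python
def consecutive_integers(x, y):
    """Find x consecutive integers summing to y, or None if impossible.

    The integers are n, n+1, ..., n+x-1, so their sum is
    x*n + x*(x-1)/2 = y, which gives n = (y - x*(x-1)/2) / x.
    """
    n, rem = divmod(y - x * (x - 1) // 2, x)
    if rem:
        return None
    return list(range(n, n + x))

# The concrete instance from the comment: 3 consecutive integers summing to 69.
print(consecutive_integers(3, 69))  # [22, 23, 24]
```

Solving the concrete instance only needs arithmetic; writing the function forces you to do the algebra once, symbolically.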
First, the algorithmic part of programming is a lot like doing algebra. So much so that algebra is an important part of any serious CS syllabus. Clearly "writing a function or a class" is not the skill they are testing here, but problem-solving.
Knowing how to solve simple algebraic problems engages your "problem solving" skills in a similar way as solving something by writing a program. Note that algebra is not "basic math (+,-,/,*)" like you said. Algebra, like programming, requires the ability to understand and write abstractions, and to figure out how to approach a problem.
> Also what better indicator you want than programming itself?
They clearly want to establish a correlation between something else and programming. This is interesting in itself.
Is this a joke? They care because they are selecting applicants to a programming boot camp and want to select the ones more likely to be successful. I suppose they could make all of the applicants learn to program and select the best ones, but there could be issues with that. And they are pretty clear that they are defining 'good programmer' as 'good outcome in their course'.
> A better test would be...
Prove it.
Just make a programming test. IMO it's more reliable than relying on whether someone knows algebra well. But I don't have any data to show you; it's just me thinking about it.
>Prove it.
I can't; you're right. I just said what I was thinking.
In other words, an algebra test just acted as a noisy, imprecise measurement of general mental ability.
It’s fairly common for incoming Computer Science majors to ask the question, “Why do I have to learn all this math if I just want to learn to program?” The correlation above suggests a possible answer: The ability to understand basic mathematics is likely correlated with the ability to “think algorithmically,” which is well-known to be a foundational skill for expert programmers.
If it's just a weak (as the chart suggests) correlation, it doesn't mean learning one will make you better at the other. And if you do assume causality - it can go either way. Perhaps learning math makes you better at programming. Perhaps learning programming makes you better at math.
My 2 cents... It's more complicated because there are other things involved. Perhaps it's the nerd gene that makes people who like computers also more likely to play D&D and be in band. (I was a card-carrying member.) Does one of these 5 variables cause the others, or are they all part of the same thing?
However, it is correct that educators would need to do a little more research if they want to try and apply these results to a classroom.
A much more interesting experiment would be to add the algebra test back into the end of the program and see if there is improvement and how that correlates (or doesn't) to programming competence.
I learnt to program when I was 13 and that was in the middle stream at my upper school
That is to say, when I was in school before university there was no choice for studying programming.
There was but it was offered at a private religious school/camp, not via the public schools.
There are A LOT of different programming/automation paradigms in the world, and lots of different languages: declarative, functional, OOP, imperative, logic, etc. Maybe some of these students just can't get the convoluted, mutable, and unsafe structure of a Python or Java program, but they might grok and appreciate the safety and simplicity of ML/Haskell or the flexibility of Lisp. Why would you turn them down in such a way, or force them through a path they don't really need to go through?
During my first year of a Bachelor's in Computer Science at university we had an introduction to programming in Scheme. It was a mind-expanding experience for me. I already knew how to program in C, Java, C++, and C# with a bit of PHP/JavaScript back then, and I was well ahead of most of my peers, but still I sat at the computer, trying to learn this language that was alien to me, and luckily I was humble enough to force myself through it and adapt my mind to a different paradigm. I know for certain that a lot of my friends and colleagues who already knew how to program struggled hard on that exam, and some had to re-take it the following year (which was unfortunately changed to C++, and they managed to pass without trouble) simply because they didn't think Scheme was a "real" programming language or useful at all. It was just too different from their previous experience.
What's even more interesting, I know some people from that class who had never touched a programming language before and weren't particularly strong at math. Those people are the ones I recall enjoying the course the most; they found it the easiest among all the other courses we had and passed it with excellent grades, simply because their minds were apparently better wired for that paradigm and they had no preconceptions or prejudices preventing them from learning it properly.
So my bottom line is: what makes you think that some people might not just have a "differently wired" brain that makes them think more easily in a different, non-imperative programming paradigm?
Some people can grok recursion in a functional programming language much faster and more easily than they would understand the concept of branching, variable assignment (with mutability) and non-pure functions (what we used to call "procedures" back in the day).
So much teaching material for beginning programming is tainted by the cult of mutability that I have coworkers writing for loops instead of maps or the like for relatively trivial tasks. The end result is that I have a harder time reading code, and the number of possible bugs grows.
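A trivial illustration of the kind of rewrite meant here, with the mutate-in-a-loop version next to the single-expression version (hypothetical snippet, not from the commenter's codebase):

```python
nums = [1, 2, 3, 4]

# The mutation-heavy style: build up a list by repeated appends.
squares = []
for n in nums:
    squares.append(n * n)

# The same transformation as one expression, with no visible mutation.
squares_fp = [n * n for n in nums]  # or: list(map(lambda n: n * n, nums))

assert squares == squares_fp == [1, 4, 9, 16]
```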
Only school I know that uses Scheme that much.
It's just one intro CS201 course. There might be another Scheme elective, though.
From the test questions, this isn't an "algebra test". It's a word problem decoding test. That's appropriate to programming, where you have to go from an informal specification of the problem to a formal one.
x + (x + 1) + (x + 2) = 69. Solve for x.
Is that algebra? Is that not algebra? Is it some more nebulous skill that's also correlated with programming? If they actually wanted to correlate ALGEBRA algebra with programming, I'd have expected more straightforward questions. Using word problems brings all sorts of other issues into the picture.
I know this is a socially unacceptable opinion but I think those correlations exist.
1) IQ tests are really long and expensive, so difficult to implement
2) They didn't correlate with performance as well as algebra.
For example, when x,y and y,z are perfectly correlated, we can see that corr(x,z) must be greater than 0.9 from: http://www.wolframalpha.com/input/?i=%5B%5B1%2C1%2C0.9%5D%2C...
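The constraint behind that Wolfram Alpha query (the 3x3 correlation matrix must be positive semidefinite) can be computed directly. A minimal sketch, with the function name my own:

```python
import math

def corr_xz_bounds(a, b):
    """Feasible range of corr(x, z), given corr(x, y) = a and corr(y, z) = b.

    Positive semidefiniteness of the 3x3 correlation matrix constrains the
    third correlation c to:
        a*b - sqrt((1-a^2)(1-b^2)) <= c <= a*b + sqrt((1-a^2)(1-b^2))
    """
    slack = math.sqrt((1 - a * a) * (1 - b * b))
    return a * b - slack, a * b + slack

# The case from the linked matrix: corr(x,y) = 1, corr(y,z) = 0.9.
# The interval collapses to a point, so corr(x,z) must equal 0.9 exactly.
print(corr_xz_bounds(1.0, 0.9))  # (0.9, 0.9)

# With weaker correlations the bound loosens quickly:
print(corr_xz_bounds(0.7, 0.7))  # roughly (-0.02, 1.0)
```

The second call is the point of the red/purple/blue reply just below: two strong-ish correlations with a common variable put almost no constraint on the third correlation.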
But just because those two things correlate with IQ (which is probably true, so let's just grant them as true even without evidence for the sake of argument), that does not imply that they are sufficiently strongly correlated with each other to be interesting for hiring purposes.
To see my point, consider that being red is correlated with being purple, and being blue is correlated with being purple, but being red is not correlated with being blue. (at least not without evidence).
I don't think such a hypothesis is awkward, but I doubt that high IQ is an excellent predictor of an individual's performance in tasks that are not IQ tests. As a layman when it comes to psychology, I think IQ measures some things, but given the way the brain works, no real-world task a person does is exactly like those IQ tests.
A brain is not a CPU and an IQ is not its clock frequency.
What's not clear about the term?
> 30-60% of CS college majors have failed their Introduction to Computer Science course because they simply could not learn to program.
This "You either have it or you don't" mindset is strikingly elitist. I simply don't buy the idea that there are people out there who absolutely cannot learn to program. I mean, put someone on a desert island and tell them they can't leave until they can write a program that uses a linked list, and I'm pretty sure most people would be able to get off the island eventually.
Of course, that doesn't mean that some people can learn to code more easily than others. But I'd be willing to posit that the vast majority of people could learn to code given enough time and the proper instruction.
There is a No True Scotsman argument that arrives here: "Well, if someone later became a programmer, despite failing those tests earlier, then clearly they were skilled at becoming a programmer," blah blah; but it is still an interesting phenomenon. And yes, there will be outliers, of course; and there will be people who gradually retrain their brains to change the way they think; but it is still an interesting phenomenon.
I bet we could see this in other fields, too, it's just that programming is presently the hotness.
(edit: added more)
As for elitism. What is elitist about it if that /is/ true? Oh no, Billy can't program, but he's great with cars. Sarah sucks at linked lists, but she has a better understanding of the human body than any of her peers in the ICU. And so on.
Probably just anecdotal, but I'm curious to know what others feel about spatial reasoning. I think especially with object-oriented programming it is helpful to be able to visualize all of the 'actors' in your head and how they relate/intertwine.
Spatial reasoning is a key mathematical ability for certain classes of problems (a big chunk of calculus, as you mention) but helps not at all with another big class of mathematics (set theory, for instance).
Finally, data structures may be best understood via spatial reasoning for you, but there is nothing actually spatial about data structures, so that is most likely just your own preference for modeling them.
> I don't think such a hypothesis is awkward, but I doubt that high IQ is an excellent predictor of an individual's performance in tasks that are not IQ tests. As a layman when it comes to psychology, I think IQ measures some things, but given the way the brain works, no real-world task a person does is exactly like those IQ tests.
The interesting blog post submitted here is talking about the bread and butter of "industrial and organizational psychology," namely about how to select individuals for a training program. There are three generations of published research on this topic already, and there is a huge amount of ongoing research on this topic, because organizations all over the world want to figure out how to select successful applicants when there are more applicants than places in school or work programs.
The short answer is that there is a HUGE body of research to show that the single best hiring process you can use for hiring a worker, if you want to get a worker who will perform well on the job, is to use an IQ test for hiring.[1] The long answer is that some other applicant characteristics matter too, of course, but the single best thing to look at in a job applicant is "general mental ability." Work-sample tests are also very good for hiring for specific jobs, and are grossly underused in hiring in the United States.
To the point of the interesting submitted blog post, one always has to be empirical about these issues. The people running the bootcamp so far have found data that suggests that the algebra test they have tried is a bit more revelatory than the IQ test they tried, and less expensive besides. One response to that might be to suggest a test like the Wonderlic test (an inexpensive IQ test designed for company hiring procedures) but in the end, results matter. If empirically at this bootcamp, the algebra test works better than some other selection procedure, it doesn't even really matter why it works, just that it serves the bootcamp's purpose of identifying successful students from among multiple applicants. The data set is still small. I am very glad that the blog post includes a scatterplot of the data. More bivariate data should be shown that way, in blog posts on dozens of topics.
[1] My FAQ post on company hiring procedures, which I am still revising to put on my personal website after composing it for Hacker News, provides references for this and some commentary on legal issues in hiring.
> there is a HUGE body of research to show that the single best hiring process you can use for hiring a worker
The best process, assuming you have to use a single process for every job in the world? Or the best process even when hiring for a specific role? I can understand IQ being the best choice if you had to judge poets, plumbers, town planners, golf instructors, programmers, salespeople, and warehouse workers using the same process. But surely if you're exclusively hiring for one of those roles, you'd want to test their domain-specific knowledge?
> Work-sample tests are also very good for hiring for specific jobs, and are grossly underused in hiring in the United States.
One other thought occurs to me. Would the same algebra test produce similar correlations with courses such as English and History, when adjusted for the differences in overall pass rates for those subjects?
It seems odd that something as interesting as programming isn't introduced until college. That seems like accepting students as music majors, who have never played music before. If I had my druthers, computation and simplistic programming would be part of the mainstream K-12 curriculum.
Well, there's the reason you have a high drop-out rate in CS. People don't know what it is! Computer Science is not a vocational program. Computer Science is not computer programming.
If someone had talked to these kids asking “Why do I have to learn all this math if I just want to learn to program?” and told them "You don't have to learn all this math if you just want to learn to program" before they became CS majors, then they might not have become a CS major in the first place.
Maybe, once they realize what the difference is between programming and computer science, they prefer computer science. Problem is, you cannot teach them the difference in a few weeks; it takes years to sink in.
So, what do you let them do in the meantime? Waiting is a waste of the years in which learning is easiest for them. So, do we let some grown-up decide who will likely make a good computer scientist, or do we let many more start on that trajectory and see how far they get?
I think the latter is the better choice, if we also provide smooth ways to move from one to the other.
[slightly related: I once read a teacher in a nursing school state: "when they come in, all the boys want to ride an ambulance, and all the girls want to work with kids. We have to work a bit on that in the first year"]
You might not be a great programmer, but you'll be able to do it.
Being good at number rules (school algebra) does not have much to do with being good at state rules (code algebra).
I'd say there's more correlation between functional programming and school algebra, if anything.
What they _are_ saying is that the test is an efficient, effective way to do it, with better ROI than IQ tests.
It doesn't actually matter _why_ that's the case, though Animats' suggestion seems plausible: "From the test questions, this isn't an "algebra test". It's a word problem decoding test. That's appropriate to programming, where you have to go from an informal specification of the problem to a formal one."
I think that people would do much better in intro CS classes if we could get rid of some of the hype around programming and tech in general, and treat it like other courses. That way, far more of the students would be genuinely interested in the subject.
The main thing I fall back on is Heron's Problem. The classic algebraic/analytical solution is to find the local minimum of a function. Which works; the method isn't flashy, but it is completely functional. It requires no insight, just mentally vomiting something you learned in high school/college.
The geometric solution is far simpler, and offers insights into the foundations of trigonometry. It's really so simple and elegant that you feel like a brutish idiot doing a long-form analysis.
I offer this because tools are merely tools, and math supplies many tools. You wouldn't hire a finish carpenter just based on his/her ability to swing a sledgehammer, would you? I mean, s/he will have to swing a hammer, but will it have to be a sledgehammer? Will they have to use a nail gun? A screwdriver? I hope they are proficient with all of these tools, and since I'm outsourcing my time to them, I hope they know which tool to use at which time.
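For anyone who hasn't seen it: Heron's problem asks for the point P on a line minimizing |AP| + |PB|, for points A and B on the same side of the line. The geometric trick is to reflect B across the line; the best P is where segment AB' crosses it. A minimal numeric sketch (coordinates and function name are my own, with the mirror line taken as the x-axis):

```python
def heron_point(ax, ay, bx, by):
    """x-coordinate of the point P = (px, 0) on the x-axis minimizing
    |AP| + |PB|, for A = (ax, ay) and B = (bx, by) with ay, by > 0.

    Reflect B to B' = (bx, -by); P is where segment A-B' crosses y = 0.
    """
    t = ay / (ay + by)          # fraction of the way from A to B' at y = 0
    return ax + t * (bx - ax)

# Symmetric case A = (0, 3), B = (6, 3): the answer is the midpoint, x = 3.
print(heron_point(0, 3, 6, 3))  # 3.0
```

One line of geometry, versus setting up f(x) = sqrt((x - ax)^2 + ay^2) + sqrt((bx - x)^2 + by^2) and finding where f'(x) = 0, which is the contrast being drawn here.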
They should follow the lead of those who figure out earlier that they're not cut out for programming: embellish the CV with fake studies, and go straight into the workforce as a programmer. Or they could get transferred into programming from some user department by pitching to bring a "valuable user perspective" to IT. Or grease up some IT manager after hours who'll bring them in as a contractor with "special skills not readily available in the labor market" to bypass the usual HR checks and aptitude filters. Or if they're not of the same ethnicity as the HR personnel, send in a double to sit the aptitude test for them, knowing HR staff don't check applicant IDs too closely, because they know no one's going to make it through to an IT interview if they do. Most HR staff and IT managers just want the staff count up; they don't care whether new hires are productive or detrimental to the projects. If they have staff, they have people to blame.
Both forms of the hypothesis are pretty easy to refute and have already been refuted. See http://code.org and http://madewithcode.com for example. I just taught 2nd graders programming yesterday who don't even know how to divide, let alone any algebra.
And of course, there's always the radical idea that maybe you could actually teach them some algebra skills in the context of programming tasks. Just like some calculus students may need some remedial instruction and support.
Luckily, there's a whole field of research with several articles on this very topic. The field is called computer science education. See SIGCSE, for example: http://www.sigcse.org/
In fact, the field has already researched and debated this issue before, as well. There was a controversial article called "The camel has two humps" in 2006 that made the same claim as this post that a simple aptitude test could predict whether someone could learn how to program or not. The article (which was never even officially published) was later retracted: http://retractionwatch.com/2014/07/18/the-camel-doesnt-have-...
We are focused on making people professional, employable programmers. While I agree that everyone can play with scratch or build a basic webpage, that's different than the programming skills required to be employed in most dev jobs today.
(Codeup CEO here.)
> In both genders, performance on Mathematics was found to be the best predictor in programming ability, followed by performance on a spatial test.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.110...
http://uhlbcmcbx4a4vt1d.zippykid.netdna-cdn.com/wp-content/u...
Computer science and development in school is very different from computer science and development in the real world. You don't get to know what you have to know before you know it - that information doesn't exist in the ether of the collective consciousness anywhere. You have to draw it out from yourself.
You might not even know the words for the concepts you have to create in the real world - because they aren't defined, and it's different from pattern matching, finding invariants, optimizing around invariants, simplifying semantically and forming relational constructions. It's different from probabilistic modelling and inference. It's different from a machine doing all those things and a human reasoning on top of it, turtles all the way down (or up, rather).
If you really want to teach people, you have to be able to believe that every person has the capacity to exceed their boundaries and even perhaps demonstrate that you can exceed your own.
This is really philosophical at this point, but don't make the mistakes I've seen tons of educational institutions make. Don't define your students before they learn how to define themselves. Coding at its core is a creative endeavor. If you want to build robots, code. If you want to build students, learn.
In that light, the findings become less surprising (still interesting though, since we can test math-word-problem-solving ability quite easily with pen and paper). Could this mean my KhanAcademy math achievements are a better hiring signal than my GitHub?
In terms of hiring, it certainly makes more sense for candidates to pass a 1h test rather than expect them to "prove themselves" by solving a week-long coding challenge...
Can we please stop calling them ninja/rockstar/etc programmers and simply use "professional programmers" or "master developers"? This language is cute when you're 16, I guess, but personally I don't want to be compared to a medieval Asian hitman for hire.
Many students coming into an intro to CS class are already programming whizzes, and then there are others who have no experience with programming at all. So, already the instructor is in an impossible situation. Do you bore the promising kids with the major head start, risking losing future good students? Or do you ramp the class up to their speed and risk losing the kids who didn't come in already knowing the material? Obviously, most departments and instructors will, wittingly or not, choose the latter.
Reality is that most programming is taking bits out of one bucket, combining them, and dropping them into some other bucket. It's the digital equivalent of shovel work.
What kind of "programming"?
What "ability"?
Ability to write space-O(my_god * n) sorting algorithms? Ability to duct-tape queries to an ill-documented SOAP webservice and upsert the return into a Your-Boss-Normal-Form database schema? Ability to brutally optimize the runtime of a really hairy numerical analysis algorithm? Ability to design a centralized data pipeline architecture and lead 12 hackers during the implementation and migration? Ability to find the off-the-shelf OSS project that solves the first 50% of the problem instead?
I was watching the news the other day on TV, which is something I almost never do, and there was a segment about the educational value of tablets, smartphones, and laptops for teenagers and children. The value of the freely available information on the internet was discussed, along with the quality of free and... not free educational and scientific applications, and the ill effects of such devices on people's attention span, and... That's it.
Example devices included many, many Apple products, a few off-the-shelf Android devices. A Windows machine running MS Word.
Same ideology regarding technology in education: locked-down workstations with user-friendly applications, internet access which blocks every port except for 80 and 443.
A relentless and sustained effort to erase everything but the topmost layer of the IT stack. Locked-down devices. People come to me asking me to remove viruses, to speed up their old XP machines fraught with annoyware. No Windows license, don't want to pay for one, don't want Linux. Computers run on magic, right? How can you know all that programming stuff and not be able to remove the bad magic from my computer? Computer Science students who can't find the slash on their keyboards. People who see computers as a monolithic entity rather than a brittle but transparent stack of conceptually distinct layers. The new generation of systems administrators, who can't scp a tar to the other end of the lab.
A teacher spending three hours explaining red-black trees to a class scrolling through 9gag; students, diploma in hand, who aren't entirely sure what the difference is between an array and a hash, who are not sure what happens when one puts an array in another array, in PHP, after 120 hours theoretically spent writing PHP.
Computer Science students convinced that Linux is 100% not worth learning, in any way, because nobody uses it.
This is how the knowledge economy dies.
2) what the exact fuck is "programming aptitude" anyway? The article defines it as "having good grades in a CS program", which is part of why I needed to rant against CS education.
I learned Linux, and used it all throughout school. We never touched Windows or learned about how to develop on it.
I am now a full-time, happily-embedded-in-the-Windows-ecosystem C# developer.
I think fundamentals + balance of education on implementations is a more practical approach for a CS curriculum rather than "OMG LINUX IS TEH BEST AND ONLY".
The numbers were arbitrary. Just what felt easier to understand for the staff.
Also, I think those are some pretty weak quantitative results, and good exhibits for the argument against p-values.
It may be controversial but running this place has shown me that not everyone can program competently or is cut out to be a professional programmer. Anyone who says "everyone can code" is mistaken.
A lot of people in the field, myself included, get a hint of intuition that some people either have it or they don't.
If we could get at (or closer to) some basic principle, we could both help people focus on their strengths (by identifying those who show this trait, and being transparent with ones who don't).. and hopefully learn how to cultivate the underlying trait itself.
Wouldn't it be funny/cute/useful if just practicing a bunch of algebra word problems made proficient programmers even better?
https://www.youtube.com/watch?v=5OL1RqHrZQ8
(In short: p-values are pretty much useless as an indicator of "this experiment can be reproduced.") Not that this necessarily refutes the original article, I just thought it might be relevant to the discussion.
I am REALLY terrible at algebra (to the extent that I am borderline LD in math), and I have completed a degree in CS and hold a steady job as a software engineer. And I am definitely not the only one. Ask a room full of developers how many are bad at math, and I guarantee the results will surprise you. On the other hand, I knew plenty of folks in school who were math majors so good at algebra they could do full-page derivations piss drunk without a hitch. And yet, they would take an intro Java course and be totally lost. If they couldn't handle an intro Java class, how do you think they would have done with something as algorithmic as an assembly language, a functional language, or understanding nuts and bolts like Turing machines and automata?
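For what it's worth, the automata mentioned here are tiny, concrete objects once written down. A hypothetical two-state DFA, sketched in Python, accepting binary strings with an even number of 1s:

```python
def accepts_even_ones(s):
    """Two-state DFA: state 0 = even number of 1s seen so far, state 1 = odd."""
    state = 0
    for ch in s:
        if ch == "1":
            state = 1 - state  # reading a 1 flips the parity state
    return state == 0          # accept iff we end in the even state

assert accepts_even_ones("1010")    # two 1s: accepted
assert not accepts_even_ones("10")  # one 1: rejected
assert accepts_even_ones("")        # zero 1s: accepted
```

Whether this reads as "math" or as "programming" arguably depends on the reader, which is rather the point of the thread.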
Algorithmic thinking is definitely an important part of programming, but it is just that, only a part. And quite a few types of development don't emphasize that sort of thinking. I could perhaps see this being a PARTIAL solution for areas where one would be doing a lot of functional programming, but the vast majority of the job market these days is still OOP.
This could really quash the job prospects of programmers who are perfectly capable of writing quality code but are poor with algebra.
Also, nobody is claiming that the correlation is perfect. You can perfectly well see outliers in his graphs, but outliers don't disprove the correlation.
    if a then
        return b
    else
        return c

    ===

    return (if a then b else c)
I.e., “return” is distributive over the branches of the “if” expression, so you can factor it out. When you see “bad” programmers redundantly specifying Boolean expressions like “x == true” instead of “x”, it’s because they haven’t internalised the fact that “if” takes any algebraic expression denoting a Boolean, not just a special sort of expression with “&&” and “||” and “==” and “<”.
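In a language with conditional expressions the identity is directly checkable. A small Python sketch (Python's `b if a else c` plays the role of the `if` expression above):

```python
# "return" pushed into both branches of the conditional:
def pick_verbose(a, b, c):
    if a:
        return b
    else:
        return c

# "return" factored out, with the conditional as an expression:
def pick(a, b, c):
    return b if a else c

assert pick_verbose(True, 1, 2) == pick(True, 1, 2) == 1
assert pick_verbose(False, 1, 2) == pick(False, 1, 2) == 2

# And the "x == true" redundancy: `if flag == True:` says nothing
# that `if flag:` doesn't already say for a Boolean flag.
```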
Beginning programmers often struggle with learning the elements available in a language and how they can be composed, and that is exactly what an algebra is. I would be very surprised if an aptitude for algebra did not correlate with an aptitude for programming.
I would argue that your last several examples (functional languages on) are more likely to be understood by people really good at math than Java is, and that Java isn't especially predictive in that regard.
Functional programming languages are closer in their structure to the way that math is structured than Java itself is, while Turing machines and automata are pretty much just outright math. (Especially when you start talking about the computational power of automata versus Turing machines and the equivalence between some kinds of grammars and automata.)
I expect you listed these because they're "hard" computer science, but I'd argue that you think they're "hard" computer science because they're mathy and not your forte.
I, for one, had much less trouble with Turing machines, automata, formal languages, computability, etc. than I did with learning Java.
I also found Turing machines and automata much easier than conventional math classes. The point I was making was not that they are necessarily harder, but rather that they require a different type of mathematical thinking than your typical algebra-based math. I would wager the same is true for discrete math.
As for functional languages, I admittedly have never programmed in one and perhaps spoke out of personal ignorance. I have, however, heard from colleagues that back my original statement. It's basically hearsay, so take it or leave it.
I don't think it's OOP as you suggest; I saw the same correlation in a college C++ class. Half the class had easy A's, the rest had really hard C's. I've never seen such a bifurcation in any other college course; usually it's a more gradual/continuous distribution.