I worked on polyhedra for a summer, writing code that could unroll a polyhedral model into its 2D net, find the volume, count the faces, and all kinds of stuff. I met a bunch of interesting people and it was a blast.
I also fell asleep at my keyboard more than once. It was a beautiful summer, biking to work and working with what I still think is one of the best languages ever, if not the best.
Here's why all this is relevant - I came back to real life to study CompSci on these old Sparc machines. And it was like, here's the power button. What's an object in Java? What's a compiler? All reasonable stuff.
But: Wolfram Research and Mathematica had, in a sense, ruined my undergraduate life before it started. Why were we using all these bizarre tools? Can't we do this a million times faster? Why are we learning all these bizarre integrals?
It was similar to being denied graphing calculators in A-Level Mathematics (in the UK, think high school). I get it - we need to learn 'the basics' and survive without tools to some degree. But, it would have been nice to use them in some contexts and not just deny their existence.
There's an anecdote, I think about Milton Friedman, who was shown people building a dam with shovels rather than digging machines, to keep people employed in some God-forsaken country. He asked: why don't you use spoons instead? Then even more people would be employed.
Mathematica and Alpha are wonderful tools, and I highly recommend applying for an internship if you're of the right age or whatever the requirements are today.
This is the wrong approach. There's no way you wouldn't understand math much better with these tools.
I'm hoping that in the future, math will be less about equations and symbols and more about graphing and being able to move around in the spaces described by the equations.
I would draw an analogy with a compiler. After using it for some time, your brain will take on the shape of the compiler and you'll write correct (lol is it ever tho) code without even having to compile it.
Clearly we are more productive with the tools. However it is very, very easy for people to see the tools as magic. At some point we need to actually understand what it is that we are doing. For which those equations and symbols are essential.
Yes, the computer can draw a pretty picture. Pretty pictures are helpful in conveying information. But they are a horrible way to understand inherently complex topics.
For example, pictures are essential for conveying basic concepts in multi-variable calculus. But you won't make much sense of the topic until you actually understand the three basic mathematical representations of a surface embedded in a higher-dimensional space (function, level surface, and parametrized coordinates), and how each connects to the tangent to the surface at a point (whether that tangent is a line, a plane, or something higher-dimensional). And you need to understand this in an n-dimensional way, because that comes up a lot.
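For concreteness (this example is mine, not the original commenter's), a unit sphere in R^3 can be written in all three forms:

```latex
% Graph of a function (upper hemisphere only):
z = f(x, y) = \sqrt{1 - x^2 - y^2}

% Level surface of F : \mathbb{R}^3 \to \mathbb{R}:
F(x, y, z) = x^2 + y^2 + z^2 = 1

% Parametrized coordinates (spherical angles \theta, \varphi):
(x, y, z) = (\sin\varphi\cos\theta,\; \sin\varphi\sin\theta,\; \cos\varphi)
```

The tangent plane at a point falls out differently in each form: as the linearization of f, as the plane normal to the gradient \nabla F, and as the span of the two partial-derivative vectors of the parametrization.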
So no, we won't lose equations and symbols. Ever. They are essential, and there is no possibility of real understanding without them.
If you don't need to "understand" the math, why risk opportunity cost learning it?
> I would draw an analogy with a compiler
Staying "high level" is a good thing for some programmers. If every web dev had to dig into the low-level workings of the browser, a lot less would get done.
Sorry to hear about the rest of your undergrad experience. I was really fortunate that I never had a professor complain about me using Mathematica for everything. Even just typesetting math in Mathematica instead of LaTeX was a huge benefit for me.
One of my favorite classes was Computational Algebra taught by Dana Scott. He did the entire class in Mathematica. Each lecture was just him walking through a notebook and the problem sets were all about writing Mathematica code to solve interesting problems. I think I still have them somewhere...
Really? Mathematica is amazing and uncontested for symbolic algebra, but writing anything more than a notebook/paper is a nightmare. The built-in stuff is good, but functions you have to define yourself instantly become an incomprehensible mess of parentheses. That's leaving aside the fact that it's a proprietary language, and the many flaws of its eponymous founder.
I long for the day SymPy or similar gets good enough that we can dispense with it.
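For what it's worth, SymPy already covers a fair amount of classroom ground today. A minimal sketch (my example, assuming SymPy is installed, not something from the original comment):

```python
# A hedged sketch: SymPy handles much of what a classroom needs,
# even if it still lags Mathematica on harder integrals and
# special functions.
import sympy as sp

x = sp.symbols('x')

# Symbolic integration: an antiderivative of e^x * sin(x)
expr = sp.integrate(sp.sin(x) * sp.exp(x), x)

# Symbolic equation solving: roots of x^2 - 2 = 0
roots = sp.solve(x**2 - 2, x)
```

Whether this is "good enough" depends heavily on how far past first-year calculus your problems go.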
And then she chose a question from the book, had one of us start typing, and she started at the board solving the same thing.
She finished first. Not by much, and obviously the calculator is the faster choice more often, but she finished first.
Though, as a story, the conclusion he draws is pretty self-congratulatory and bothers me a bit. The substrate on which you implement an algorithm like arithmetic doesn't really speak to whether you "know numbers." It's like the high schooler thinking being very good at computing integrals makes you good at math.
This is more of an interface problem I believe.
then again, they wouldn't have let me in that class in the first place.
This is obviously false, and I don't really understand the point of saying this.
Plus, the benefit of using a computer is that computation is effortless, letting you use more of your brainpower on actually interesting problems rather than something that is easily automated.
My high school classes in Victoria, Australia, had tests and exams both with and without TI CAS (graphic calculator that also does algebra and calculus with a pretty screen) and I agree that it's pretty nice. Interestingly, I think our curricula were based on the UK's originally.
I think it's similar to learning all the simpler math. You're taught to add large numbers on paper but in the end doing it with a calculator saves you a lot of time and effort. Learning to use a CAS or Mathematica or the like seems essential if you're going to be working in a field that uses calculus for practical things like engineering, medicine or finance.
So, to that end I agree here. They are absolutely wonderful tools if you know what you're applying them for. However, they can be pretty bad if you're trying to use them to cheat (just like everything else).
reminds me of one of the arguments against minimum wage hike theory: if raising the minimum wage makes workers better off and has absolutely no drawbacks for anyone, why raise it to a mere $15/hour? why don't we do some real good and crank it all the way up to $100/hour?
Seriously, though, to answer the question: there's a point where a raise in the minimum wage will stop being helpful, and everyone who supports the minimum wage understands this.
Students just need to be aware that they should learn from solutions provided by Wolfram Alpha (there's nothing wrong with that); otherwise they're going to flunk the closed-book exams anyway.
Just make the tests challenging: the ones who actually did their homework will pass. The ones who didn't won't.
Wolfram Alpha helped me many times. Sometimes you're simply stuck on a problem and no one is there to help. But like anything, you can use it to cheat and not do work, or you can use it to help you learn more.
It may be difficult trying to teach these kids how to perform the math itself when they can just look up the homework answers, but ultimately they are failing themselves when it comes down to needing a fundamental understanding of the process itself.
This reminds me more of teachers scoffing at calculators being a crutch. It comes down to the students' willingness to learn, not how to thwart cheating on homework.
One of the easiest ways to inflate grades without explicitly lowering standards is to decrease the proportion of the final grade that depends on the midterm and final. Cut out the midterm for "more instruction time" and leave a 20% final. Homework is now worth an absurd 80% of the grade... that's no good. Throw in a project, participation, or auto-graded "e-labs" with infinite attempts. Suddenly it's really difficult to get anything less than a B unless you really just don't give a crap. But also kinda difficult to actually learn because So. Much. Busy. Work. And, on top of everything else, we're punishing the one student who's not cheating.
Any time I see a lower division syllabus where closed-book exams are worth less than 40% of the grade, I'm instantly suspicious.
> but ultimately they are failing themselves when it comes down to needing a fundamental understanding of the process itself.
Yup.
> It comes down to the students' willingness to learn, not how to thwart cheating on homework.
In fact, you can leverage the calculator or WA to teach the material at greater depth. No longer need lots of practice with trig identities or u substitution? Great. Maybe we can write a few proofs instead, or use the time to work through a large case study.
If someone were really serious about abusing the service they could pull out a C in the class even with hard failures of the tests.
I'm not sure how much to read into this, so forgive me if I'm misunderstanding the tone; but calling an F after not doing any of the work oneself a "surprise F" seems a lot like calling it a "surprise loss" when one hasn't shown up for any of the team practices. (If you were just observing that students are surprised by such F's, no matter how often it's pointed out to them, and no matter how often they've met such outcomes before, then I am forced reluctantly to agree.)
Source: did this, got a C in freshman calc.
Some students will inevitably have access to tutors or parents who can give them the full answer. With Wolfram|Alpha, all students have access to this.
What this really shows is that homework isn't the best tool for gauging student prowess.
This is not even remotely true. I think you seriously under-estimate the value of parents/tutors.
Students with tutors/parents will not just get the right answer. They will also be led to the right answer, over and over, and told what they need to practice in order to improve. Explanations will be tailored to their learning style. So they get direct help on assignments, but they also get regular indications about how to improve their performance in scenarios where they don't have that help.
Wolfram Alpha just gives the answer.
If homework is 90% of the grade, maybe it levels the playing field for letter grades in the course... until a few years down the road, when the student who didn't actually learn the material is screwed and has no way of catching up.
If homework is a more reasonable 20-50% of the grade, then the reckoning comes at midterm/final time.
> What this really shows is that homework isn't the best tool for gauging student prowess
Well, that's certainly true.
The amount of homework students get in the US is astounding. It's not always productive either (more often than not, it's not). And it seems like there's no pushback against it.
Poor-man's (student) tutor.
A machine usually doesn't help if you don't understand the concepts. But if you're stuck on a problem because one step along the way was off, I can see how Wolfram Alpha can help.
I would bet the vast majority of students are using Wolfram Alpha to complete homework.
Homework problems were oftentimes deliberately difficult, and attending tutoring/office hours was almost certainly necessary for most students to master the material.
I got my hands on an instructor's manual of the textbook, and it was a tremendous boon for my mastery of the topics. By having immediate access to the solutions of difficult problems, I was able to comprehend how to approach problems of that type, and therefore could solve more difficult but similar examples in the future. The cycle of attempt/fail/check-solution/repeat was really effective. Waiting for the instructor's office hours or the availability of tutors would have made this process, if not impossible, incredibly inefficient.
Do any math educators have any insight into this? Is this math department clinging to an antiquated curriculum in which faculty act as gatekeepers to knowledge? Is there a good reason for their distaste for 'going around' them?
Math professor here. I am most certainly happy if my students get help outside the department, and I think my attitude is quite typical.
We can be a little bit wary of some kinds of help. Too much math teaching consists of "If you see a problem that looks exactly like X, here are the steps you should memorize to solve it."
But we don't care per se if you can solve problems of the shape X, Y, or Z. We want you to develop your skills to the point that all of these lie naturally within your skill set, that you could do them even if you've never seen one exactly like that before. As such, some kinds of tutoring can be counterproductive.
But most aren't. In my opinion your professors' attitude was quite foolish. Kudos to you for seizing the initiative and figuring out for yourself how to best learn the material.
A significant amount of math testing is basically checking whether you've memorized some theorem (and can then apply it), so is that surprising?
My way of saying it is that it's great if you get help from any source you can, but it's way too easy to get something that seems helpful (because it makes short-term goals easier to achieve) while being damaging in the long run. It's fantastic if you had the personal discipline to use a solution manual to deepen your understanding, but there are lots of students who will use the solution manual as a copybook—the material in it going, as the saying goes, from page to pen without passing through brain on the way. Since I, as a teacher, don't have a ready way at the beginning of the semester to distinguish the students with your discipline from those without, I'm just going to discourage everyone from using solution manuals—but, as long as your homework solutions aren't copied from it, I don't care much if you go against that advice.
I understand the argument that math is important, but the way it's taught in America, even at the AP level, is criminal. It boils down to "if you see this pattern, apply these steps" without any effort at going beyond. We teach "how" but not "why", which I think is a common refrain when talking about the American education system, or any test-driven education system. Math is a means, not an end.
The best part is that people in the school administration might do something like blacklisting Wolfram Alpha on computers in the school library and feel like they've dealt successfully with the problem.
I discovered that the students really don't understand the concepts and more importantly don't want to. They just want to mimic problem types. They don't want to understand the why. Plug and chug is their true desire.
Using the computer exposes right away if a student understands the concepts. You can't get started on a problem if you don't know what to tell the computer to do or why. I went back to the old paradigm. A few students got it but most never understood.
The problem is that you are forcing people who don't care about it to learn it. Ofc they'll take shortcuts.
> Plug and chug is their true desire.
I mean, most tests are very plug and chug so it's no surprise.
Also note that it might be unreasonable to expect students who never used these tools to use them for the right purpose. Mathematica is a complex tool but I can't imagine that much class time is dedicated to the tool itself.
As an employer, if I hire you to do a task, let's say build a toaster, I couldn't care less about how you achieved that - as long as I've got my toaster and it can help me grow my toast-making business I'm happy. Education should work the same.
Edit: but to be honest if you needed to give the result Z for a set of inputs X and Y without worrying about invalid/malicious input, then a Stack Exchange copy/paste is totally fine by me.
What the heck kind of article is this...? I can’t read it seriously.
WA uses NLP for some things, but not for solving equations.
I wasn't even that dev-savvy (and still am not), but it was super easy and was a huge time-saver on timed tests.
I obviously had to learn the concept in order to program the equation, but once I did - why should I have to go through everything manually every single time?
Since we developed the tools, we obviously had an understanding of the concepts and we allowed to use them for homework, and tests and such.
And it kept us from being bored and disruptive, so it worked out for everyone.
Dry lectures aren't the best way to learn something: hands-on work is. Listening to a teacher read a PowerPoint is a waste of time, because you could just read (or watch a video) on your own.
But having an expert on the material standing over your shoulder, helping you through tough spots while you're working through it? That's valuable.
So by the end of the course, I would just figure out the integral (or rather, the triple integral), plug it into Wolfram, plug the outputted answer into Webassign, and if the answer was wrong I'd backtrack until eventually the answer that Wolfram outputs is correct. At which point I'd solve the integral by hand.
---
There was one time when the homework problem asked me to find the intercept of this obscenely complex trigonometric equation. I tried for a solid half hour and couldn't solve it, so I went to the math help room to ask the TAs for help. I ended up stumping three of the TAs. A few days later, I went back to see if they had figured it out. Turns out they hadn't. They said we should just use Wolfram to get the answer.
Setting the tasks at a high enough level beats any cheating attempt.
What we should discourage is students using Wolfram Alpha to just get the answers without any initial work. That robs students of the chance to learn and would definitely be cheating.
I can see some people saying that is cheating, but to me it's a learning tool. The class was 90% tests/quizzes; homework was a participation grade. Meaning a tool like that gives no one an advantage, unless they are using it during the test.
Good examiners will ask students questions that make them think about why and how, not what the answer is. These often don't even need algebra or arithmetic because it assumes students know the equations and instead goes a level deeper to test understanding of why those equations are the way they are.
I had the gamut in uni. I found that engineering more often successfully tested understanding as I describe above. Some didn't; my thermo was a case of 'learn how the equations work, then read the right graphs and go' but particularly some of my fluid and aerospace courses were great at asking questions that really tested deep understanding of the theories.
One good example of this that I came across more recently is some of the edX courses that used to exist featuring Walter Lewin (before his sexual harassment came to light). He was very successfully able to question his students on the why and how, not just the what. This actually proved even more important in the MOOC environment, where you can't as tightly control the environment in which students undertake examination.
It's hard, it requires good lecturers really spending a bit of time devising questions as well as supporting their tutors as they teach the students, but it's possible.
1) applications rest on the shoulders of giants, it isn't efficient to learn too much more than you need to. If you only need to "understand" at a high level, you should just use tools that "do it for you" at lower levels - It's just a shame there are not better FLOSS competitors to Mathematica.
2) It should be clear what the "understanding dependencies" are - i.e. when some knowledge of the foundations can inform the higher levels, and when they don't. If I understand what 'sorted' means, I don't need to know the details of any specific sorting implementation to use it.
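A tiny Python illustration of that "understanding dependencies" point (my sketch, not the commenter's):

```python
# Using sorted() only requires knowing its contract: it returns a
# new list, is stable, and accepts a key function. You don't need
# to know the implementation (Timsort) to use it correctly.
records = [("alice", 3), ("bob", 1), ("carol", 2)]
by_score = sorted(records, key=lambda r: r[1])
print(by_score)  # [('bob', 1), ('carol', 2), ('alice', 3)]
```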
3) The way mathematics is split up into a million small, set-theoretically-abstract lemmas etc makes it so much harder to understand. It makes even familiar concepts hard.
Determining whether a student knows this is going to take a bit of work (particularly if they lack a formal way to specify it e.g. Pi notation or even a programming language), but we might approximate it by instead asking "What is the factorial of 5?" Now obviously this is not a perfect measure of what we're actually looking for even in the absence of calculators (e.g. someone might memorize 5! = 120), but it's easy to evaluate and is probably a decent proxy in the absence of calculators.
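As a sketch of the concept/evaluation split being described (my example, in Python): stating the recurrence specifies the concept; evaluating it for a particular input is the mechanical, "calculator" part.

```python
# The recurrence n! = n * (n-1)!, with 0! = 1 as the base case,
# is the conceptual content; factorial(5) is mere evaluation.
def factorial(n: int) -> int:
    """Compute n! from its defining recurrence."""
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```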
Obviously, I learned a lot from having it broken down like that so I wouldn't be as dependent on the tool now, but at the time, it was a huge learning aid.
Maybe when you ask general questions it uses an AI, but with functions and maths I think it only uses Mathematica.
How about challenging students with problems that are difficult even when modern technology is used to its fullest extent? Or teaching students how to build tools that solve their homework?
This would give them not only a thorough understanding of the problem they are solving, but teach them a very valuable life skill of finding ways to automate your work, and finding ways to package your expertise into software that can be run by a person without your expertise.
This would be amazing!
I'll limit myself to Math since that's the topic of this article:
Calculus sequence: CS 1 is not a pre-req, and there's not enough time to teach both CS 1 and Calc 1/2/3 in a single course. "Implement it" works well for derivatives but not integrals: you're not gonna teach Risch, and implementing integration tricks isn't particularly insightful IMO. The cost/benefit ratio explodes in Calc 3, where the physical intuitions become as important as the calculations.
Everything past that is proof-based and now you're kind of in "your homework is an open research problem in combining NLP with theorem proving" territory. Maybe with the exception of particularly bad Linear Algebra courses and a bit of the early stuff in Algebra.
From a "pragmatic skills" perspective, this approach is still highly suspect. E.g. no one's going to invent their way to Risch by implementing integration tricks.
Point is, every field teaches useful life skills / knowledge, and programming gets in the way as often or more often than it helps.
I encourage my students to do this so-called cheating, because it's what people in the real world do: people don't need to memorize a lot of topics, and people use calculators, computers, and Excel. Sheesh with the old and rusty model of learning.
It's a true problem, but the consequences are not visible yet.
It doesn't include the databases of structured information or the NLP-like interface though.
It's much easier to correct known unknowns than unknown unknowns.
The best advice I was ever given as a student was "Pretend like you'll get an automatic A on your report card and treat your grades as a feedback mechanism for figuring out what you do and do not understand. Just focus on learning." As a simple corollary, cheating is kinda silly.
This assumes you'll get any kind of personalized feedback and instruction in a lower division maths course. Good luck with that at a public university.
Er... which is a good thing, surely?
So many things (like math answers) are made so available with no effort.
There is of course a level where you cannot just copy solutions, but those tend to be badly covered by "facts-centered" written exams anyway. So why not build your exams around the reality that modern tools exist and will be used, instead of testing as if the world had not changed since the teachers left school?