For what it gets correct, it is the replacement for everything. Thought itself can be outsourced. Education's purpose isn't simply to have bodies in seats serving as proxies to a machine.
And if LLMs could fix the hallucination problem, they would be better positioned to replace education itself than to serve as a tool for the student.
I know these were big in the humanities, and where I came from, they were sometimes worth 30%-40% of the final grade.
This seems like setting the kids up for a rude awakening... and probably more mental health problems when they fall behind peers who had limited access to GPT and had to do things the old-fashioned way...
Just my opinion...
I am from Canada, though, so that may make a difference.
We had these awful blue-lined booklets where you put your student # on the front and then just wrote for two hours straight.
You waited about 3-4 weeks to get your mark posted online.
I'm pretty sure it's still done that way today here...
My hand hurts just thinking about it...
> The students.
> Embracing innovation and adapting to modern educational tools can lead to more empowered and capable students, ready to face the challenges of the future.
Bullshit. Go train your own ChatGPT with only AI-generated content; see how well that performs at real-world tasks.
Calculators in the 80s weren't confidently wrong, and they didn't deliver their answers with a persuasive tone. Furthermore, they didn't make mental math obsolete: we all have calculators in our pockets, but hardly anyone pulls out a phone to count change or ballpark a tip. Banning ChatGPT amounts to the same expectation as trusting your students not to plagiarize an answer from another student or an encyclopedia.
There are also videos of specific calculator errors:
https://www.youtube.com/watch?v=VqUVFylZpig
https://www.youtube.com/watch?v=Lrf55hUDs1g
https://www.youtube.com/watch?v=dkqqt9bVeNw
There are so many it isn't funny.
There were also two times in the 90s when Windows Calculator was confidently wrong. Once it was because of a bug in Microsoft's code, and the other time because of a flaw in Intel's floating-point unit (the 1994 Pentium FDIV bug).
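For anyone curious, the Intel flaw is easy to illustrate with the classic test division (Thomas Nicely's 1994 example). A sketch, assuming a modern, non-flawed FPU:

```python
# Pentium FDIV bug demo: on an affected chip, this division came back
# as roughly 1.333739..., wrong in the fifth decimal place.
# Any correct FPU (including everything made since) returns ~1.333820449.
correct = 4195835 / 3145727
print(f"{correct:.9f}")  # ~1.333820449 on a correct FPU
```

The point being: that was a deterministic hardware defect, reproducible with the same inputs every time, which is exactly what made it documentable and patchable.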
And then there are the cases of intentional errors that enable teachers to tell if a student used a calculator instead of working out problems on their own.
ChatGPT solves an instruction-tuned scenario given a set of textual constraints and similar assumptions. So far so good, right? Except English isn't math, and our tools for learning them are markedly different. Being able to coax the correct answer out of ChatGPT is no more impressive than faithfully plagiarizing the correct answer from a textbook.
> the cases of intentional errors that enable teachers to tell if a student used a calculator
ChatGPT responses are expressive enough that any non-trivial response (e.g. anything longer than a sentence) could be enough to tell whether a student is using a chatbot or not.
I don't think this is a fair comparison. Those calculator errors were nowhere near as common as the errors LLMs make. They're also bugs, and can be documented as such: predictable behavior. LLMs are unpredictable; you can't document the wrong answers.