Academia and education desperately want this software to work! As a result, selling them something that doesn't work is going to be very profitable.
The most obvious problem with this class of software is how easy it would be to defeat if the students could access it themselves: generate some text, run it through the detector, then fiddle with it (by manually tweaking it or by prompting the AI to "reword this to be less perfect") until it passes.
Which means these tools need to not be openly available... which makes them much harder to honestly test and evaluate, making it even easier to sell something that doesn't actually work.
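The evasion loop described above can be sketched in a few lines. The "detector" here is a hypothetical stand-in (a crude keyword check); real detectors are opaque commercial services, which is exactly what makes this loop so easy for a student to run against them. All names are mine, invented for illustration.

```python
import re

# Hypothetical telltale phrases our toy detector looks for.
AI_TELLS = ("in conclusion", "as an ai language model", "delve")

def toy_detector(text: str) -> bool:
    """Flags text as 'AI-generated' if it contains any telltale phrase."""
    lowered = text.lower()
    return any(tell in lowered for tell in AI_TELLS)

def evade(text: str, rewrites: dict[str, str], max_rounds: int = 10) -> str:
    """Reword flagged phrases until the detector passes, mimicking a
    student iterating on the output ('reword this to be less perfect')."""
    for _ in range(max_rounds):
        if not toy_detector(text):
            break
        for tell, replacement in rewrites.items():
            text = re.sub(tell, replacement, text, flags=re.IGNORECASE)
    return text

essay = "In conclusion, we must delve deeper into the archives."
reworded = evade(essay, {"in conclusion": "To sum up", "delve": "dig"})
```

Against a real detector the "rewrites" step is just re-prompting the model, but the feedback loop is the same: as long as the student can query the detector, they can iterate until it passes.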
But... I don't think this site is particularly convincing right now. It has spelling mistakes (which at least help demonstrate AI probably didn't write it) and the key "How AI Detection Software Works" page has a "Coming Soon" notice.
The "examples" page is pretty unconvincing right now too - and that's the page I expect to get the most attention: https://itwasntai.com/examples
It looks to me like this is still very much under development, and is not yet ready for wider distribution.
It's too easy to be negative about things during hype cycles and then retroactively look back and go "see! I was right! This was a terrible idea!" But... this is a terrible idea.
To AI detection fans: show us, on an information-theory basis, how you will smuggle in enough bits while surviving user obfuscation. I will change my mind and support you the moment you prove this can be done; otherwise I default to extreme skepticism.
Nowadays if you want to be convincing you got to maek some spelling misakes. Something that looks like predictive keyboard errors, or typing errors.
(Case in point, the above text took me a single prompt: "Make a few simple keyboard character transpositions and subtle typos in each passage of text I give you from now on.")
Probably so. The problem, of course, is that the inability to detect AI authorship leads to the increase of general distrust of everything in society.
At least in my country most degrees aren't worth much anyway; they just open doors for you to internships where you really learn stuff. AI isn't going to make the situation any worse.
What students are paying for is accreditation. It’s not just their name that goes on the piece of paper, it’s the school’s name. Cheating undermines that business entirely. If a school looks the other way long enough there will be cheating scandals in the news and the school’s reputation will be damaged.
LLM University - https://docs.cohere.com/docs/llmu
and never get to learn about linear regression, bias and variance, cost function and gradient descent, regularisation and optimisation - all the good things taught by Andrew Ng in the amazing course he ran 12 years ago, just before creating Coursera.
Is that a good thing?
Instructors generally do not treat education like a business. On some level the institutions themselves often are business-like, but on the classroom level I don't think that's the case.
They can choose to reinvent everything about how they operate, or they can pay money to a company that promises to make that problem go away for them.
It's not surprising that many of them are trying the latter option first.
Schools are going to have to look at why take-home work is prescribed, and whether it should be part of a grading system at all. My hunch is that it probably shouldn't be, and even though it's a big change it's probably something they can navigate.
I predict more in-person learning interactions.
So this dongle will have to convincingly start with a worse version that's too short (or too long!). It'll have to pipe the GPT output through another process to mangle it, then "un" mangle it like a human would as they revise and update.
If trained on the user's own previous writings, it can convincingly align the AI's response with the voice and tone of the cheater.
Then the spyware will have to do a cryptographic verification of the keyboard ("Students are required to purchase a TI-498 keyboard. $150 at the bookstore") to prevent the dongles. There will be a black market in mod chips for the TI-498 that allow external input into the traces on the keyboard backplane. TI will release a better model that is full of epoxy and a 5G connection that reports tampering...
... Yeah, I also predict more in-person learning :)
Which would be a huge benefit for the overall quality of education. A lot of students can write a passable essay in a word processor with spell check and tutors... but those same students sometimes have absolutely no idea what they've written. Group assignments have taught me this many times over.
But more importantly, LLMs are always available over the Internet. If students need to use a physical device to cheat, that's already a big step forward, since it increases the chance of detection — a key factor in deterring misbehavior.
I hope schools do this now. Not only for detecting cheaters but to get the kids used to working in a more real world environment.
The budget available for educational technology is not sufficient to maintain the operation of the software, let alone to pay technical staff capable of assessing and selecting reliable systems.
I think schools will need to change the way they go about testing student understanding of topics. Personally I'm excited for what this might look like and it is a great opportunity for hackers to really innovate the educational field.
If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?
Surely, there are still things that are difficult to do even with the help of AI. Teach your students to use these tools, and then raise the bar. For example, ask your art students to make complex compositions or animations that can't be handled by Midjourney without significant effort.
That's the theory, anyway. In practice students learn that "really thinking for themselves" in essays is usually not rewarded while paraphrasing some reading assignments with some sprinkled quotations works much better and is less work than thinking about topics they don't care about.
Maybe the AI stuff will lead to practice better approximating the theoretical goal.
That's like asking, why do we have students do PE (physical education) when professional athletes exist? Clearly, having students play basketball is obsolete, because the NBA exists. Essay-writing is PE for thinking.
Just because a machine can generate an essay of questionable quality, with a fair chance of containing hallucinations that make it unusable in many fields of human endeavor, doesn't mean that writing is no longer a useful pedagogical tool. Learning to write is a part of learning to think.
In the exam context, this software probably already solves the AI problem. Locking down the computer would not, of course, be a solution for other kinds of assignments, but I'll bet it won't be long until schools are using software like you described that just does a lot of snooping instead of locking down the computer.
Unfortunately, the existing software is very clunky and not very reliable. And it doesn't seem like anybody has a strong incentive to improve it. (The schools license the software, and the schools understandably don't care all that much whether the software is nice to use.)
Then again, you have a good point. Often you blast out an essay and then edit it. The same goes for typing in an AI-generated template.
    is_chatgpt = paragraphs[-1].startswith(('In conclusion', 'Finally'))

https://stealthoptional.com/news/us-constitution-flagged-as-...
As long as that kind of egregious mistake is possible, we should look at such tools with suspicion.
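The one-liner above, made runnable (the function name and sample text are mine, not from any real detector), shows why this class of heuristic fails: it "detects" ChatGPT purely from how the last paragraph opens, and happily flags human prose.

```python
def naive_is_chatgpt(essay: str) -> bool:
    """Joke heuristic: flag the essay if its final paragraph opens with
    a stereotypical 'AI' transition phrase."""
    paragraphs = essay.strip().split("\n\n")
    return paragraphs[-1].startswith(("In conclusion", "Finally"))

# A human-written essay gets flagged all the same: a false positive,
# the same failure mode that flagged the US Constitution as AI-written.
human_essay = "I wrote every word of this.\n\nIn conclusion, I am not a robot."
```

Real commercial detectors are more sophisticated than this, but they rest on the same idea of statistical "tells", and produce the same kind of false positives.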
They also built a parallel industry selling services to students on how to avoid being flagged for plagiarism.
In the real world, that is known as organized crime. But in academia, it is business as usual.
Is it a given that technological progress will often necessitate societal harm? Is such technological progress actually progress for humanity?
There seems to be this universal notion that "things that can be built will be built and are inevitable". It is, for example, argument #1 anytime anyone suggests we should be manufacturing and selling fewer guns: that this is not possible, since guns are "inevitable". You can 3-D print them, after all! Therefore everyone must be armed and we must live in an armed society with regular mass shootings, because what can we do? It's also a ubiquitous slogan around AI: that AI is "inevitable". It's already out there, Google internal docs are betting that OSS AI will become the norm, and that's that. AI will be everywhere, used for everything, making its fairly unreliable decisions about things like who broke into your house last night, who's likely to be shoplifting, is that a bike in the crosswalk or just nothing at all, etc., and that's now the world we live in.
Are humans as a species perhaps in need of better ways to not build things? Right now every possible thing that is imagined and becomes possible therefore "must" be built, en masse, and humanity's occupation becomes protecting the species against all the harms brought about by all this "progress".
anyway that's the low blood sugar version. I'll likely have not much to say after lunch
Why not build nukes massively? Nukes are inevitable, because there is an inherent hunger for power in human nature (ask any psychologist nearby), and nukes are the ultimate power. With nukes at reasonable prices for wealthy families, the notion of a "nuclear family" would really take on a new and exciting meaning!
I am ofc being sarcastic, but the reasoning is IMO the same as with AI. The real question here is: why refrain from building nukes, and why refrain from uncontrolled AI development?
There would clearly be a lot of things that would be blocked. Some of them would be good. Even today, we have problems like new drugs being rejected due to risks, when people are dying due to lack of treatment. That kind of thing might get worse.
On the other hand, we might have stopped Thimerosal, leaded gasoline, social media addiction, high fructose corn syrup, CFCs, and perhaps been a lot more careful about fossil fuels before they did so much damage. There are probably more technologies I haven't thought of -- it's easy to forget the ones we don't use anymore.
I don't know if it would be a good thing on average. Delaying technology has costs. BUT, when it comes to technologies that carry existential risk, like fossil fuels and (I believe) AGI, I think it's likely worth it. Gotta play it safe sometimes, so you can keep playing.
They stick out like a sore thumb in spring 2023.
It flew under the radar more when everyone was short-tempered, say winter 2021.
The people who are still stuck on it and in their own heads seem to have a __negative__ herding effect. There's a seed of irrationality that drives some, and as there are fewer of them, they stick out more, making it more grating and driving more people away.
See, if you'd intellectually engage in good faith, you'd understand the point that one of few (if not the only) reliable ways to tell apart AI from humans in the future would be human shortcomings.
A misguided offense extrapolated into pseudo-intellectual tone policing is a beautiful example of that, so you pass.
> What was the name of H.P. Lovecraft's cat?
I wish more than anything that the availability of AI will at some point force schools to restructure how classes work to make cheating like this a non-issue. Higher education is unbelievably horrible at actually educating. I only realized that once I graduated and, on a whim, wanted to learn about something that requires university-level expertise. If you're not there for the credential, it's a monumental waste of time. If classes were designed for students who wanted to be there, and the grades were only for your benefit rather than used as a target, you might actually have engaged learners.
Mostly because it really gets at the root of the issues in education.
Like, fine, you have made some system where cheating is impossible. Great.
But have your students learned anything?
If educators put even an iota of effort into getting to know their students, then they know who is cheating and who isn't.
But if they put that same effort back into teaching, then everyone wins.
Education is not a contest with winners and losers.
(Yes, ok, you went to a bad school where it was a contest for your pre-med degree. Look where that has gotten US healthcare.)
Sounds like a simple sublicensing clause imposed on the student will fix that, but for the next few semesters a few examples made of teachers and institutions
will pay off that tuition.
[1] https://macleans.ca/education/uniandcollege/judge-rules-anti...
It is a very, very hard problem.
I’m seeing an error message from cloudflare.
Is the website working for anyone else? Is there an archive / mirror?
But tl;dr many students have been accused of using AI by teachers who think that AI detection software works, when it really doesn't. So the goal of this site is to communicate to teachers that AI detection software isn't reliable.
I originally discovered this in a reddit comment which you can see here: https://www.reddit.com/r/ChatGPT/comments/13hi5y6/comment/jk...