I don't see the whole AI topic as a large crisis. As others have mentioned: put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them, give them the freedom to do so, but if as a result they fail the exams and similar in-person evaluations, then so be it. Let them fail.
I would like to hire students who actually have skills and know their material.
Or even better, if AI is actually the amazing learning tool many claim, then it should enhance their learning and as a result help them succeed in tests without any AI assistance. If they can't, then clearly AI was a detriment to their learning, and they lack the ability to think critically about their own abilities.
If everyone is supposed to use AI anyway, why should I ever prefer a candidate who is not able to do anything without AI assistance over someone who can? And if you genuinely hold the opinion that proper AI-independent knowledge is not required, then why should I hire a student at all instead of buying software solutions from AI companies (and maybe putting a random person without a relevant degree in front of them)?
apatheticonion 10 hours ago [-]
It's a huge problem. I have several friends in university who have had assignments flagged as AI. They have failed entire units and been forced to retake semesters, which is not cheap.
Even if you fight it, the challenge goes into the next semester and pushes out your study timeline and associated costs.
> put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them
Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.
One friend now uses a dashcam to record themselves when writing an assignment so they can prove no AI was used when they are eventually flagged.
2pEXgD0fZ5cF 10 hours ago [-]
Yeah, bad choice of words on my part, I apologize. I can imagine that things are pretty chaotic right now and that there are quite a few problems like the one you describe. When I said I don't see a crisis here, I meant it in a more overarching sense: I see this as solvable.
> Works for high school, not so much for university degrees.
I don't know about that. I can't speak for the US, but at the university where I got my degrees (Math & CS) and later worked, prerequisite in-person tests to be allowed to take a given exam were not rare. Most modules had lectures (professor), tutorials (voluntary in-person bonus exercises and tutors to ask questions), and exercise groups where solutions to mandatory exercises were discussed. In the latter, sometimes an additional part of the exam requirements was to present and explain a solution at least once or twice over the course of the semester. And some had small, mandatory bi-weekly tests as part of the requirements too.
Obviously I can understand that this would not work equally well in each kind of academic programme.
apatheticonion 10 hours ago [-]
> Yeah bad choice of words on my part, I apologize.
All good!
> I can't speak for the US
I just had to respond to this as the implication of being American touched a nerve, haha. Australian here.
Swizec 9 hours ago [-]
> > put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them
> Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.
When I started uni (Slovenia, 2007) the rules were simple: you are adults. The final exam (written + oral) is 100% of your grade. We don't have the time or willingness to police what you do. We strongly recommend attending classes and doing homework, but whatever, it's your life. If you get high enough scores on the optional midterms, you can skip the written portion of the exam.
It was pretty great. Yes we all tried to cram for exams at the last moment. No it didn’t work very well. Needing 2 or 3 tries to pass was common.
Then later we got the Bologna system. Professors stopped bragging about fail rates. Students passing became an actual thing professors were evaluated on. Homework became graded, midterms were mandatory and part of your grade, attendance was tracked, etc.
College became like high school. More people passed but I think something was lost about teaching adulthood.
For the record: I didn’t graduate. My freelance business got too busy and I could not keep up with both.
ckcheng 9 hours ago [-]
> more emphasis on in-person tests and exams
$$$
There are a lot of interacting parts behind how many places arrived where we are, where cheap ghostwriters (AI or not) can so easily harm education. But it pretty much all comes down to cost.
whattheheckheck 9 hours ago [-]
Go ahead and let a random person do it. Degrees were gate keeping anyway
threemux 11 hours ago [-]
In person, proctored blue book exams are back! Sharpen those pencils kids.
I've been wondering lately if one of the good things to come out of heavy LLM use will be a return to mostly in-person interactions once nothing that happens online is trustworthy anymore.
bambax 11 hours ago [-]
Yes! This "problem" is really easy to fix with in person exams and no computers in class, ever.
armchairhacker 9 hours ago [-]
There should be computers, just locked down ones that don’t leave the classroom. With today’s tuitions, colleges can afford a computer for every student.
Writing code on paper is frustrating to the point where, beyond small algorithms, it’s probably not an effective metric (to test performance on real-world tasks). I think even essays may not be as good a metric for writing quality when written vs typed, although the difference is probably smaller. Because e.g. being able to insert a line in the middle of the text, or find-and-replace, are much harder. Also, some people (like me) are especially bad at handwriting: my hand hurts after writing a couple paragraphs, and my handwriting is illegible to most people. While some people are especially bad at typing, they get accommodations like an alternative keyboard or dictation, whereas the accommodation for bad handwriting is…a computer (I was fortunate to get one for exams in the 2010s).
meroes 11 hours ago [-]
This is the “back to office” of education. It is not a one size fits all solution. There are so many remote and hybrid classes now you guys sound outdated.
analog31 11 hours ago [-]
That’s fair, but at the same time, expecting any learning to occur in remote classes, when fair evaluation is impossible, may also be outdated.
kbelder 11 hours ago [-]
Learning is just as easy remote and with AI, maybe easier. It's testing and evaluation of that learning that's difficult.
Universities make money not by teaching, but by testing and certifying. That's why AI is so disruptive in that space.
analog31 10 hours ago [-]
Universities don’t make money.
Granted, I’m 62, so I’m from the old world. I attended college, and taught a couple of college classes, before the AI revolution. There was definitely a connection between learning and evaluation for most students. In fact most students preferred more evaluation, not less, such as graded quizzes and homeworks rather than just one great big exam at the end. Among other things, the deadlines and feedback helped them budget their efforts. Also, the exercise of getting something right and hitting a deadline is not an overt purpose of education, but has a certain pragmatic value.
Again, showing my age, in the pre-AI era, the technology of choice was cheating. But there were vanishingly few students who used cheating to circumvent the evaluations while actually learning anything from their courses.
If teaching and certifying could be separated, they would be. In fact, it has happened to some extent for computer programming, hence the “coding interview” and so forth. But computer programming is also an unusual occupation in that it’s easy to be self taught, and questionable whether it needs to be taught at the college level.
bambax 7 hours ago [-]
You don't need uni to watch youtube; you can do that on your own, for free. "Remote classes" are obviously a scam.
seanmcdirmid 11 hours ago [-]
Until they need to start learning how to use them to get a job in the modern world?
There should be a class that teaches you how to use AI to get things done, especially judging on how many even on HN admit they aren’t good at it.
Ekaros 9 hours ago [-]
Is there even a point until the field properly stabilises? Even with more fundamental material there are complaints about it being outdated. And even AI proponents say that things are still evolving and that you regularly need to do things in a new way.
seanmcdirmid 8 hours ago [-]
If the tech is already good enough to cheat with? Yeah, I think the kids are ready to learn it, even if it just keeps improving in the coming years. It also helps you reflect on the process of doing something when you instruct someone else to do it for you. Writing a good essay and getting AI to write a good essay for you are both useful things for students to do.
idle_zealot 11 hours ago [-]
But is that webscale?
OsamaJaber 12 hours ago [-]
AI detectors punishing non native English speakers for writing too cleanly is the part nobody talks about enough -_-
Rexxar 11 hours ago [-]
For example, native English speakers often make phonetic spelling errors (such as its/it's, your/you're) that non-native English speakers usually avoid. It's probably a sign that someone speaks more fluently when they start making these types of mistakes from time to time.
Tade0 10 hours ago [-]
Or picked up English before they learned to read and write properly.
I'm cursed with this as I was put in an international environment right before turning five, went back to my home country to start grade school and only in fifth grade started having English classes.
HeavyStorm 11 hours ago [-]
This ship has sailed.
It's how it was with the internet. I grew up in the 90s, and teachers didn't know how to deal with the fact that we no longer had to go through multiple books in the library to get the information we needed. We barely even needed to write it.
Now nobody expects students not to use the internet. Same here: teachers must accept that AI can and will write papers, answer questions, and do homework. How you test students must be reinvented.
xeromal 11 hours ago [-]
I know a lot of teachers are reverting back to handwritten papers. People can generate it but at least you're doing something.
randall 11 hours ago [-]
this is like irl cryptographic signatures for content lol
idiotsecant 11 hours ago [-]
This is just about the worst possible response it seems. It manages to probably hurt some wrists not used to long handwriting sessions, completely avoid learning how to use and attribute AI responsibly, and still probably just results in kids handwriting AI generated slop, anyway.
singpolyma3 11 hours ago [-]
While I don't think it's the right solution, it will force them to at least read what they're submitting which means some learning :)
ethin 11 hours ago [-]
It also disadvantages people with disabilities. How exactly are they supposed to do these papers and tests? Dictate everything to someone else, using Blindness as an example? Because that seems very very inefficient and extremely error-prone.
ThrowawayR2 10 hours ago [-]
As someone with an actual visual impairment, please do not attempt to use my affliction to justify generalized use of AI. Educational assistance for those with disabilities is not a new thing; AI is likely going to have a role but how remains exactly to be seen.
ethin 7 hours ago [-]
As someone who myself is legally blind, I am in no way justifying the use of AI like this. I was responding to the entire "let's all go back to actual paper-based tests/assignments" trope that was being trotted out on here. Sure, it (might) work, but it also disadvantages people like us, since most teachers can't read braille (at least, none of mine could).
BugsJustFindMe 11 hours ago [-]
> It manages to probably hurt some wrists not used to long handwriting sessions
I'm sorry but, lmao. You cannot be serious.
> attribute AI
Oh no!
> still probably just results in kids handwriting AI generated slop
Not if they're doing it in person. And at least they then need to look at it.
jxf 11 hours ago [-]
We've been writing with our hands for thousands of years. I suspect that on balance a Butlerian Jihad against AI slop would be perfectly fine for our hands.
TomasBM 7 hours ago [-]
There is an obvious reason why LLM use should be discouraged in classwork focused on writing: the process that's needed for a brain to learn the skills can't be outsourced.
The Internet is different. Even with access to websites like Wikipedia, you had to write your own content. Plagiarism was easily detectable.
We shouldn't confuse "we don't have a solution at the moment" with "we should completely abandon no-LLM education". Like with social media, we can always change the direction of progress.
smoyer 11 hours ago [-]
When I was in high school, we were not allowed to use calculators for most science classes, and certainly not for math class. In ten years, will you want to hire a student who is coming out of college without considerable experience and practice with AI?
AlotOfReading 11 hours ago [-]
LLMs work best when the user has considerable domain knowledge of their own that they can use to guide the LLM. I don't think it's impossible to develop that experience if you've only used LLMs, but it requires a very unusual level of personal discipline. I wouldn't bet on a random new grad having that. Whereas it's pretty easy to teach people to use LLMs.
nkrisc 11 hours ago [-]
If all they know is AI, and they supplanted all their learning with AI, why even hire them? Just use the AI.
paulryanrogers 10 hours ago [-]
Kids need to learn the fundamentals first and best. They can learn the tools near the end of school or even on the job.
I loved computer art and did as many technical art classes at university as I could. At the beginning of the program I was the fastest in the class, because we were given reference art to work from to learn the tools. By the end of the class I couldn't finish assignments because I wasn't creative enough to work from scratch. Ultimately I realized art wasn't my calling, despite some initial success.
Other kids blew me away with the speed of their creations. And how they could detach emotionally from any one piece, to move on to the next.
hazbot 10 hours ago [-]
Yes, it is much easier to train someone to use AI than to train them to have sufficiently baked-in math and language skills to be able to leverage the AI.
ThrowawayR2 11 hours ago [-]
Should I, by some miracle, be hiring, I'd be hiring those who come out of college with a solid education. As many have pointed out, AI is not immune to the "garbage in, garbage out" principle and it's education that enables the user to ask informed and precisely worded questions to the AI to get usable output instead of slop.
croes 11 hours ago [-]
Why would I want to hire such a student?
What makes him the better pick over all the other students using AI, or all the other non-students using AI?
croes 11 hours ago [-]
This is not how AI will be in the future.
At some point they will have to make a profit, and that will shape AI.
Either by higher prices or by ads.
Both will change how AI is used.
AndrewKemendo 11 hours ago [-]
I remember when websites couldn’t be considered valid sources for graded assignments
MattGaiser 11 hours ago [-]
I was dealing with this even in 2014 when I was in high school. Even then, entire classes of government data weren’t published in some print volume.
AndrewKemendo 11 hours ago [-]
In my case at least there was some validity to it in 1995
gotrythis 11 hours ago [-]
I put a day of careful thought into writing a cover letter for a job a few weeks ago. Knowing there was the potential of AI screening, I checked whether it would get flagged.
Every detection program I tried said the letter that I personally wrote by hand was 100% AI generated!
So, I looked for humanizer programs and ran my cover letter through a couple. Without the results in front of me at the moment, I can only revert to my judgemental conclusion instead of solid observations...
You need to write like an idiot to pass AI detection algorithms. The rewritten cover letter was awful, unprofessional, and embarrassing.
zkmon 11 hours ago [-]
Teachers are also heavy users of AI. Entire academic administrative staffs are using AI.
The goals of academic assessment need to change. What are they assessing, and why? Knowledge retention skills? Knowledge correlation or knowledge expression skills? None of these is going to be useful or required from humans, just as hand arithmetic stopped being required once school kids were allowed to use calculators in exam halls.
The academic industry needs to redefine its purpose: identify the human abilities that will be needed in a future filled with AI and devices, then teach that and assess that.
kyykky 11 hours ago [-]
Teaching is about moving our knowledge (the stuff we’ve collectively learned ourselves, from others and our parents [instead of everyone needing to find out on their own]) to the next generation. While some skills may become obsolete in some parts of professional life due to AI, the purpose of academia does not change much.
zkmon 10 hours ago [-]
> the purpose of academia does not change much.
The purpose did change a lot. In the Greeks' time, the purpose was pure knowledge or geometry skills. The industrial revolution and office work changed the purpose of education to producing clerical staff. With AI, the purpose changes again: you need skills in using AI and devices.
Espressosaurus 11 hours ago [-]
A calculator is more consistent and faster at calculating than I am, but I still need to understand how to multiply, divide, add, and subtract before I can move on to more complicated math. I need to intuitively understand when I'm getting a garbage result because I did an operation wrong, moved a decimal place by accident, or other problem.
Memorization has a place, and is a requirement for having a large enough knowledge base that you can start synthesizing from different sources and determining when one source is saying something that is contradicted by what should be common knowledge.
Unless your vision of the future is the humans in WALL-E sitting in chairs while watching screens without ever producing anything, you should care about education.
zkmon 10 hours ago [-]
> A calculator is more consistent and faster at calculating than I am, but I still need to understand how to multiply, divide, add, and subtract..
Exactly. If the calculator knows what to do and how to do, you just need to be able to specify a high level goal, instead of worrying about whether to add or multiply.
Espressosaurus 9 hours ago [-]
I need to be able to tell when it's garbage in garbage out and I can't do that when I don't understand the operations in question.
ThrowawayR2 11 hours ago [-]
> "Knowledge retention skills? Knowledge correlations or knowledge expression skills? None of these going to be useful or required from humans."
I'm fascinated by these claims from some LLM advocates that people will no longer need to know things, think, or express themselves properly. What value then will such individuals bring to the table to justify their pay? Will they be like Sigourney Weaver's character in Galaxy Quest whose sole function was to repeat what the computer says verbatim? Will they be like Tom Smykowski in Office Space indignantly saying "I have people skills; I am good at dealing with people! Can't you understand that?!" Somebody, please explain.
[EDIT] The other funny aspect about these claims is, given that such an individual's skills are mainly in using an AI, that they can simply be outspent by their peers on AI usage. "Wally got the job instead of me because he paid for a premium LLM to massage his application and I could only afford the basic one because I'm short on money."
zkmon 10 hours ago [-]
> What value then will they bring to the table to justify their pay?
They are skillful in shepherding a population of AI agents towards goals.
croes 11 hours ago [-]
Teachers are also heavy users of solution books, but would not give them to students for this reason.
ashleyn 12 hours ago [-]
I'm guessing this "humanizer" actually does two things:
* grep to remove em dashes and emojis
* re-run through another llm with a prompt to remove excessive sycophancy and invalid url citations
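If that guess is right, the first pass is trivial to sketch. This is hypothetical code just illustrating the guess above; the second, LLM pass is stubbed out as a prompt string, since the actual call depends on whichever model and API a given humanizer uses:

```python
import re

# Toy sketch of the "grep" half of a hypothetical humanizer.
EMOJI = re.compile(
    "[\U0001F300-\U0001FAFF\U00002700-\U000027BF\U00002600-\U000026FF]"
)

def strip_tells(text: str) -> str:
    text = text.replace("\u2014", ", ")       # em dash -> comma
    text = EMOJI.sub("", text)                # drop emojis
    return re.sub(r"  +", " ", text).strip()  # collapse doubled spaces

def rewrite_prompt(text: str) -> str:
    # Stub for the second pass: the prompt that would be sent to another LLM.
    return ("Rewrite the following in a plain, terse voice. "
            "Remove sycophancy and any citations you cannot verify:\n\n"
            + strip_tells(text))

print(strip_tells("This is great\u2014truly \U0001F680 remarkable."))
# prints "This is great, truly remarkable."
```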
emmp 12 hours ago [-]
For student assignment cheating, only really the em dashes would still be in the output. But there are specific words and turns of phrases, specific constructions (e.g., 'it's not just x, but y'), and commonly used word choices. Really it's just a prim and proper corporate press release style voice -- this is not a usual university student's writing voice. I'm actually quite sure that you'd be able to easily pick out a first pass AI generated student assignment with em dashes removed from a set of legitimate assignments, especially if you are a native English speaker. You may not be able to systematically explain it, but your native speaker intuition can do it surprisingly well.
What AI detectors have largely done is try to formalize that intuition. They do work pretty well on simple adversaries (so basically, the most lazy student), but a more sophisticated user will do first, second, third passes to change the voice.
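That "formalized intuition" can be caricatured in a few lines. The phrase list below is invented purely for illustration; real detectors model token-sequence statistics rather than matching a hand-written list:

```python
# Crude illustration only: score text by the rate of stock constructions
# often associated with LLM "default voice". The TELLS list is made up
# for this sketch and is not from any real detector.
TELLS = [
    "it's not just", "delve into", "in today's fast-paced",
    "plays a crucial role", "\u2014",
]

def tell_score(text: str) -> float:
    """Tell-phrase occurrences per 100 words (higher = more suspicious)."""
    low = text.lower()
    hits = sum(low.count(p) for p in TELLS)
    words = max(len(text.split()), 1)
    return 100.0 * hits / words

print(tell_score("It's not just a tool\u2014it plays a crucial role."))
print(tell_score("I wrote this myself, badly, at 2am."))  # prints 0.0
```

A sophisticated user defeats this trivially, which is exactly the multi-pass rewriting the comment above describes.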
dbg31415 12 hours ago [-]
You’re absolutely right!
Ha. Every time an AI passionately agrees with me, after I’ve given it criticism, I’m always 10x more skeptical of the quality of the work.
glitchcrab 12 hours ago [-]
Why? The AI is just regurgitating tokens (including the sycophancy). Don't anthropomorphise it.
20260126032624 11 hours ago [-]
Because of the way regurgitation works. "You're absolutely right" primes the next tokens to treat whatever preceded that as gospel truth, leaving no room for critical approaches.
otikik 11 hours ago [-]
Because I was only 55% sure my comment was correct and the AI made it sound like it was the revelation of the century
the_fall 12 hours ago [-]
No. No one is looking for em-dashes, except for some bozos on the internet. The "default voice" of all mainstream LLMs can be easily detected by looking at the statistical distribution of word / token sequences. AI detector tools work and have very low false negatives. They have some small percentage of false positives because a small percentage of humans pick up the same writing habits, but that's not relevant here.
The "humanizer" filters will typically just use an LLM prompted to rewrite the text in another voice (which can be as simple as "you're a person in <profession X> from <region Y> who prefers to write tersely"), or specifically flag the problematic word sequences and ask an LLM to rephrase.
They most certainly don't improve the "correctness" and don't verify references, though.
smrtinsert 10 hours ago [-]
providers are also adding hidden characters and attempting to watermark if memory serves.
the_fall 10 hours ago [-]
It's more complex than that. It's called SynthID-text and biases the probabilities of token generation in a way that can be recovered down the line.
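SynthID-text itself uses a tournament-sampling scheme, but the general idea of recoverably biasing token probabilities can be illustrated with the simpler "green list" approach from the academic watermarking literature (this toy sketch is not SynthID): seed an RNG from the previous token, mark half the vocabulary "green", prefer green tokens when generating, and detect by counting how often green tokens follow their predecessors.

```python
import hashlib
import random

# Toy vocabulary for illustration; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]

def green_set(prev: str) -> set:
    """Deterministically pick half the vocab as 'green', seeded by prev token."""
    seed = int(hashlib.sha256(prev.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that are green given their predecessor."""
    hits = sum(tokens[i] in green_set(tokens[i - 1])
               for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

# Watermarked generation always picks a green continuation, so its
# green fraction is 1.0; ordinary text hovers around 0.5.
wm = ["the"]
for _ in range(12):
    wm.append(min(green_set(wm[-1])))  # always choose a green token
print(green_fraction(wm))  # prints 1.0 by construction
```

Real schemes bias probabilities softly instead of picking greedily, so quality barely suffers while the statistical signal remains recoverable.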
grahamburger 11 hours ago [-]
I've heard some teachers are assigning their students to 'grade' a paper written by LLM. The students use an LLM to generate a paper on the topic, print it out, then notate in the margins by hand where it's right and wrong, including checking the sources.
postepowanieadm 12 hours ago [-]
The Washing-Machine Tragedy was a prophecy.
falloutx 12 hours ago [-]
At some point, writing 2 sentences by hand will become more acceptable than this.
vyskocilm 10 hours ago [-]
We were banned from using Ami Pro to write technical reports at my high school in the late 90s, and required to write them by hand, as an attempt to combat copy-and-pasting. We "shared" all the calculations in the class anyway, so it didn't work well.
pinnochio 12 hours ago [-]
Shortly after, AI-powered prosthetic hands that mimic your handwriting will write those 2 sentences for you.
mc32 11 hours ago [-]
I get that students are using the LLM crutch, and who wouldn't?
What I don't get is why they wouldn't act like an editor and add their own voice to the writing. The heavy lifting was done; now you just have to polish it by hand. Is that too hard to do?
BugsJustFindMe 11 hours ago [-]
Humans tend to be both lazy and stupid and are always looking for ways to pass by with minimal effort. Kids aren't different just because they're in school.
Ekaros 9 hours ago [-]
Those looking for shortcuts look for the easiest, fastest, least-effort shortcuts. And unlike when they do the copying themselves and get the feeling "I really should change things so I won't get caught", that won't happen when something is generated for them, since it probably looks unique enough.
MattGaiser 11 hours ago [-]
It would be dull to do. Being a tone scribe would be terrible.
tgrowazay 12 hours ago [-]
Everyone knows about em dashes, but there is so much more!
I used em dashes heavily 15 years ago when writing my PhD thesis.
singpolyma3 11 hours ago [-]
So did every author of classic literature. People who think they can spot AI writing by simple stylistic indicators alone are fooling themselves and hurting real human authors
A_D_E_P_T 10 hours ago [-]
It's because LLMs were trained on classic literature that they began to use em-dashes in their now-famous manner.
Seriously, highbrow literature is heavily weighted in their training data. (But the rest is Reddit, etc.) This really explains a lot, I think.
zeroonetwothree 11 hours ago [-]
Let’s just say when my coworkers started sending emails with tons of bold and bullet points when they had never done that before I felt pretty justified in assuming they used AI
SecretDreams 12 hours ago [-]
Same, but 5 years ago. Now they're ruined for me lol.
kbelder 10 hours ago [-]
My fear is people will actually take that article to heart, and begin accusing people of posting AI simply for using all sorts of completely valid phrases in their writing. None of those AI tells originated with AI.
AstroBen 11 hours ago [-]
I saw someone created a skill to weaponize that exact list to humanize the AI's output
There are no clear signs, at least for anyone who cares to hide them
yarrowy 12 hours ago [-]
Just move to 2-hour, in-class writing blocks.
TZubiri 12 hours ago [-]
[flagged]
OutOfHere 11 hours ago [-]
The fraud is all around, including when honest writers get falsely accused of using AI they didn't use, which is the point. That's why everyone needs humanizers: the risk of not using them is too high. As such, the fraud I see above all is the one in your comment.
TZubiri 10 hours ago [-]
huh
I would like to hire students who actually have skills and know their material. Or even better, if AI is actually the amazing learning tool many claim then it should enhance their learning and as a result help them succeed in tests without any AI assistance. If they can't, then clearly AI was a detriment to them and their learning and they lack the ability to think critically about their own abilities.
If everyone is supposed to use AI anyway, why should I ever prefer a candidate who is not able to do anything without AI assistance over someone who can? And if you hold the actual opinion that proper ai-independent knowledge is not required, then why should I hire a student at all instead of buying software solutions from AI companies (and maybe put a random person without a relevant degree in front of it)?
Even if you fight it, the challenge goes into the next semester and pushes out your study timeline and associated costs.
> put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them
Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.
One friend now uses a dashcam to record themselves when writing an assignment so they can prove no AI was used when they are eventually flagged.
> Works for high school, not so much for university degrees.
I don't know about that. I can't speak for the US, but at the university where I got my degrees (Math & CS) and later worked prerequisite in-person tests to be allowed to take a given exam were not rare. Most modules had lectures (professor), tutorials (voluntary in-person bonus exercises and tutors to ask questions) and exercise groups where solutions to mandatory exercises were discussed. In the latter sometimes an additional part of the exam requirements was to present and explain a solution at least once or twice over the course of the semester. And some had small, mandatory bi-weekly tests as part of the requirement too.
Obviously I can understand that this would not work equally well in each kind of academic programme.
All good!
> I can't speak for the US
I just had to respond to this as the implication of being American touched a nerve, haha. Australian here.
> Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.
When I started uni (slovenia, 2007) the rules were simple: You are adults. The final exam (written + oral) is 100% of your grade. We don’t have the time or willingness to police what you do. Strongly recommend attending classes and doing homework but whatever it’s your life. If you get high enough scores on the optional midterms, you can skip the written portion of the exam.
It was pretty great. Yes we all tried to cram for exams at the last moment. No it didn’t work very well. Needing 2 or 3 tries to pass was common.
Then later we got the bologna system. Professors stopped bragging about fail rates. Students passing became an actual thing they were evaluated on. Homework became graded, midterms were mandatory and part of your grade, attendance was tracked, etc.
College became like high school. More people passed but I think something was lost about teaching adulthood.
For the record: I didn’t graduate. My freelance business got too busy and I could not keep up with both.
$$$
There’s a lot of interacting parts as to why many places have arrived where we are where cheap ghost writers (AI or not) can so easily negatively impact education. But it pretty much all comes down to costs.
I've been wondering lately if one of the good things to come out of heavy LLM use will be a return to mostly in-person interactions once nothing that happens online is trustworthy anymore.
Writing code on paper is frustrating to the point where, beyond small algorithms, it’s probably not an effective metric (to test performance on real-world tasks). I think even essays may not be as good a metric for writing quality when written vs typed, although the difference is probably smaller. Because e.g. being able to insert a line in the middle of the text, or find-and-replace, are much harder. Also, some people (like me) are especially bad at handwriting: my hand hurts after writing a couple paragraphs, and my handwriting is illegible to most people. While some people are especially bad at typing, they get accommodations like an alternative keyboard or dictation, whereas the accommodation for bad handwriting is…a computer (I was fortunate to get one for exams in the 2010s).
Universities make money not by teaching, but by testing and certifying. That's why AI is so disruptive in that space.
Granted, I’m 62, so I’m from the old world. I attended college, and taught a couple of college classes, before the AI revolution. There was definitely a connection between learning and evaluation for most students. In fact most students preferred more evaluation, not less, such as graded quizzes and homeworks rather than just one great big exam at the end. Among other things, the deadlines and feedback helped them budget their efforts. Also, the exercise of getting something right and hitting a deadline is not an overt purpose of education, but has a certain pragmatic value.
Again, showing my age, in the pre-AI era, the technology of choice was cheating. But there were vanishingly few students who used cheating to circumvent the evaluations while actually learning anything from their courses.
If teaching and certifying could be separated, they would be. In fact, it has happened to some extent for computer programming, hence the “coding interview” and so forth. But computer programming is also an unusual occupation in that it’s easy to be self taught, and questionable whether it needs to be taught at the college level.
There should be a class that teaches you how to use AI to get things done, especially judging by how many people, even on HN, admit they aren't good at it.
I'm cursed with this as I was put in an international environment right before turning five, went back to my home country to start grade school and only in fifth grade started having English classes.
It's how it was with the internet. I grew up in the 90s, and teachers didn't know how to deal with the fact that we no longer had to go through multiple books in the library to get the information we needed. We barely even needed to write it down.
Now nobody expects students to not use the internet. Same here: teachers must accept that AI can and will write papers, answer questions, and do homework. How you test students must be reinvented.
I'm sorry but, lmao. You cannot be serious.
> attribute AI
Oh no!
> still probably just results in kids handwriting AI generated slop
Not if they're doing it in person. And at least they then need to look at it.
The Internet is different. Even with access to websites like Wikipedia, you had to write your own content. Plagiarism was easily detectable.
We shouldn't confuse "we don't have a solution at the moment" with "we should completely abandon no-LLM education". Like with social media, we can always change the direction of progress.
I loved computer art and did as many technical art classes at university as I could. At the beginning of the program I was the fastest in the class, because we were given reference art to work from to learn the tools. By the end of the class I couldn't finish assignments because I wasn't creative enough to work from scratch. Ultimately I realized art wasn't my calling, despite some initial success.
Other kids blew me away with the speed of their creations. And how they could detach emotionally from any one piece, to move on to the next.
At some point they will have to make a profit, and that will shape AI.
Either through higher prices or through ads; both will change how AI is used.
Every detection program I tried said the letter that I personally wrote by hand was 100% AI generated!
So, I looked for humanizer programs and ran my cover letter through a couple. Without the results in front of me at the moment, I can only revert to my judgemental conclusion instead of solid observations...
You need to write like an idiot to pass AI detection algorithms. The rewritten cover letter was awful, unprofessional, and embarrassing.
The goals of academic assessment need to change. What are they assessing, and why? Knowledge retention? Knowledge correlation or knowledge expression skills? None of these is going to be useful or required from humans, just as school kids are now allowed to use calculators in exam halls.
The academic industry needs to redefine its purpose: identify the human abilities that will be needed in a future filled with AI and devices, then teach that and assess that.
The purpose has changed a lot before. In the Greeks' time, the purpose was pure knowledge and geometry. The industrial revolution and office work changed the purpose of education to producing clerical staff. With AI, the purpose changes again: you need skills in using AI and devices.
Memorization has a place, and is a requirement for having a large enough knowledge base that you can start synthesizing from different sources and determining when one source is saying something that is contradicted by what should be common knowledge.
Unless your vision of the future is the humans in WALL-E sitting in chairs while watching screens without ever producing anything, you should care about education.
Exactly. If the calculator knows what to do and how to do, you just need to be able to specify a high level goal, instead of worrying about whether to add or multiply.
I'm fascinated by these claims from some LLM advocates that people will no longer need to know things, think, or express themselves properly. What value then will such individuals bring to the table to justify their pay? Will they be like Sigourney Weaver's character in Galaxy Quest whose sole function was to repeat what the computer says verbatim? Will they be like Tom Smykowski in Office Space indignantly saying "I have people skills; I am good at dealing with people! Can't you understand that?!" Somebody, please explain.
[EDIT] The other funny aspect about these claims is, given that such an individual's skills are mainly in using an AI, that they can simply be outspent by their peers on AI usage. "Wally got the job instead of me because he paid for a premium LLM to massage his application and I could only afford the basic one because I'm short on money."
They are skillful in shepherding a population of AI agents towards goals.
* grep to remove em dashes and emojis
* re-run through another LLM with a prompt to remove excessive sycophancy and invalid URL citations
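The first, mechanical step of that pipeline doesn't even need an LLM. A minimal sketch in Python (the Unicode ranges are an assumption: em/en dashes plus a couple of common emoji blocks, not an exhaustive list):

```python
import re

# Assumed tell characters: em/en dashes plus two common emoji blocks.
EMOJI = re.compile("[\u2600-\u26FF\U0001F300-\U0001FAFF]")

def strip_tells(text: str) -> str:
    """Mechanical first pass: swap em/en dashes for plain punctuation, drop emojis."""
    text = text.replace("\u2014", ", ").replace("\u2013", "-")
    text = EMOJI.sub("", text)
    return re.sub(r"  +", " ", text).strip()  # tidy whitespace left behind
```

The second pass (voice and sycophancy) is where an LLM re-run would come in; no regex catches that.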
What AI detectors have largely done is try to formalize that intuition. They work pretty well against simple adversaries (basically, the laziest students), but a more sophisticated user will do first, second, and third passes to change the voice.
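That "formalized intuition" can be illustrated with a toy phrase check. To be clear, this is not how real detectors work (they use perplexity, burstiness, and trained classifiers), and the phrase list is just commonly cited tells, not an authoritative set:

```python
# Illustrative tell phrases only; real detectors are statistical, not lists.
TELLS = ["delve into", "it's important to note", "in today's fast-paced",
         "tapestry", "furthermore,", "in conclusion,"]

def tell_score(text: str) -> float:
    """Fraction of known tell phrases present; a crude 'lazy adversary' check."""
    lower = text.lower()
    hits = sum(phrase in lower for phrase in TELLS)
    return hits / len(TELLS)
```

A sophisticated rewrite defeats this trivially, which is exactly the point of the comment above.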
Ha. Every time an AI passionately agrees with me, after I’ve given it criticism, I’m always 10x more skeptical of the quality of the work.
The "humanizer" filters will typically just use an LLM prompted to rewrite the text in another voice (which can be as simple as "you're a person in <profession X> from <region Y> who prefers to write tersely"), or specifically flag the problematic word sequences and ask an LLM to rephrase.
They most certainly don't improve the "correctness" and don't verify references, though.
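The comment above describes the whole trick; the rewrite instruction really can be as small as a template. A sketch (the wording and parameter names are illustrative, not any real product's prompt):

```python
def humanizer_prompt(profession: str, region: str) -> str:
    """Build a voice-change rewrite instruction; purely illustrative wording."""
    return (f"You're a person in {profession} from {region} who prefers to "
            "write tersely. Rewrite the following text in your own voice, "
            "keeping every factual claim unchanged.")
```

Note the prompt only asks to preserve claims; nothing in it checks them, which is why correctness and references come through untouched.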
What I don't get is why they wouldn't act like an editor and add their own voice to the writing. The heavy lifting is done; now you just have to polish it by hand. Is that too hard to do?
Here is a wiki article with all common tell-tales of AI writing: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Seriously, highbrow literature is heavily weighted in their training data. (But the rest is Reddit, etc.) This really explains a lot, I think.
There are no clear signs, at least not for anyone who cares to hide them.