The American education system has a big and growing problem – students are increasingly using artificial intelligence (AI) as a substitute for actual learning, and teachers aren’t quite sure what to do about it.
IBM defines AI as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” AI technology, for example, can be used to “better interpret imaging results” and process doctor’s notes, according to Harvard Medical School.
When AI-powered tools like ChatGPT first burst onto the scene a few years ago, they held plenty of promise for education. And indeed, in the right application, AI can be a powerful tool to enable personalized learning and tutoring and to provide instant feedback for students.
But more and more students are also using it to write papers and even cheat on tests as more schools move to online exams.
“All you have to do is copy and paste the multiple choice questions, or take a picture of it so it converts from chat to text, and then paste it into ChatGPT, and out of the multiple answers it gives you the right one and explains why,” one university student recently told The College Fix. The College of Staten Island student admitted he received an “A” on two different final exams after using ChatGPT.
New research has confirmed that AI programs such as ChatGPT can be used to pass even complicated courses such as engineering. University of Illinois researchers “found that ChatGPT earned a passing grade in the course without much prompt engineering, but the chat bot didn’t demonstrate understanding or comprehension of high-level concepts,” according to a May 19 Inside Higher Ed article. In another recent development, Google’s Gemini 2.5 AI scored 49 percent on the Math Olympiad test – better than 75 percent of the students who took the test, a population composed of the top young math minds in the country.
Students are also using AI to write papers, creating real concerns that schools are producing graduates with no real literacy skills. The plagiarism detection site Turnitin.com reviewed more than 200 million papers written by high school and college students and found that millions of those papers were likely generated primarily through AI. Another study reviewed 10,000 college scholarship essays and found that 42 percent had likely been composed with the help of generative AI.
In some cases AI-written papers are easy to spot, as AI programs have been known to generate “hallucinated” or “ghost” citations, conjuring fake sources out of thin air. ChatGPT, for example, fabricated stories of professors being accused of sexual assault and attributed them to legitimate news outlets. In another, rather ironic case, a self-proclaimed disinformation “expert” submitted a legal filing that included fake citations after ChatGPT filled in placeholders, drawing a rebuke from a Minnesota judge.
At New York University, one professor said he “AI-proofed his assignments,” according to the Daily Mail. He discovered his students were using AI when one asked for an extension on an assignment “because ‘ChatGPT was down the day the assignment was due,’” the outlet reported.
However, as AI technology advances, the problem of students relying on computers to do their thinking for them will only get worse. Just a few years ago, AI could produce only cartoonish approximations of images and videos. Now, hyper-realistic AI videos are all over the internet. The same sort of advancements are occurring with written material, increasingly blurring the line between human-generated and AI-generated content.
But it is not just professors battling their own students over AI – sometimes the clash runs the other way.
A Northeastern University student recently requested a refund from her school after finding out her professor used AI in contradiction of his own policy.
Ella Stapleton dug into her professor’s materials “and discovered other telltale signs of AI: distorted text, photos of office workers with extraneous body parts and egregious misspellings,” according to The New York Times.
“He’s telling us not to use it, and then he’s using it himself,” she said. Students, the Times points out, “make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.”
The instructor apologized for how he used AI and said he would be more transparent in the future. Stapleton did not receive the refund.
These stories illustrate the need for a more robust conversation about AI’s place in education and how to guard against both students and faculty taking advantage of it. The University of Pennsylvania has offered some guidance here, saying that faculty and students should “be transparent” and “disclose” how AI was used. Northern Illinois University has other policies that outline positive ways to use AI, such as proofreading papers, brainstorming ideas, or “finding information.”
Ultimately, however, schools may be required to change their teaching models to adapt to the proliferation of AI. The reality is that appeals to integrity and “transparency” likely don’t stand much of a chance against the allure of a website that can do a student’s work for them in seconds.
Ironically, the answer may be a return to more traditional schooling methods – hand-written assignments, in-person tests on paper, and getting cell phones and other electronics out of classrooms. To preserve genuine learning and intellectual growth, schools may need to unplug from the very technologies once hailed as the wave of the future.
AMAC Newsline contributor Matt Lamb is an associate editor for The College Fix. He previously worked for Students for Life of America, Students for Life Action, and Turning Point USA, and interned for Open the Books. His writing has also appeared in the Washington Examiner, The Federalist, LifeSiteNews, Human Life Review, Headline USA, and other outlets. The opinions expressed are his own. Follow him @mattlamb22 on X.