I'm a university lecturer concerned that students are using AI to cheat. It's made my workload skyrocket, and I've had to make drastic changes.
- Risa Morimoto has been a lecturer for 18 years. In that time, she's always seen students cheat.
- But Morimoto said AI tools have made it harder to detect cheating, increasing her workload.
- Next year, Morimoto plans to introduce new assessment methods to address her AI concerns.
This as-told-to essay is based on a transcribed conversation with Risa Morimoto, a senior lecturer in economics at SOAS University of London, in England. The following has been edited for length and clarity.
Students always cheat.
I've been a lecturer for 18 years, and I've dealt with cheating throughout that time, but with AI tools becoming widely available in recent years, I've experienced a significant change.
There are definitely positive aspects to AI. It's much easier to access information, and students can use these tools to improve their writing, spelling, and grammar, so there are fewer badly written essays.
However, I believe some of my students have been using AI to generate essay content that pulls information from the internet, instead of using material from my classes to complete their assignments.
AI is supposed to help us work efficiently, but my workload has skyrocketed because of it. I have to spend lots of time figuring out whether the work students are handing in was really written by them.
I've decided to take dramatic action, changing the way I assess students to encourage them to be more creative and rely less on AI. The world is changing, so universities can't stand still.
Cheating has become harder to detect because of AI
I've worked at SOAS University of London since 2012. My teaching focus is ecological economics.
Initially, my teaching style was exam-based, but I found that students were anxious about one-off exams, and their results wouldn't always correspond to their performance.
I eventually pivoted to a focus on essays. Students chose their topic and consolidated theories into an essay. It worked well, until AI came along.
Cheating used to be easier to spot. I'd maybe catch one or two students cheating by copying huge chunks of text from internet sources, leading to a plagiarism case. Even two or three years ago, detecting inappropriate AI use was easier due to signs like robotic writing styles.
Now, with more sophisticated AI technologies, it's harder to detect, and I believe the scale of cheating has increased.
I'll read 100 essays, and some of them will be very similar, using identical case examples that I've never taught.
These examples are typically referenced on the internet, which makes me think the students are using an AI tool that is incorporating them. Some of the essays will cite 20 pieces of literature, but not a single one will be something from the reading list I set.
While students can use examples from internet sources in their work, I'm concerned that some students have just used AI to generate the essay content without reading or engaging with the original source.
I started using AI detection tools to assess work, but I'm aware this technology has limitations.
AI tools are easy to access for students who feel pressured by the amount of work they have to do. University fees are increasing, and a lot of students work part-time jobs, so it makes sense to me that they want to use these tools to complete work more quickly.
There's no obvious way to judge misconduct
During the first lecture of my module, I'll tell students they can use AI to check grammar or summarize the literature to better understand it, but they can't use it to generate responses to their assignments.
SOAS has guidance for AI use among students, which sets similar principles about not using AI to generate essays.
Over the past year, I've sat on an academic misconduct panel at the university, dealing with students who've been flagged for inappropriate AI use across departments.
I've seen students refer to these guidelines and say that they only used AI to support their learning and not to write their responses.
It can be hard to make decisions because you can't be 100% sure from reading the essay whether it's AI-generated or not. It's also hard to draw a line between cheating and using AI to support learning.
Next year, I'm going to dramatically change my assignment format
My colleagues and I speak about the negative and positive aspects of AI, and we're aware that we still have a lot to learn about the technology ourselves.
The university is encouraging lecturers to change their teaching and assessment practices. At the department level, we often discuss how to improve things.
I send my two young children to a school with an alternative, progressive education system, rather than a mainstream British state school. Seeing how my kids are educated has inspired me to try two alternative assessment methods this coming academic year. I had to go through a formal process with the university to get them approved.
First, I'll ask my students to choose a topic and produce a summary of what they learned in class about it. Second, they'll create a blog, translating the highly technical terms they've understood into a more accessible format.
My aim is to make sure the assignments are directly tied to what we've learned in class and make assessments more personal and creative.
The old assessment model, which involves memorizing facts and regurgitating them in exams, isn't useful anymore. ChatGPT can easily give you a beautiful summary of information like this. Instead, educators need to help students with soft skills, communication, and out-of-the-box thinking.
In a statement to BI, a SOAS spokesperson said students are guided to use AI in ways that "uphold academic integrity." They said the university encouraged students to pursue work that is harder for AI to replicate and have "robust mechanisms" in place for investigating AI misuse. "The use of AI is constantly evolving, and we are regularly reviewing and updating our policies to respond to these changes," the spokesperson added.
Do you have a story to share about AI in education? Contact this reporter at [email protected].
An anti-DEI investment firm is postponing its Tesla ETF, saying Elon Musk has 'gone too far' by launching a political party
- Investment firm Azoria has delayed the launch of a Tesla ETF after Elon Musk unveiled his "America Party."
- Azoria CEO and ex-DOGE advisor James Fishback slammed Musk's move, saying he was undermining Tesla.
- Tesla investors are uneasy after Musk returned to politics despite saying he would focus on the EV giant.
Elon Musk's decision to set up a new political party is already proving a headache for Tesla.
James Fishback, CEO of investment firm Azoria, said on Sunday that the firm, which has stated its opposition to DEI targets and "woke" companies, will postpone its planned public listing of a Tesla ETF, which would invest in the EV giant's shares and options.
In a post on X, Fishback, who previously worked as an outside advisor to DOGE, said that the billionaire had "gone too far" with his latest plan to set up the "America Party" and take on both Republicans and Democrats.
Fishback, who included a letter to Tesla Chair Robyn Denholm in his X post, added that the new party "creates a conflict" with Musk's responsibilities as CEO of Tesla and "actively undermines" the company's mission.
"I encourage the Board to meet immediately and ask Elon to clarify his political ambitions and evaluate whether they are compatible with his full-time obligations to Tesla as CEO," said Fishback.
Tesla's share price was down as much as 7% premarket on Monday, as investors expressed unease over Musk's decision to dive back into politics.
Wedbush Securities analyst Dan Ives wrote in a Sunday note that investors were feeling a "sense of exhaustion" over Musk's new political party.
The longtime Tesla bear said that Musk, who told investors in April that he would step back from his role in the Trump administration to focus on the beleaguered EV maker, was going in "exactly the opposite direction" to what Tesla shareholders wanted.
Ives also warned that the billionaire's extremely public falling out with Trump could create additional hurdles for Tesla in the future.
Other investors expressed similar frustrations. "Waymo has solved autonomous driving. Meanwhile, Elon is starting a new political party," wrote Tesla investor and regular Musk critic Ross Gerber on X.
An outspoken supporter of President Donald Trump, Fishback told Business Insider in January that he had served as an outside advisor to DOGE, and he proposed the idea of a "DOGE dividend" earlier this year.
The investment banker, who has said that he owns Tesla stock and that the EV giant is Azoria's largest position, also accused Musk of being fixated on "sabotaging President Trump" and said the Tesla CEO was an "absolute failure" at DOGE in a series of posts on X.
Tesla did not respond to a request for comment sent outside normal working hours.