

A new headache for honest students: Proving they didn’t use AI


As Artificial Intelligence (AI) creeps deeper into academic life, some students are finding themselves under scrutiny.

Photo credit: Shutterstock

It used to be that if you wrote your own assignment, you were in the clear. Not anymore. As Artificial Intelligence (AI) creeps deeper into academic life, some students are finding themselves under scrutiny, not because they cheated, but because their work might look like they did. Now, even original essays are being questioned, and students are having to defend work they actually did.

Across universities, lecturers are using AI-detection tools to flag suspicious writing. But these tools aren’t foolproof, and that’s where the problem begins. Students who genuinely put in the effort are being told their tone, structure or grammar seems “too perfect” or “too AI-like.”

Ian Thuku, a 20-year-old student at KCA University, was recently accused of using AI to complete an assignment for a Graphic Design unit. The task, he explained, involved two parts — a write-up worth 10 marks and a design component worth 40 marks. “The lecturer accused me of using AI for the write-up and threatened to either give me a zero or force me to retake the unit.”

Ian Thuku

Ian Thuku, a student at KCA University, during an interview at Nation Centre in Nairobi on July 30, 2025.

Photo credit: Bonface Bogita | Nation

Thuku denies using AI to generate the essay. “I didn’t copy or paste from ChatGPT,” he said. “I used it to find articles, online and published journals, then read those and wrote the essay myself based on what I understood and could explain in my own words.”

“The lecturer thought the abbreviations he used looked like AI output. He told me, ‘This is AI. If you get a job in future, it will be because of AI’.”

To prove his innocence, Thuku said the lecturer ran his paper through GPTZero, an online AI-detection tool. “It came out as zero percent AI. That’s when he finally said okay. But now I find myself worrying that my human-written work might get flagged. It’s annoying, but it’s part of the deal in 2025.”

Thuku said he wasn’t alone. “In that class, about 10 other students went through the same thing. One of them had their AI score come out at 100 percent, so he was told to repeat (the assignment).”

Takeaway assignment

Elsewhere, Daniel Chacha, 20, splits his academic life between two institutions. He’s a full-time student at Kenyatta University, where he studies Economics and Finance, and during the long holidays, he’s also enrolled in a CPA course at KCA University. He recalls how he and his classmates were accused of using AI to complete a 15-mark takeaway assignment.

“We were 200 students. And when the results came out, the grades varied, but still the suspicion was general. It was like the lecturer already believed none of us did it the right way.”

Daniel Chacha

Daniel Chacha, a student at Kenyatta University, during an interview at Nation Centre in Nairobi on July 30, 2025.

Photo credit: Bonface Bogita | Nation

What frustrates him most is the dismissal of the effort that goes into his work. “You wake up, spend the whole day looking for books in the library, referencing, compiling, and handwriting the whole thing... only to be suspected of using AI,” he said. “Even when you include proper referencing, even diagrams, it’s like it doesn’t count. Some lecturers just assume we all went online, copied, and pasted.”

He believes universities should invest in proper AI detection tools, and more importantly, in educating both students and lecturers on how AI works. “If a student is suspected of using AI, there should be a system to verify. Not just guesswork. And lecturers need to be trained too. Some of them don't even know how to use Turnitin properly.”

Victor Onyambu, a political science and public administration student at Moi University’s Nairobi campus, found himself at the centre of an academic integrity storm earlier this year. He had just submitted a coursework assignment comparing the political structures of Kenya and Swaziland, a topic that demanded both depth and specificity.

“She sent me an email accusing me of using AI to write my assignment,” Victor recalls. “Then she asked me to go see her in her office.”

Victor says he had invested a great deal of time digging through physical books, academic journals and a variety of online sources before writing the four-page essay. “She said the phrasing of the introduction, body, and conclusion looked like an AI font,” he said. “And she also noted that my grammar was suddenly perfect, unlike my previous assignments, which had errors. That, to her, was a red flag.”

What followed was a stressful process of proving his innocence. “It was difficult because I hadn’t used AI at all, but how do you prove that?” he asked. “I submitted my handwritten sticky notes, some drafts with grammatical errors, and even pictures of me in the library holding books, with the date visible. I was really trying to show that this was my original work.”

In addition to the photos, Victor sent his lecturer images of highlighted books he had referenced, along with links to online articles he used. He had presumed that would be enough. “But she wasn’t convinced,” he said. “She had already run the work through an AI detector and said it came out as AI-generated. She didn’t tell me which tool she used.”

Victor is not alone. Several of his classmates have also faced similar accusations. “One of my friends was accused of using AI in a philosophy and religion assignment,” he said. “It was very frustrating for him too. Being accused of something you didn’t do—it’s hectic.”

Copies and pastes

When asked what universities should do differently, Victor didn’t hesitate. “They need to stop relying on unreliable AI detection tools,” he said. “Instead, they should assign topics that are harder for AI to tackle—like emerging issues or subjects that require personal reflection. That way, you get more authentic student responses.”

He says many lecturers just don’t like AI. “And to some extent, I understand; it can make students lazy. But AI itself isn’t bad. What’s wrong is when someone copies and pastes without thinking. That’s what the lecturers don’t want.”

Dr James Mwita, a lecturer at Rongo University, says he has encountered assignments that strongly suggest they were generated by AI, and over time, he has learned to identify specific tells.

“There are things that might alert you,” he explains. “You give students an assignment, let’s say on entrepreneurship among students, and you start seeing examples that are completely out of the local context, American or Western examples in a setting where you expect Kenyan realities.”

Another major red flag, he says, is template-like phrasing. “You’ll find placeholders like ‘insert the name of the person’ or ‘insert the name of the business,’ which clearly shows AI was involved but the student forgot to personalise it. That’s common.”

Certain keywords also give it away. “Words like ‘navigate’ or ‘delve into’, they show up a lot in AI-generated content,” Dr Mwita said. “You begin noticing a pattern. Then there’s the suspicious polish. A full assignment with not even a single grammatical error? That raises eyebrows, especially when you know the student’s actual writing level.”
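The surface “tells” Dr Mwita describes, leftover template placeholders and overused buzzwords, amount to a simple pattern check. As an illustrative sketch only (not how GPTZero, Turnitin, or any real detector works, and with made-up phrase lists), such a check might look like this:

```python
# Purely illustrative heuristic for the "tells" described above.
# The phrase lists are invented for this sketch; real AI detectors
# use statistical models, not keyword matching.

PLACEHOLDERS = ["insert the name of", "[your name]", "[company name]"]
BUZZWORDS = ["navigate", "delve into", "in today's fast-paced world"]

def flag_tells(text: str) -> dict:
    """Return which naive 'tells' appear in an essay."""
    lowered = text.lower()
    return {
        "placeholders": [p for p in PLACEHOLDERS if p in lowered],
        "buzzwords": [b for b in BUZZWORDS if b in lowered],
    }

sample = ("We must delve into how students navigate entrepreneurship, "
          "insert the name of the business here.")
print(flag_tells(sample))
```

A check this crude would flag plenty of human writing too, which is exactly the problem the students in this story faced: the tells are suggestive, never proof.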

He teaches large classes, sometimes up to 600 students per semester, and says the increase in AI use has become hard to ignore. “This phenomenon has become rampant in the last two years. Students are even getting smarter; some use AI to generate content and then humanise it with localised examples. So it becomes hard to pinpoint unless you really dig into it.”

His method of verifying authenticity is, by his own admission, informal. “If you want to test this, give a student a take-home assignment and then include a similar question in a sit-in CAT or exam. You’ll quickly notice the difference. The one done in class will rarely match the quality of the take-home.”

Don’t ban it

Despite these concerns, Dr Mwita doesn’t believe in banning AI outright. “These tools are becoming more common and we can't reverse that trend. I’m in the school of thought that believes students should be taught how to use AI tools, but responsibly. They need to be honest about the extent to which they used it.”

He points to the wider academic resistance, particularly when it comes to postgraduate work. “For Master’s and PhD theses, it’s expected that your AI usage should be zero percent. But even that’s a flawed expectation. The articles we read are suggested to us through algorithms, that’s AI. We’re already relying on it.”

Although some lecturers use Turnitin, Dr Mwita says AI detection is fundamentally harder than detecting plagiarism. “With plagiarism, you just run a phrase online and see what comes up. With AI, it’s a different beast. Unless it’s poorly localised or filled with keywords, it’s hard to catch.”

And the consequences of this suspicion are tricky. “It creates a dilemma. You suspect a student, you mark over 600 scripts from home, and when you want to confront them in class, they’re not even there. So what do you do? You just grade the work.”

