What AI Detector Does Turnitin Use? Updated for GPT-5 Era
Summary
Turnitin’s accuracy was strong against older GPT-3/3.5 models, but GPT-5’s anti-detection training has reduced reliability, particularly for mixed or heavily edited AI content. Independent studies report high accuracy for fully AI-generated text but much lower accuracy (≈43%) for hybrid drafts. To reduce false positives, Turnitin hides low-confidence scores (1–19%) using an asterisk.
The article highlights key limitations, common misinterpretations, and practical guidance for students and educators. It stresses transparent AI use, staged drafting, human review, and updated academic policies as essential companions to imperfect AI-detection systems.
This article is for educational purposes only. We strongly encourage all students to follow their institution's academic integrity policies and use AI tools transparently and ethically.

As more people rely on Turnitin to check for AI-generated content, many wonder: what AI detector does Turnitin use? Turnitin uses a proprietary, LLM-based classifier built specifically to flag AI-generated writing in long-form prose submissions. The AI indicator is separate from the similarity score and appears in both the classic and the enhanced Similarity Report interfaces. Rather than searching for copied content like traditional plagiarism detection, the AI detector analyzes linguistic patterns and statistical markers.
As models advance, mixed or heavily paraphrased text is harder to classify confidently, and Turnitin has adjusted its UI and guidance accordingly (see the asterisk for low scores and the minimum text requirements); avoid over-interpreting small percentages. While students anxiously wait for their Turnitin reports to see if their papers are flagged, educators are seeking ways to crack down on AI-generated content. In this article, we'll explain the mechanism behind Turnitin's AI detection technology, examine its accuracy against the latest AI writing tools, discuss its limitations, and provide practical guidance for both students and educators.
How Turnitin's AI Detector Works
Turnitin's AI detector uses a proprietary classifier built on a large language model. It was originally trained to detect text generated by AI tools like ChatGPT (based on GPT-3 and GPT-3.5) and has since been updated to address more advanced models such as GPT-4 and GPT-5. Unlike the classic plagiarism similarity score, which searches for copied text, this detector analyzes sentence-level language patterns, such as repetition, entropy, and "burstiness"—how varied or predictable word sequences are.
When a student's paper is submitted, the text is segmented into chunks, typically overlapping sentences or passages, and each chunk receives a score from 0 to 1, where 1 means the passage is most likely AI-generated and 0 means it is likely human-written. The system aggregates these chunk scores into an overall "AI score" reflecting how much of the content likely came from AI tools.
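The chunk-and-aggregate approach described above can be sketched in a few lines. To be clear, Turnitin's classifier is proprietary: the window size, overlap, averaging rule, and the `score_chunk` stand-in below are all assumptions for illustration, not the real implementation.

```python
# Illustrative sketch of chunk-level scoring and aggregation. The real
# classifier is proprietary; "score_chunk" is a stand-in for what would
# be an LLM-based model in practice, and the window size, overlap, and
# simple averaging are assumptions made for this example.

def split_into_chunks(sentences, window=3, overlap=1):
    """Group a list of sentences into overlapping chunks."""
    step = window - overlap
    return [sentences[i:i + window]
            for i in range(0, max(len(sentences) - overlap, 1), step)]

def aggregate_ai_score(sentences, score_chunk):
    """Score each chunk in [0, 1] and average into a document-level score."""
    if not sentences:
        return 0.0
    scores = [score_chunk(chunk) for chunk in split_into_chunks(sentences)]
    return sum(scores) / len(scores)
```

With a classifier that scores every chunk as 1.0, the document-level score is 1.0; a mixed human-AI document averages out somewhere in between, which is exactly why hybrid drafts are harder to call.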
The detector only evaluates what Turnitin calls "qualifying text": continuous prose that is not in tables or lists, within a submission of at least 300 words, so the model has enough context to work with. Even so, the technology faces new limits in identifying heavily paraphrased or mixed human-AI writing, because newer models such as GPT-5 produce more nuanced text with fewer telltale AI traces.
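The qualifying-text rules stated above can be expressed as a rough pre-check. The function name, the table/list flag, and the simple regex word count are assumptions for illustration; Turnitin's actual criteria are applied internally at submission time.

```python
import re

MIN_WORDS = 300  # minimum qualifying length stated in Turnitin's guidance

def word_count(text):
    """Count word tokens with a simple regex (an approximation)."""
    return len(re.findall(r"\b\w+\b", text))

def qualifies_for_ai_score(text, in_table_or_list=False):
    """Rough check of the stated rules: continuous prose, not inside a
    table or list, with at least MIN_WORDS words of context."""
    if in_table_or_list:
        return False
    return word_count(text) >= MIN_WORDS
```

A short paragraph or a bulleted list fails the check, which is why such submissions show no AI percentage at all rather than a score of 0%.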
Claimed Accuracy and Core Metrics in the GPT-5 Era
Turnitin originally claimed its AI detection could spot AI-produced content from GPT-3 and GPT-3.5 sources up to 98% of the time, with a less-than-1% false positive rate for papers receiving an AI score above 20%. That changed with the 2025 release of GPT-5, which has been billed as more creative and less prone to hallucination.
Notably, GPT-5 incorporates "anti-detection training" features, enabling it to generate text better cloaked against traditional AI detectors. Recent independent research suggests that text from these advanced models can substantially increase false-negative rates for mainstream tools, with misclassification rates as high as 43% reported for sophisticated detection-evasion attempts. These statistics highlight the importance of using AI detection as one of multiple assessment tools, not as the sole determinant of academic integrity.
Since April 2023, over 130 million papers have been scanned with this AI detection tool, with approximately 3.5 million flagged as highly likely AI-generated. By design, the detection system intentionally misses some AI content (estimated at up to 15%) to reduce false positives, and AI scores between 1% and 19% are redacted with an asterisk on reports to prevent misinterpretation.
This is the balance Turnitin seeks to maintain over time: not catching every phrase of AI-generated content, but raising a flag when use of AI writing has risen to a level that needs review, even as AI capabilities continue to change.
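This balance is the familiar threshold tradeoff for any classifier: raising the decision threshold flags fewer human papers by mistake, at the cost of missing more AI text. A toy illustration with invented scores (not real Turnitin output):

```python
def flag_documents(scores, threshold):
    """Flag documents whose AI score meets or exceeds the threshold."""
    return sorted(doc for doc, score in scores.items() if score >= threshold)

# Invented example scores for four submissions.
scores = {"essay_a": 0.92, "essay_b": 0.35, "essay_c": 0.18, "essay_d": 0.71}

print(flag_documents(scores, 0.20))  # lenient: flags essay_a, essay_b, essay_d
print(flag_documents(scores, 0.70))  # conservative: flags only essay_a, essay_d
```

A conservative threshold accepts some false negatives (essay_b slips through) in exchange for rarely accusing a human writer, which mirrors the tradeoff Turnitin describes.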
Independent Testing & Real-World Performance
Independent tests of Turnitin's detector are beginning to reflect these new realities. For example, BestColleges reported "remarkable accuracy" in catching fully AI-generated essays from older models in its test of Turnitin's detector, while also noting increased difficulty in catching hybrid human-AI texts from more advanced models.
A Temple University study found Turnitin correctly identified 93% of human writing and 77% of fully AI-generated content, but dropped to around 43% reliability on mixed drafts, resulting in an overall error rate of about 14%.
Students have also reported inaccurate AI scores on their own work, which undermines trust in AI detectors and highlights the need for human review alongside AI scores in academic institutions. Such cases may become more common as open-source and commercial AI grows more sophisticated and better able to avoid detection. These instances have sparked a wider debate on the ethics of AI detection in academia, as the risk of false accusations continues to strain the relationship between educators and students.
Limitations and Considerations in the GPT-5 Era
While Turnitin's AI detector retains strong performance and broad coverage, the widespread anti-detection and paraphrasing capabilities of GPT-5-era models have introduced new detection limitations. The detector is now less likely to flag paraphrased or heavily edited AI writing. Various text-modification techniques exist, but students should focus on transparent and ethical writing practices.
The AI indicator continues to suppress low scores (1%–19%), displaying an asterisk instead of a number to discourage over-weighting small percentages. Separately, some institutions have chosen to disable the AI detector or restrict access to it because of licensing cost, efficacy concerns, or policy preferences.
Educators and students should be aware that no AI detector, including Turnitin's, is foolproof—especially against newer, more sophisticated AI models. Careful human review and clear academic policies are more essential than ever.
Practical Tips for Students
To maintain integrity amid advanced AI generation and avoid triggering Turnitin's AI writing indicator unnecessarily, students should:
● Draft in stages and save earlier versions or outlines to demonstrate authorship and writing development process.
● Cite AI assistance transparently if permitted by the institution's academic integrity policy, especially given the sophistication of newer AI tools.
● Use reputable AI detectors personally to pre-check work but treat results critically; Turnitin remains the authoritative system despite its evolving challenges.
● Focus on original ideas and personal voice, which remain harder for AI to replicate authentically, even with advanced models.
● Avoid overreliance on AI-generated content to maintain writing skills and trustworthiness in an era of increasingly capable AI tools.
● When AI assistance is permitted by your institution, ensure all use is disclosed transparently according to academic policy requirements.
By doing so, students can confidently write and maintain academic integrity in today's changing education landscape in the age of AI.
Guidance for Educators
Educators can continue to use Turnitin's AI detector to best advantage in the GPT-5 era by:
● Combining its scores with rubric-based grading and oral questioning to verify student understanding and authorship, given the tool's increased limitations;
● Giving clear, updated policies about acceptable AI use with example Turnitin AI detection reports, so students understand how the technology works and its limitations;
● Documenting false positives to improve their department's guidelines, so students are not unfairly penalized as the tool becomes less precise;
● Initiating discussions with students about the importance of academic honesty and ethical use of AI in academic work, given its rapid changes.
This balanced approach ensures students are fairly assessed without over-relying on a partially-automated system facing new obstacles.
Conclusion
Turnitin's AI detector is a robust but evolving safeguard for academic integrity, employing sophisticated language models to identify AI-generated content and striving to stay ahead of the rising complexity of GPT-5 and its anti-detection capabilities. It is, however, an imperfect tool that demands educator judgment and open communication with students as AI technology progresses. Understanding which AI detector Turnitin uses and how it operates enables students and teachers to work together toward authentic learning and honest grading in an era of increasingly sophisticated AI-assisted writing.
Frequently asked questions
Q1. Why does my Turnitin AI score show *% and not a number?
Turnitin shows confidence levels, and 1–19% is considered a low-confidence indication. To reduce the likelihood of misreads, no number is shown in either report view at low levels; *% is displayed instead of a numerical value.
Q2. What submissions can get an AI percentage?
Only qualifying submissions receive an AI percentage: long-form, continuous prose of 300–30,000 words in a supported language, submitted as a .docx, .pdf, .txt, or .rtf file. Short or non-prose submissions receive no number.
Q3. How accurate is it, and can I rely on small scores?
Based on Turnitin’s own reporting, expect roughly 4% of human-written sentences to be false positives. There are more false positives at low document percentages (thus the *%). Small indications should be treated as prompts for further review, not proof or disproof of AI authorship.
Q4. Why do some universities limit AI detectors?
Some are concerned about how reliable and fair the detectors are and whether they may contain bias; others recommend holding a conversation with the student before imposing any penalty based solely on the detector.
Q5. Does the detector work on code, tables, or bullet lists?
The AI detector only works on long-form articles or essays. It does not work on lists, tables, or code, so no AI percentage is shown for them.