
AI Detectors Are Bunk!

Education


Introduction

In recent discussions surrounding the integration of AI technologies in educational settings, one frequent proposal is to use AI detectors to combat AI-generated plagiarism. As a professor deeply engaged in the intersection of technology and its societal implications, I must contend that these AI detectors are fundamentally flawed and should not be employed in academic contexts.

The Flaws of AI Detection Tools

While AI-generated images can be tagged or watermarked, AI-generated text cannot be reliably marked: any tags embedded in text can be effortlessly removed. Consequently, several companies have developed tools claiming to detect AI-generated text by recognizing specific patterns. Initially, the idea seemed reasonable; OpenAI, the creators of ChatGPT, even released their own AI detector in early 2023, only to quietly withdraw it within months, citing its low rate of accuracy. The reality has become clear: these tools are rife with issues.

The primary problems with AI detectors stem from their propensity for false positives and false negatives. Unlike traditional plagiarism detection systems, which point to specific sources and matching passages, AI detection tools offer only a vague probability estimate that a given piece of text was AI-generated. This uncertainty can lead to grave consequences: students may be wrongfully accused of academic dishonesty, especially when faculty lack the training to interpret these probability scores correctly.
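The probability-threshold framing above can be sketched in a few lines of code. Everything here is a hypothetical illustration, not any real detector's internals: the score distributions and the 0.5 cutoff are assumptions chosen to show why any overlap between "human" and "AI" scores forces a trade-off between false positives and false negatives.

```python
import random

# Hypothetical detector: real tools return only a probability-like score,
# not a source or citation. We simulate overlapping score distributions
# for human-written and AI-generated text; the means and spread below are
# invented for illustration, since real detectors do not publish theirs.
random.seed(42)

def detector_score(is_ai: bool) -> float:
    """Simulated 'probability this text is AI-generated', clamped to [0, 1]."""
    center = 0.7 if is_ai else 0.3
    return min(1.0, max(0.0, random.gauss(center, 0.2)))

THRESHOLD = 0.5  # flag anything above this as "AI-generated"

human_scores = [detector_score(is_ai=False) for _ in range(1000)]
ai_scores = [detector_score(is_ai=True) for _ in range(1000)]

false_positives = sum(s > THRESHOLD for s in human_scores)   # honest students flagged
false_negatives = sum(s <= THRESHOLD for s in ai_scores)     # AI text that slips through

print(f"False positive rate: {false_positives / 1000:.1%}")
print(f"False negative rate: {false_negatives / 1000:.1%}")
```

Because the two distributions overlap, no threshold eliminates both error types: raising it reduces wrongful accusations but lets more AI text through, and lowering it does the reverse.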

Moreover, much like the AI systems they aim to detect, AI detection tools themselves operate as black boxes. None of the notable systems are open-sourced, obscuring our ability to fully comprehend their mechanisms, despite the presence of FAQs purporting to explain their functionality. OpenAI itself explicitly acknowledges in its FAQ that AI content detectors have not been "proven to reliably distinguish between AI-generated and human-generated content."

Although some educators advocate for these tools on various platforms, it is essential to recognize that the pace of AI development is extraordinarily rapid. A detector that performs adequately against one generation of models is unlikely to keep up as the technology evolves. Current AI generation tools like ChatGPT and Midjourney are the most basic versions we will ever use; they will only improve over time. Additionally, many techniques exist for generating AI text that evades detection, with numerous resources sharing prompts and methods for steering AI output in ways that bypass detectors entirely.

This ongoing "AI detection arms race" pits institutional goals (large classes with easily graded assignments) against student goals (getting through mundane tasks efficiently to advance their educational journeys). This dynamic is unsustainable and must be addressed.

Rethinking Assignments: A Lesson from Epistemology

To remedy this situation, we must take a moment to explore epistemology, the study of knowledge itself. There are two types of knowledge: a priori and a posteriori. A priori knowledge exists independently of experience; in practice, this means an assignment whose expected answer we, as educators, can specify in advance, before seeing any student work. Unfortunately, these are precisely the kinds of assignments that AI can produce quickly and effectively.

In contrast, a posteriori knowledge is contingent on experience and derived from observations or experiments. For example, if we ask students to write about the impact of World War One on their town or family, neither we nor the AI can predict the outcome. These types of assignments require genuine student engagement and research, resulting in essays that are more authentic and potentially more compelling.

While students could certainly use AI to assist in brainstorming or editing, actual engagement in the writing process is essential. Assigning this type of essay is beneficial not just to minimize the influence of AI but also to make the assignment more enriching for the student.

Conclusion: Moving Forward

Transitioning to this approach will undoubtedly present challenges for educators, including restructuring assignments and grappling with the intricacies of grading. Nevertheless, this type of pedagogical evolution is critical to addressing the issue of AI-generated work without resorting to untrustworthy detectors.

We must put an end to this misguided AI detection arms race; the technology simply cannot be trusted.

Thank you for reading, and if you found this discussion helpful, please consider subscribing.


Keywords

  • AI detection
  • Plagiarism
  • A priori knowledge
  • A posteriori knowledge
  • Educational integrity
  • Academic honesty
  • Generative AI
  • Assignment design

FAQ

What are AI detection tools?
AI detection tools are software programs designed to identify text generated by artificial intelligence, often used to detect potential plagiarism.

Why are AI detectors not reliable?
AI detectors often produce false positives and negatives, lack transparency, and cannot definitively determine whether content is AI-generated or human-written.

What are a priori and a posteriori knowledge?
A priori knowledge is independent of experience, while a posteriori knowledge relies on experience and observation; the distinction has implications for how assignments are structured.

How can educators create assignments that minimize the use of AI?
Educators can design assignments that require personal reflections, unique observations, or experiences that AI cannot replicate, thereby encouraging authentic student engagement.

What should educators do instead of using AI detectors?
Instead of relying on untrustworthy AI detection tools, educators should focus on creating more meaningful assignments that inspire genuine learning and cannot be easily automated by AI systems.