Tina Austin is an AI-in-education strategist and creator of the UnBlooms™ Framework, used internationally to support authentic student thinking in the age of AI. She has taught at UCLA, USC, CSU, and Caltech, and her faculty workshops on AI adoption have reached institutions across North America, Europe, Australia, and Asia. Tina advises the California Department of Education AI taskforce, the Los Angeles AI Taskforce, and the Marconi Society’s AI Institute, and her AI framework was recently featured by OpenAI and Microsoft.
Generative AI has reshaped how students write, think, and express themselves—especially in college applications. Around the world, admissions officers are asking a new question: in an age when AI can write so well, how can we recognize authentic student voices?
As an educator and researcher, I believe the answer isn’t stricter detection tools, but rather rethinking how we teach, guide, and evaluate student writing so that students are prepared for the future.
How to find authentic student voices in the age of AI
One of the most challenging aspects of this new era is that AI often produces what appears to be "better" writing than students naturally create—at least by conventional standards. AI text is typically balanced and measured in tone, well-structured with clear topic sentences and transitions, free of redundancy or awkward phrasing, and grammatically flawless.
But real 17-year-olds rarely write with such polish. They circle back on ideas, contradict themselves, use fragments for emphasis, and occasionally lose the thread. Their metaphors might be strained, their vocabulary sometimes reaches beyond their grasp, and their humor lands imperfectly, though often better than AI’s. So the very qualities that make writing sound “good” can also signal inauthenticity.
As a workaround, some institutions have begun turning to video essays, hoping that capturing students on camera will reveal authentic voices in ways text cannot. The reasoning is that it's harder to hide behind AI-generated language when you're speaking in real time, expressing yourself visually, and letting your personality come through.
Yet that advantage may be short-lived. Advanced video generation tools are developing rapidly, making it increasingly possible to produce convincing video content without genuine student participation. So what seemed like a solution to the authenticity problem is quickly becoming just another frontier in the ongoing cat-and-mouse game between detection and deception.
There's no technical fix to a fundamentally human problem. No single format or tool will definitively separate the authentic student voices from the fabricated. It’s worth being clear: AI assistance itself is not inherently harmful, nor do we have evidence that responsible AI assistance diminishes students’ capacity to think (Austin, 2025, July). The real work happens in building the relationships between advisors and students, and between admission officers and applicants.
As one admissions director I spoke to put it, “We are not looking for essays untouched by AI; we are looking for evidence of self-knowledge.” I agree wholeheartedly. The question we must ask our students is not “Did you use AI?”; it’s “Is this your thinking, expressed in your voice, about your actual life?”
Students who can answer that question are already thinking like the adults we want them to become. They're distinguishing between getting help and abdicating responsibility. They're engaging with their own stories. They're learning to communicate what's true about themselves. Those are the applications that matter, not because they're perfectly written, but because they're genuinely revealing.
Why AI challenges our sense of student authenticity
Before we can expect students to use AI responsibly, we must teach them to recognize when they're thinking versus when the AI is thinking for them. Our goal as educators is to restore trust in how students express their voices while making authenticity, reflection, and ethical use of AI part of how we assess their work.
Drawing on my own research and the frameworks I piloted in my classes over the past three years, I've developed two complementary tools specifically for this challenge: the UnBlooms™ Framework, a pedagogical architecture for AI-mediated learning (Austin, 2025, October); and the UnBlooms™ Critical Evaluation Scale, a rubric to help students recognize how their reasoning differs from AI output and assess the quality of their own thinking (Austin, 2025, October).
Introducing the UnBlooms™ Framework
The UnBlooms Framework is a non-hierarchical, recursive model suited to the AI age. Unlike traditional taxonomies such as Bloom’s, which assume knowledge builds toward creation, it centers reflection as the anchor: students move fluidly among Create, Evaluate, Analyze, Apply, and Understand, depending on context.

Crucially, this framework guides educators on not just how to use AI, but when it is pedagogically appropriate to do so, providing flexible entry points that adapt to different learners and disciplines.
Using the UnBlooms™ Critical Evaluation Scale
Students trained within the UnBlooms framework will naturally engage in the kind of thinking that makes authentic authorship visible. To support this, I developed the UnBlooms Critical Evaluation Scale, which students can use to evaluate their own college essay writing in order to:
- Recognize how their voice differs from AI's (Levels 1-2)
- Understand why AI can't capture their specific experience (Level 3)
- Evaluate what's missing from AI's version and why that matters (Level 4)
- Create something that transcends what AI could produce (Level 5)
Together, the Framework and Scale teach students to recognize the difference between authentic cognitive work, which draws on human reasoning strategies (e.g., "I'm stuck on why this moment mattered, so I'm going to freewrite about it, talk to someone, or revisit the memory"), and cognitive outsourcing, which delegates the core thinking to machines (e.g., "I'm stuck on why this moment mattered, so I'll ask AI to explain why it might have been meaningful").
The key insight: Students who reach Level 5 on the UnBlooms Scale don't need to avoid AI; they've learned to think beyond it. Their essays show genuine cognitive ownership because they've practiced distinguishing their reasoning from machine output.
How to evaluate student work in the AI era
Scientific journals such as Nature (Nature Portfolio, 2025) have faced the exact challenge now confronting college admissions: how to maintain integrity when AI can generate professional-quality content. Their solution isn't detection, but rather mandatory disclosure with clear boundaries on what constitutes authentic authorship. Leading journals now require:
- Detailed disclosure: Authors must state what AI tool was used, how it was used, and why (grammar checking, data visualization, literature review, code generation)
- Clear placement: Disclosures appear in Methods sections or Author Contributions statements
- Authorship boundaries: AI cannot be listed as an author because authorship implies accountability, which machines cannot assume
Similarly, students should disclose AI use in their application essays: whether they used AI to check for grammar, organize ideas into an outline, or generate examples to guide their composing. This transparency reveals where the thinking happened. Just as scientific journals recognize that using AI for grammar is different from using AI for hypothesis generation, admissions can distinguish whether a student’s use of AI is:
- Acceptable: Grammar checking, citation formatting, translation assistance
- Problematic: Topic brainstorming, insight generation, narrative structure, "getting unstuck" on meaning
- Disqualifying: Prompt-to-essay generation, even if the student "edited" it afterward
The key principle: Like journal authorship, if an AI tool generated the core intellectual contribution—the "what matters to you" and "why"—then the work lacks authentic voice, regardless of whether the student can explain it afterward. And this is not just theory. MIT Sloan’s Teaching & Learning Technologies group recently concluded that "AI detectors don't work," citing high error rates and disproportionate false positives for multilingual writers (MIT Sloan Teaching & Learning Technologies, n.d.).
My UnBlooms framework isn't about detecting AI use—disclosure handles that. My goal is to teach students to think critically about their own cognitive processes so they can distinguish between getting help and outsourcing their thinking. Instead of hunting for AI use, we ask: Does this essay show signs of metacognitive awareness? Does the writer reflect on how they thought through a problem? Are there moments where they question their assumptions? Do they critique or contrast perspectives? Does the essay feel like an overpolished shell, or does it have authentic cognitive messiness?
Rethinking our roles as educators, counselors, and admission officers
Working within these frameworks (such as UnBlooms), counselors can help students design authentic writing experiences that integrate ethical reflection on AI use—with pre-writing reasoning checkpoints, mid-task critique moments, and post-draft synthesis that demonstrates genuine thinking.
For counselors, the shift requires moving from corrective editing to reflective inquiry. Rather than refining prose, counselors can prompt students to excavate meaning by asking generative questions that encourage reflection (“What moment sparked your interest in this field?”); requesting concrete examples (“Describe one instance when helping others changed your perspective”); and clarifying ethical boundaries around tool use (distinguishing between mechanical support and conceptual outsourcing).
This reframing centers student agency and positions counselors not as ghostwriters, but as partners in metacognitive development.
And this process works for admission officers and application readers too. Instead of reading the essay first, I read in this order:
- Activities list and transcript: What has this student actually done?
- Short answers: What's their natural voice?
- Recommendations: How do others describe them?
- Essay: Does this fit with everything else?
This way, I’m not judging the essay in isolation. I'm asking: does this essay cohere with the student I'm seeing everywhere else? If yes, it's probably authentic—even if it's polished. If not, something's off.
Moving beyond detection, towards integration
AI will continue to evolve, blurring the distinction between human and machine authorship. The integrity of admissions depends not on perfect detection but on ethical interpretation. Some colleagues go even further and suggest that the essay-based application itself is an outdated format: rather than trying to verify essay authenticity, admissions should move toward project-based assessment, portfolios of work over time, or documented problem-solving processes.
Another idea is to create opportunities for applicants to demonstrate ongoing intellectual curiosity through interviews, supplemental questions that require lived experience, or documentation of their learning process.
I am hopeful that preserving authentic student voice remains possible in the age of AI, but only if institutions choose to value process over performance.
References
Austin, T. (2025, October 20). Teaching with AI across disciplines: Reframing how we learn — The UnBlooms™ Framework [Conference presentation]. OpenAI Inaugural Higher Education Summit. Zenodo. https://doi.org/10.5281/zenodo.17411791
Austin, T. (2025, July). Brain rot isn’t real, but cognitive outsourcing is. Nick Potkalitsky Substack (newsletter essay). https://nickpotkalitsky.substack.com/p/brain-rot-isnt-real-but-cognitive
MIT Sloan Teaching & Learning Technologies. (n.d.). AI detectors don’t work. Here’s what to do instead. MIT Sloan School of Management. https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/
Nature Portfolio. (2025). Artificial intelligence (AI) editorial policies. Springer Nature. https://www.nature.com/nature-portfolio/editorial-policies/ai