When ChatGPT was introduced to the public in November 2022, it sent shockwaves through the higher-education sector. Fears abounded about students using AI to write their college essays, which would diminish the importance of one of the few personalized components of an application. Without a unique essay, would schools lose a critical metric for assessing student authenticity and individuality? How could a school trust an application with an essay potentially written by an algorithm? The future of college admissions seemed imperiled.
Finding a new normal
The reality today is quite different from the bleak one anticipated not that long ago. Sure, students are using AI to write their college essays—according to a recent survey, 1 in 3 college applicants used AI for essay help. And over 40 percent of respondents to a Duolingo English Test survey of university administrators, conducted earlier this year, said that, based on what they had read in the past year, they believed AI tools were being used in applications by both students and recommenders.
But the potential chaos that ChatGPT might wreak on college applications has largely failed to materialize, at least to the degree anticipated. What has surfaced, instead, is a more contemplative view of the role AI should play.
There are several reasons for this.
Higher education standards are evolving
First, while the sector does not have an umbrella policy on appropriate AI usage, individual universities have taken steps to define the boundaries within which students, and admissions teams, are currently operating. For example, while the University of Kansas uses plagiarism detectors (and has even developed its own tool), it suggests treating results as “information, not judge and jury.” The school advises speaking with students when a detector flags a high number of problematic areas, and cautions against hard-and-fast rules about when a paper can be outright rejected. “Don’t just set (the detector) and go,” the university advises.
Other universities have provided guidelines on how to contextualize the use of plagiarism-detection software. For example, The University of Minnesota suggests introducing text-matching software such as Turnitin as a tool for text analysis rather than a “plagiarism checker.” Often, the cause or motive for academic dishonesty may remain elusive, the university notes on its website, adding, “Because of this, it is most useful to approach the use of originality checkers or plagiarism detection tools from a pedagogical perspective rather than a surveillant or punitive one.”
Indeed, despite the uptake of gen-AI detectors, a wholesale policing of college applications and essays through plagiarism software, accompanied by penalties, hasn’t occurred, at least not publicly; such an approach would have had a chilling effect on the college admissions process. Fears of false positives, and the implications for both student well-being and legal liability, have tamped down initial interest. In fact, OpenAI, the company that launched ChatGPT, shuttered its own “classifier” trained to distinguish AI-written from human-written text due to its low accuracy.
In addition, according to the DET stakeholder survey, some admissions offices don’t have enough funding or staff to use these tools in any consistent manner.
There’s a need for clarity and regulation
Concerns about generative AI’s impact on college essays have also been mitigated by the broader context of AI in higher education overall. According to a recent Digital Education Council study, 86 percent of surveyed students were unclear about AI guidelines at their universities, even though the same percentage regularly used AI in their studies.
The survey, which garnered 3,839 responses from students across 16 countries and a wide variety of post-secondary fields of study, also found that ChatGPT was the most widely used AI tool, with 66 percent of students using it. These results illuminate the need for more nuanced discussion about the appropriate use of AI tools in particular subject areas.
For example, what might be considered appropriate for producing content in an engineering class might be deemed excessive in, say, an English class, where a student is assessed more directly on writing skills. A sampling of various university policies aggregated by Northern Illinois University regarding how AI should be utilized when brainstorming, drafting outlines and creating videos, amongst other activities, indicates the breadth of considerations students and professors must weigh when making decisions on AI tools.
Students are putting AI to good use
Two Stanford students recently created an AI tool to help students improve their college essays by analyzing around 500 essays from successful applicants to top schools like Harvard, MIT, and Stanford. The tool, called Esslo, offers personalized suggestions, such as avoiding clichés and adding detail, along with scores for writing, voice, and character. There’s a free version with one round of edits and a paid version with unlimited edits, with every paid sale providing free access to a Title I high school student.
The tool won’t write essays but helps refine them, offering feedback similar to what a teacher or counselor might give. Using AI to provide a free essay editing service to a broad swath of students is a good example of what technology can do in terms of democratizing access.
Like universities, students, too, are showing signs of self-regulation. Worry about being penalized for misuse certainly prevails, but some students are more concerned with issues such as academic integrity. In addition, overreliance on AI tools seems to negate the very reasons students pursue higher education in the first place. According to a recent Duolingo study on Gen Z behavior, students said they were prudent in their AI use not only because of fears of punitive measures but because they genuinely want to learn. Interestingly, only 24 percent of the students surveyed by the Digital Education Council said they used AI to create a first draft.
Despite concerns about overuse, the Gen Z students interviewed by Duolingo were also aware of the need to be comfortable with AI tools, expecting that their future careers will require AI integration to some degree. And while they believe AI will replace some jobs currently held by humans, some students forecast that the technology will enhance how STEM, creative, business, and legal professionals work rather than simply eliminate positions.
A path to democratizing access
In a rapidly changing educational landscape, student access to AI tools like ChatGPT has underscored the need for nuanced regulation and guidance rather than blanket restrictions. The initial fears that AI would erode authenticity in college applications have largely been replaced by an acknowledgment that this technology, when used thoughtfully, can serve as a tool for learning rather than be dismissed as a shortcut to circumvent academic or intellectual rigor. By adopting measured approaches, universities are demonstrating an openness to innovation while remaining committed to the values of higher education.
Looking ahead, the integration of AI into the college application process and higher education overall could foster a more equitable environment by broadening access to resources. Beyond text generation, AI's potential to democratize educational support to students from all backgrounds should increasingly become a focus. As universities and students continue to adapt, AI may ultimately prove to be a catalyst not for bypassing hard work but for creating new avenues for self-expression and academic growth.