Through our annual Doctoral Dissertation Awards, the Duolingo English Test (DET) proudly recognizes and celebrates outstanding doctoral research in the field of language assessment.

This year’s winners represent a diverse range of projects, tackling critical challenges in language assessment with creativity and rigor. From exploring the use of virtual reality in oral proficiency testing to analyzing the impact of note-taking methods on listening comprehension, these researchers are breaking new ground and offering fresh insights into how language assessments can better serve learners, educators, and institutions around the world. Read on to learn more about their work!



Wiktoria Allan

Lancaster University

Through a focus on extended time accommodations, Wiktoria Allan’s research investigates how English language testing can better support students with ADHD. This study examines how learners with and without ADHD use and perceive additional time in reading and listening-to-write tasks, and whether these accommodations are helpful or harmful. The work addresses a critical gap in second-language testing, and its findings have the potential to shape policies that create fairer, more inclusive assessments, empowering neurodivergent students in high-stakes academic and professional settings.


Carla Consolini

University of Oregon

What makes a second-language Spanish essay stand out? Carla Consolini’s research analyzes linguistic features such as lexical diversity and complexity, aiming to identify measurable predictors of high-quality writing across the development of learners of Spanish as a second language. By extending automated writing evaluation tools to Spanish, this work addresses a growing demand for large-scale language assessment, offering faster, more specific feedback for learners and educators alike.


Jieun Kim

University of Hawaiʻi at Mānoa

Does the way students take notes—by hand or on a keyboard—impact their listening test performance? Jieun Kim’s research explores this timely question, analyzing how note-taking modes affect test outcomes and the content of notes among second-language learners. The findings give test designers an opportunity to consider policies that ensure consistent test scores that accurately reflect learners’ listening abilities and promote fairness for all.


Valeriia Koval

University of Bremen

Assessing integrated academic writing poses unique challenges, especially when raters must distinguish between a writer’s original language and borrowed material. Valeriia Koval’s research explores how a source-use detection tool impacts the evaluation process, examining its effects on rater perceptions, processes, and the quality of their judgments. By comparing ratings with and without the tool, the study investigates whether this technology improves fairness, consistency, and accuracy in evaluating skills like paraphrasing and source integration. The findings are expected to enhance rater training and support the use of technology for more transparent and reliable academic writing assessments.


Sebnem Kurt

Iowa State University

Could virtual reality (VR) be the future of language assessment? Sebnem Kurt’s study explores how VR-based testing compares to traditional methods in evaluating oral English proficiency. By addressing issues like accessibility, security, and cost, this research could revolutionize language certification, offering a flexible and reliable alternative that meets modern testing needs, including during crises like the COVID-19 pandemic.


Jennifer Kay Morris

Lancaster University

For professionals in Türkiye’s private technology sector, English proficiency is often a key to global success. Jennifer Kay Morris explores how these workplace demands can inform the design of language for specific purposes (LSP) tests. Using online ethnography, this research aims to align language assessments with real-world business communication needs, offering a model that can transform how workplace language skills are evaluated globally.


Xue Nan

Beijing Language and Culture University

Incorporating authentic audio from platforms like TikTok and Bilibili into Chinese language tests presents both opportunities and challenges. Xue Nan’s research uses natural language processing (NLP) to develop models that assess the difficulty of these materials, ensuring consistency in listening evaluations. This work modernizes Chinese language teaching by leveraging real-world content, helping learners build practical communication skills while improving test fairness and reliability.


Chenyang Zhang

University of Melbourne

How do policies promoting languages other than English (LOTE) in China’s National Matriculation Test (NMT) influence education on the ground? Chenyang Zhang’s research examines this question, exploring the complex ways policymakers, teachers, and students interact with LOTE testing amid China’s push for multilingualism under initiatives like the Belt and Road Initiative. By developing a model to capture test impact, this study offers valuable insights into addressing resource disparities and reshaping policies to promote equitable multilingual education.


Congratulations to the winners!

We congratulate these exceptional scholars on their achievements and are honored to spotlight their contributions, which advance academic understanding and have real-world implications for making language assessments more accessible, equitable, and effective. Their research enriches the field and has the potential to shape language assessment practices for years to come.

🏆 Read about past winners of this award: 2020, 2021, 2022, 2023
