AI Admissions Essays Align with Privileged Male Writing Patterns

Summary: Researchers analyzed AI-generated and human-written college admissions essays, finding that AI-generated essays resemble those written by male students from privileged backgrounds. AI essays tended to use longer words and exhibited less variety in writing style than human essays, particularly resembling essays from private school applicants.

The study highlights concerns about the use of AI in crafting admissions essays, as AI may dilute a student’s authentic voice. Students are encouraged to use AI as a tool to enhance, not replace, their personal narrative in writing.

Key Facts:

  • AI-generated essays most resemble writing from privileged, male students.
  • AI essays used longer words and showed less variety than human-written ones.
  • Students are advised to use AI to enhance personal expression, not replace it.

Source: Cornell University

In an examination of thousands of human-written college admissions essays and essays generated by AI, researchers found that the AI-generated essays are most similar to those authored by male students with higher socioeconomic status and higher levels of social privilege.

The paper, published in the Journal of Big Data, also found that the AI-generated writing is less varied than writing produced by humans.

“We wanted to find out what these patterns that we see in human-written essays look like in a ChatGPT world,” said AJ Alvero, assistant research professor of information science at Cornell University and co-corresponding author of the study. “If there is the strong connection with human writing and identity, how does that compare in AI-written essays?”

Alvero and the team compared the writing style of more than 150,000 college admissions essays, submitted to both the University of California system and an engineering program at an elite East Coast private university, with a set of more than 25,000 essays generated with GPT-3.5 and GPT-4, which were prompted to respond to the same essay questions as the human applicants.

For their analysis, the researchers used Linguistic Inquiry and Word Count (LIWC), a program that counts the frequencies of writing features, such as punctuation and pronoun usage, and cross-references those counts with an external dictionary.
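LIWC itself is a commercial tool with its own validated dictionary, but a minimal sketch of the general approach, tallying dictionary-category word rates and a punctuation rate per essay, might look like the following. The category word lists here are hypothetical stand-ins for illustration only, not LIWC's actual categories.

```python
# Minimal, illustrative sketch of LIWC-style feature counting.
# The mini-dictionary below is a hypothetical stand-in, not LIWC's real lexicon.
import re
from collections import Counter

MINI_DICTIONARY = {
    "pronouns": {"i", "me", "my", "we", "our", "you", "they"},
    "affiliation": {"friend", "friends", "team", "family", "community", "club"},
}

def liwc_style_features(text: str) -> dict:
    """Return per-category word rates and a simple punctuation rate for one essay."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    features = {
        category: sum(counts[w] for w in vocabulary) / total
        for category, vocabulary in MINI_DICTIONARY.items()
    }
    # Punctuation marks per word, a rough analogue of LIWC's punctuation counts.
    features["punctuation"] = len(re.findall(r"[.,;:!?]", text)) / total
    return features

print(liwc_style_features("My friends and I built a robotics club for our community."))
```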

Alvero and the team found that while the writing styles of large language models (LLMs) don’t represent any particular group in social comparison analyses, they do “sound,” in terms of word selection and usage, most like male students who came from more privileged locations and backgrounds.

For example, AI was found on average to use longer words (six or more letters) than human writers. Also, AI-generated writing tended to have less variety than essays written by humans, although it more closely resembled essays from private-school applicants than those from public-school students.
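As a rough illustration of that long-word measure (not the researchers' actual code), one could compute the share of words with six or more letters in each essay and compare the mean and spread across two corpora; the sample essays below are invented purely for demonstration.

```python
# Illustrative comparison of long-word share (6+ letters) across two essay sets.
# The example essays are invented; this is not the study's code or data.
import re
import statistics

def long_word_share(text: str, min_len: int = 6) -> float:
    """Fraction of words in `text` with at least `min_len` letters."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    return sum(len(w) >= min_len for w in words) / len(words)

def summarize(essays: list[str]) -> tuple[float, float]:
    """Mean and population standard deviation of long-word share across a corpus."""
    shares = [long_word_share(e) for e in essays]
    return statistics.mean(shares), statistics.pstdev(shares)

human_essays = [
    "I fixed bikes all summer with my uncle.",
    "Our debate team taught me to listen before I argue.",
]
ai_essays = [
    "Throughout my formative experiences, perseverance consistently defined my trajectory.",
    "Collaborative endeavors cultivated my intellectual curiosity and resilience.",
]

print("human mean/spread:", summarize(human_essays))
print("AI mean/spread:   ", summarize(ai_essays))
```

In a comparison like this, a lower spread across the AI-generated set would echo the reduced stylistic variety the study reports.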

Additionally, humans and AI tend to write about affiliations (with groups, people, organizations and friends) at similar rates – despite the AI not actually having any affiliations.

As LLMs like ChatGPT become more popular and more refined, they will be used in all sorts of settings – including college admissions.

“It’s likely that students are going to be using AI to help them craft these essays – probably not asking it to just write the whole thing, but rather asking it for help and feedback,” said Rene Kizilcec, associate professor of information science at Cornell and co-author of the paper.

“But even then, the suggestions that these models will make may not be well aligned with the values, the sort of linguistic style, that would be an authentic expression of those students.

“It’s important to remember that if you use an AI to help you write an essay, it’s probably going to sound less like you and more like something quite generic,” he said. “And students need to know that for the people reading these essays, it won’t be too difficult for them to figure out who has used AI extensively. The key will be to use it to help students tell their own stories and to enhance what they want to convey, not to replace their own voice.”

About this AI and LLM research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University

Original Research: Open access.
"Large language models, social demography, and hegemony: comparing authorship in human and synthetic text" by AJ Alvero et al. in Journal of Big Data


Abstract

Large language models, social demography, and hegemony: comparing authorship in human and synthetic text

Large language models have become popular over a short period of time because they can generate text that resembles human writing across various domains and tasks. The popularity and breadth of use also put this technology in the position to fundamentally reshape how written language is perceived and evaluated.

It is also the case that spoken language has long played a role in maintaining power and hegemony in society, especially through ideas of social identity and “correct” forms of language.

But as human communication becomes even more reliant on text and writing, it is important to understand how these processes might shift and who is more likely to see their writing styles reflected back at them through modern AI.

We therefore ask the following question: who does generative AI write like?

To answer this, we compare writing style features in over 150,000 college admissions essays submitted to a large public university system and an engineering program at an elite private university with a corpus of over 25,000 essays generated with GPT-3.5 and GPT-4 to the same writing prompts.

We find that human-authored essays exhibit more variability across various individual writing style features (e.g., verb usage) than AI-generated essays. Overall, we find that the AI-generated essays are most similar to essays authored by students who are males with higher levels of social privilege.

These findings demonstrate critical misalignments between human and AI authorship characteristics, which may affect the evaluation of writing and call for research on control strategies to improve alignment.
