If you’ve been tempted to let ChatGPT write your Valentine’s Day message or help craft an apology to someone you care about, new research suggests that’s a genuinely bad idea—even if the result sounds perfect.
Psychologists at the University of Kent surveyed 4,000 people and found that using AI for personal communications like love letters, apologies, and wedding vows consistently made the sender look worse. Participants judged AI-assisted writers as less considerate, less genuine, less reliable, and more apathetic—and that was true even when the AI-generated content was high quality and the person disclosed they’d used AI to write it.
The research was conducted as part of the Trust in Moral Machines project, a collaboration between the University of Kent and the University of Exeter, and has been published in the journal Computers in Human Behavior.
What the Research Actually Found
Across six experiments, the pattern held consistently. When people knew a personal message had been outsourced to AI, they interpreted it as a signal that the sender didn’t value the relationship enough to invest real effort. The output didn’t matter as much as the process. A beautifully written AI love letter was still judged more harshly than an imperfect human one.
Dr. Scott Claessens, one of the study’s authors, explained that people don’t just evaluate what you say—they evaluate how you said it and what that reveals about how much you care. Dr. Jim Everett added that using AI signals you’ve decided the task isn’t worth your time, regardless of how efficient or capable the technology is.
The backlash was strongest for what the researchers call “socio-relational tasks”—personal messages, emotional communications, expressions of love or remorse. Practical tasks like using AI to write a recipe or organize a schedule triggered far less negative judgment. The distinction makes intuitive sense: nobody expects you to pour your heart into a grocery list.
The findings validate something many people already feel instinctively. Reddit communities have erupted in criticism when users admitted to having AI write their wedding vows. The research essentially confirms that this reaction isn't mere snobbery; it reflects something real about what personal communication is supposed to signal.
What Real People Think
The study’s findings mirror what ordinary people say when you ask them directly. Jacqueline McKenzie from Tunbridge Wells said she’d reject AI for any personal communication outright. Liam Goodhew from Bexley put it simply when talking about his partner Paige: she’s “worth more than that.” Reza Jafary said Valentine’s messages need to come from the heart—the source matters as much as the sentiment.
These reactions aren’t technophobia. Most people use AI tools regularly for work tasks without any moral qualms. The line gets drawn at emotional communication, where the effort itself is part of the message.
Interestingly, the research echoes a satirical point George Bernard Shaw made back in 1898 in his play Candida, which questioned whether machine-generated romance could ever carry genuine sincerity. Shaw was writing about a very different kind of mechanization, but the underlying concern about authenticity in emotional expression turns out to be remarkably durable.
The Broader Implications
The researchers note that the judgment extends well beyond romantic contexts. Apology emails, condolence notes, thank-you messages: anything where effort is understood to signal care gets evaluated through the same lens. If someone realizes you used AI to write their condolence card, the comfort it was meant to provide largely evaporates.
There’s also a professional dimension worth considering. People who publicly celebrate using AI for everything, including personal communications, may be signaling something unintended to colleagues and connections. The same logic that makes an AI love letter feel hollow can apply to workplace relationships where trust and genuine engagement matter.
The timing of the research is pointed. It arrives as AI tools grow increasingly capable of producing emotionally resonant writing that is difficult to distinguish from human output. The study argues that technical indistinguishability isn't the point: the knowledge that AI was involved changes how the message is received, regardless of quality.
The Trade-Off Is Real
None of this means AI writing tools lack value. For professional documents, technical content, practical communication, and countless other tasks, AI assistance is genuinely useful and carries no meaningful social penalty.
The specific cost identified in this research is narrow but important: using AI for personal emotional communication saves time while risking something harder to recover—the perception that you don’t care enough to be bothered. In relationships where that perception takes hold, no amount of technically impressive prose makes up for the deficit.
The researchers aren’t saying AI can’t write a good love letter. They’re saying that a good love letter isn’t just about the words. It’s about the evidence, embedded in those words, that someone chose to sit down and think carefully about another person. That evidence disappears the moment the writing is delegated—and people notice.
This Valentine’s Day, the most sophisticated thing you can do might be to put the AI away and just write something yourself. It doesn’t have to be perfect. In fact, according to this research, the imperfections might be exactly the point.