The vanillafication of prose (or “Think before you AI before you publish”).

TL;DR – “The solution isn’t to abandon AI altogether—it’s to use it wisely. By treating LLMs as assistants rather than substitutes, writers can focus on injecting their unique perspectives and voices into their work. After all, great writing isn’t just about delivering information; it’s about building connections, challenging ideas, and sparking inspiration. In the end, human creativity remains the ultimate antidote to the vanillafication of prose.”

Richard Quin (reproduced from a LinkedIn post, 30 Jan 2025)


The likes of ChatGPT and Claude.ai are incredibly useful tools if you’re in the business of professional writing, or maybe fancy the idea of joining the ranks of LinkedIn Authors and thought leaders.

However, as with any advanced tool, you really need to understand how they work if you want to get professional results. Slide into my DMs if you’re looking for slightly too short lengths of decking timber with that ‘hacked with a bread knife’ aesthetic.

December was obviously a time of reflection for many. LinkedIn was awash with thought pieces, 2024 reflections, 2025 resolutions and predictions, along with gems like “what planning my summer holidays taught me about leadership” (that last one is a fictional example).

As I scrolled through LinkedIn I couldn’t help noticing how similar much of what I was reading sounded, in terms of both language and structure. So, I did what any AI nerd would do and turned ChatGPT on itself by prompting it with an unstructured word salad of observations and gripes, and asking it to draft an article for me.

The title is my own; the rest of this article is the unedited output, including the accompanying image (prompt by ChatGPT, generated by DALL-E).

ChatGPT is surprisingly self-aware.

-----------------------------------------------------

Over the past year, a curious trend has emerged across LinkedIn and other professional platforms. Articles that once carried the distinct voices and perspectives of their authors have begun to take on a uniform, almost cookie-cutter feel. Many of these pieces—structured with precision, yet devoid of flair—bear the unmistakable hallmarks of generative AI tools like ChatGPT.

While these tools promise efficiency and scalability, the influx of AI-generated content has led to what I call the "vanillafication of prose." In the pursuit of mass-producing thought leadership, many writers have unwittingly sacrificed originality, engagement, and the power of authentic storytelling.

This article examines the challenges of AI-generated content, focusing on how reliance on large language models (LLMs) is eroding the uniqueness and impact of professional communication.

The Predictable Template of AI Writing

One of the most identifiable traits of AI-generated articles is their rigid adherence to a predictable structure:

  1. Introduction: Sets up the premise with a broad observation or a relatable anecdote.

  2. Subheadings: Break down the topic into digestible parts, often with generic titles like "Why This Matters" or "The Benefits and Risks."

  3. Summary/Conclusion: Recaps the main points and ends with a call to action, often generic and uninspiring.

This formula, while serviceable, often lacks the personality and nuance that human authors bring to their work. The result? Writing that feels polished but flat—like a symphony played entirely on a single note.

Losing the Spark of Originality

AI excels at synthesizing information from existing sources, but this strength is also its greatest limitation. By design, language models generate content by remixing what they’ve learned from vast datasets. This means their output often regurgitates conventional wisdom or well-trodden ideas without offering fresh perspectives or groundbreaking insights.

The problem with this is twofold:

  1. Readers encounter nothing new, making the content forgettable.

  2. Writers risk losing their own voice in the noise of AI-assisted sameness.

Originality—whether in perspective, tone, or approach—is what distinguishes truly impactful writing. Without it, content becomes indistinguishable from the rest.

Insights Without Depth

While AI can summarize trends and surface common patterns, it struggles to replicate the deep, context-driven insights that human experts bring to the table. Thought leadership requires the ability to draw connections between disparate ideas, challenge assumptions, and offer bold predictions.

For example, a seasoned executive might write about their unique approach to scaling a company during an economic downturn, weaving in personal anecdotes and lessons learned. An AI, by contrast, might generate a stilted, generic list like "focus on cost optimization" or "prioritize high-value projects" without adding the depth of lived experience.

The absence of original insights undermines the credibility of the author and diminishes the value of the content.

The Voice Doesn’t Sound Quite Right

Another telltale sign of AI-generated writing is the unnatural language it often employs. While LLMs are capable of crafting grammatically correct sentences, they can also produce phrasing that feels out of step with the author's usual tone or vocabulary.

For example, a tech-savvy startup founder might naturally use conversational language, peppered with industry slang and cultural references. An AI, however, might generate a stilted, overly formal piece that feels like it came from a corporate handbook.

This disconnect can alienate the writer's audience, who are likely familiar with their authentic voice. Worse still, it can make readers question whether the author even wrote the piece.

Readers Can Spot the Lack of Authenticity

Perhaps the most troubling consequence of AI-generated content is its impact on audience engagement. Readers are remarkably perceptive when it comes to detecting authenticity—or the lack thereof. When content feels generic or impersonal, it fails to connect on a meaningful level.

The irony is that the tools meant to save time and expand reach can ultimately have the opposite effect. If an audience suspects that a piece has been churned out by an AI, their trust in the author—and the organization they represent—may erode.

In an age where attention is scarce and trust is everything, disengagement is a cost that few professionals can afford.

Conclusion: Balancing AI and Authenticity

The rise of AI-generated content poses a profound challenge to professional communication. While tools like ChatGPT offer convenience, relying on them too heavily risks reducing writing to a series of predictable patterns, stripping it of the originality, insights, and authenticity that audiences crave.

The solution isn’t to abandon AI altogether—it’s to use it wisely. By treating LLMs as assistants rather than substitutes, writers can focus on injecting their unique perspectives and voices into their work. After all, great writing isn’t just about delivering information; it’s about building connections, challenging ideas, and sparking inspiration.

In the end, human creativity remains the ultimate antidote to the vanillafication of prose.