Is AI Ruining Words?

Is AI ruining words? It’s a big question, and maybe an uncomfortable one. It might even feel a bit dramatic. But language is always changing – with culture, technology, and time. That’s nothing new. What’s new is the role AI is starting to play. Machines are now producing writing at scale, affecting how we communicate every day. And that is making many of us wonder: is AI changing words in a helpful way, or is it ruining their meaning altogether?

This question matters because it’s happening right now. The debates, the controversies, and the conversations about AI and language are everywhere. You see them online, in workplaces, in the content we consume daily, at the dinner table (especially if you’re in our house). People are asking what AI means for B2B communications and marketing and how it’s shaping the messages we put out into the world. It also matters because we already feel the effects of overused, jargon-heavy content in marketing materials, landing pages, social posts, and blogs.

There’s a growing sense that connecting with the content we read is getting harder. We want it to feel authentic and real, like it was written purposefully and matters to someone. We yearn to trust that what we’re reading is honest and meaningful.

So, how is AI changing how we write and communicate in B2B marketing, especially in a world where content is produced faster and en masse? Is it making some words feel tired and hollow, or is it simply reflecting patterns we’ve been following all along?

And maybe the bigger question isn’t whether this needs addressing, but how on earth we would even begin to fix it. Do we rely on more AI to help us write “better,” or is the answer a more deliberate, human-only approach to editing? If words feel overused and meaningless, is that the fault of the technology or how we’re using it? Should every app and every part of business keep adding more AI? Is the solution really more AI and fewer humans? It feels like that’s the direction we’re heading. But again, I ask (because I’m genuinely, honestly wondering): is that really the answer?

These are big questions, and they feel bigger than just whether AI is “owning” certain words. Many of the words tied to these conversations, like empower, optimize, redefine, leverage, elevate, and foster, were already overused and a little cringe-worthy long before AI came along. They’d started to lose their impact, becoming the language we skim past without much thought. What AI has done is turn up the volume, flooding us with these words at a scale we’ve not seen before. AI didn’t create the problem, but it’s made us hyperaware of it. 

How AI Shapes the Words We Use

AI isn’t thinking, and it isn’t clever. It’s a machine. It processes massive amounts of data, learns patterns from what it’s been given, and predicts what might come next. As Emily M. Bender so perfectly puts it, large language models are “stochastic parrots.” They can generate language that sounds plausible, but they don’t understand the meaning behind it.

Large language models, like those powering ChatGPT, are trained on enormous amounts of data, from practically everything available on the web to digitized books and research articles. According to BBC Science Focus, the training data included 570 GB of content from books, Wikipedia, research articles, web texts, websites, and more. This totaled approximately 300 billion words fed into the system to build its initial understanding of language patterns.

These models are incredibly good at producing coherent, even convincing language in response to a prompt. As Rodney Brooks points out, they seem to build an internal understanding of how language works, allowing them to translate between languages or generate reasonable-sounding text on almost any topic. But that’s all it is. Reasonable.

It’s important to remember that this ability is purely mathematical. These models don’t “understand” what they’re creating; they’re recognizing patterns. For example, unbeknownst to many, language models break sentences into tiny pieces called tokens. Tokens can be individual words, parts of words, or even punctuation marks. The models then analyze billions of these tokens to predict which one should come next in a sentence. This explains why AI-generated writing often lacks depth and purpose.
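To make the idea concrete, here’s a minimal Python sketch of next-token prediction using simple follower counts. The corpus, the `predict_next` function, and the one-token “context” are all invented for illustration; real models use subword tokens, vastly larger corpora, and much richer context, but the core mechanic, predicting the statistically likely continuation, is the same.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows each token in a
# tiny corpus, then predict the most frequently observed follower.
corpus = "leverage your data leverage your tools leverage this opportunity".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(token):
    """Return the most frequently observed token after `token`, or None."""
    counts = followers[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("leverage"))  # prints "your": the corpus's dominant pattern
```

Notice what happens: whatever phrase the training text repeats most becomes the default prediction. That is exactly why overused constructions keep resurfacing in generated copy.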

AI will become more sophisticated in the future – but for now, it’s a glorified probability output experiment. A very expensive, very energy-hungry tech-bro project that we’ve all become willing participants in.

Then there’s an even more specific issue: these models aren’t designed with B2B content best practices in mind. They pull from a massive range of content: everything from news articles to personal blogs to social media posts. It might seem obvious, but it’s important to note that AI isn’t designed to help you create better B2B messaging! That’s why the same words keep showing up. They feel safe and familiar because they’ve been used repeatedly in the kind of content AI was trained on. But just because these words are common doesn’t mean they’re the best choice for your message.

All this leads to the main problem: AI tends to push us toward what’s generic. It doesn’t take risks or make bold choices. It doesn’t have a sense of what feels unique or personal. Instead, it creates content that is safe and sits comfortably in the middle ground. And while that might be fine for filling space, it doesn’t help us stand out.

For B2B marketers, this is a difficult challenge. Businesses need to stand out, but when everyone is using the same tools and producing the same language, everything starts to sound the same. When content starts to feel predictable or overly safe, it loses its ability to connect. It becomes harder to trust. If your audience feels like they’re reading something churned out by a machine, they’re less likely to buy into what you’re offering.

So, AI is doing exactly what it was built to do – identify patterns and produce plausible language. But that’s all it can do. It can’t think like you. It can’t add your perspective, creativity, or personal touch. And if we rely on it too much, we risk losing what makes our communication authentic and human. 

Why These Words?

Certain words show up repeatedly in AI-generated content, such as “leverage,” “delve,” and “redefine.” They’re common even in Reddit threads about making writing sound less AI-like, which makes you wonder whether it wouldn’t be easier to simply edit more authentically in the first place. These words are popular because they’re versatile enough to fit almost any context while still sounding important. That’s why both AI and humans rely on them so often: they create the appearance of sophistication without much effort to explain or clarify.

Delve is meant to convey depth and exploration, the act of digging deeper into a subject or uncovering something meaningful. But it’s been used so generically that it’s lost its impact. You’ll find phrases like “delve into the details” or “delve deeper into insights” everywhere, especially in SaaS marketing. At first, it sounds thoughtful, but with constant repetition, it feels more like a shoe skimming gravel than a drill breaking through magma. It’s too easy to say “delve into powerful analytics” or “delve into smarter workflows,” but these phrases don’t explain anything. They trend, but they don’t clarify.

Then there’s redefine, a word designed to suggest grand transformation. But when everyone claims to “redefine collaboration” or “redefine the future of work,” it quickly loses its oomph. It creates a fleeting sense of importance, but when the moment passes, the reader is left asking: What’s actually being redefined? Why does it matter? It’s a promise without proof.

And leverage? Borrowed from physics and transplanted into marketing, it’s a hard word to avoid. “Leverage your data,” “Leverage powerful tools,” “Leverage this opportunity.” But when words like this are overused, they start to feel forced. They try too hard to sound impressive.

But the problem isn’t just the overuse of words like these – it’s what they reveal about our reliance on meaningless language. AI doesn’t exist in a vacuum. It reflects what it’s trained on: us. Years of mediocre copy and convoluted language brought us here. Blaming the tech bros and saying, “Well, they did this,” feels a bit childish. We’re all part of this, turning a handful of buzzwords into a monotonous vernacular and creating a cycle where garbage trains more garbage, slop builds on slop.

So, Is AI Ruining Words?

Yes, and no. It’s picking up hyperbole trends and helping bad habits stick. But it’s not “ruining” them. It’s reflecting the language habits we’ve already built. 

In other fields like journalism, essays, novels, or screenplays, many of these words can still be beautiful, descriptive, and meaningful. When used thoughtfully, they add depth and value. In B2B marketing, though, these buzzwords often feel like greige or duck egg, vague and undefined, when the audience is looking for something clear and universally understood.

The focus in B2B should be on clarity. What does your product, service, or solution actually do? Be honest, and don’t let AI convince you that you need a flashier word for “use” (like leverage). Don’t let the robots nudge you toward hyperbole that might feel satisfying to write but leaves your audience confused. If you don’t want AI to “ruin” your words, start by recognizing the patterns and pushing back with your own thinking. Take the time to think critically about your language and make intentional choices. After all, that’s one of the few things AI hasn’t quite mastered yet.
