Is This Copy Human? Telltale Signs of AI Writing

[Image: Lauren Perna with a sign that says, "ChatGPT Can't Replace This"]

In 2023, generative AI had a breakthrough moment. That's when everyone began to wonder, "Will AI take our jobs?"

Heck, we even pondered this very question ourselves, since writers are prime targets for AI's supposed path of destruction. Ultimately, we felt that our jobs were safe "because there's always a place for the human voice in writing." (Still true, of course.)

In 2024, gen AI's moment exploded into a cultural phenomenon. That's when everyone began giving ChatGPT prompts with the same exuberance as my 4-year-old nephew giving Alexa song requests. Who among us didn't fall victim to ChatGPT's witty overview of their digital presence or its motivational rants from a drunk best friend?

Even from early 2024 to late 2024, there was an uptick in gen AI usage. McKinsey's March 2025 "The State of AI" report found that "71 percent of respondents say their organizations regularly use gen AI in at least one business function, up from 65 percent in early 2024." Take a wild guess what that one business function was.

It's not even really a secret, though, because today's marketer is proudly using AI. SurveyMonkey found that 43% of marketers think AI is important to their social media strategy, while CoSchedule found that 85% of surveyed marketers are leveraging AI writing or content creation tools to enhance their marketing.

Today, as we are just about halfway through 2025, the moment-turned-marvel is still on the up-and-up with no sign of plateauing. In fact, the CoSchedule survey also found that 85.84% of marketing professionals plan to increase their use of AI technologies over the next 2-3 years, with nearly half (46.80%) anticipating a significant escalation.  

While some of us have adapted more begrudgingly than others, AI has undoubtedly shifted how we work. It's still very much a "get on board or get left behind" mentality. Generative AI tools allow marketers to be more productive, efficient, and impactful. They have become a normal part of the content marketing workflow. When marketers can save resources, they can focus on… drumroll, please… more marketing, which is ultimately good for business.

The other good news is that the fear of machines taking over our jobs as writers isn't as prevalent. In fact, I'd argue that the need to infuse AI-generated content with the human touch is more important than ever. But it goes beyond making sure the content doesn't sound like a robot: our job as digital marketers is now to make sure the content is original, factual, unbiased, and non-robotic.

The Problem of AI in Content Writing

Herein lies the problem with using AI in content writing: the technology is evolving at a rapid pace. We are still learning how to weave AI into our everyday lives, especially as marketers and writers. What we said about it in 2023 is still relevant, but it only scratches the surface compared to what we know today.

And what we know today is that the use of AI in writing has both significant benefits and drawbacks. For every productivity gain it brings upstream, there is a necessary fact-checking measure waiting downstream.

Whether you use AI in a soup-to-nuts fashion (please don't!) or simply in the ideation phase, you need to make sure what you're putting out there is actually true. Because AI is trained on existing data, there's a chance that the data can be misinterpreted and misrepresented. The responsibility for fact-checking lies with the content creator. Otherwise, we are only feeding the algorithms more inaccurate information.

This is, of course, a serious concern in any industry, but especially in our world of life sciences. In the life sciences and healthcare ecosystem, monitoring AI usage is especially critical. The internet already has a misinformation problem, and it's only exacerbated when AI is fed more erroneous data.

In an article called "Ethical Use of Artificial Intelligence for Scientific Writing: Current Trends," author Ellen Chetwynd, PhD, MPH, BSN, IBCLC, touches upon these concerns while providing the following guidance:

Given the risk of errors or biases when using AI, it is essential that all the work that is generated using these LLMs [large language models] is checked and validated by human researchers.

The Ethics of AI Writing and Large Language Models (LLMs)

When we talk about concerns with AI writing, we can't just talk about erroneous data or misinformation. We also need to address the question on everyone's mind: Is AI writing ethically acceptable?

The first ethical concern of AI writing is transparency, or the lack thereof. Authors should disclose that they've used AI in their writing; otherwise, they are misrepresenting their role in the content. Yet some companies and writers have no issue claiming full authorship. These people are capitalizing on the gray area of using gen AI.

We are taught not to plagiarize as early as elementary school. Somehow, that learning has gone out the window because AI doesn't feel like your traditional "copy and paste" plagiarism job. This raises the question: can someone claim to be the author of an article written with the heavy hand of gen AI? The only acceptable solution is to acknowledge the support and use of AI in your article's resources.

Bias is another ethical concern in AI writing. ChatGPT and similar models are what we call LLMs, or large language models: advanced artificial intelligence models that learn human language by training on massive datasets. Those massive datasets are created by humans, who are often naturally biased (whether or not they mean to be is a different story).

The result is that the language created by generative AI could be racist, sexist, and non-inclusive. For example, the concept of inclusivity in language is still fairly recent, and older texts used in the AI learning process likely contain non-inclusive language.

However, the problem of bias goes deeper than language choice. The ideas formed by gen AI could be built upon biased concepts. That's not a topic we can do justice to in this article, but it's important to note that bias is so embedded in our culture that we aren't even aware of it. Therefore, we are likely also unaware of gen AI's biased tendencies, which is all the more reason to be transparent about your AI usage.

Companies are recognizing the importance of setting parameters and guidelines around AI usage. More than 38% of marketers report that their organization has AI guidelines in place. Ideally, we’ll get to a point where an AI policy is embedded into all companies’ SOPs. 

Top Signs of AI-Generated Text

The ethical concerns of gen AI are what we might call “hidden” issues—they aren’t apparent on the surface, but they are still there. On the other hand, we have the “telltale” signs of AI-generated text. 

At this point, we know that unnatural phrasing, too many keywords, and unnecessary use of the em-dash are clear signs of AI in writing. But, as we begin to see gen AI used more in the real world, we’ve identified a few more common indicators of AI authorship. 

Each of these alone may not be a sign of AI, but together they typically indicate the use of AI in some capacity.

Generic Writing and Clichés

Here's the truth: the vast majority of human-written content is generic and clichéd. Of course, there is some incredible content writing out there, but so much of it is just mediocre. Since AI learns from what's already out there, AI-generated text is bound to sound more generic than if you were to hire, say, a Boston-based writing firm.

On the other hand, if a piece sounds a little too perfect, that could be another sign. Part of being a reader in today’s digital world is the acceptance that humans make mistakes.

Lack of Voice and Personality

One of the best compliments we can get from a writing client is being told an article sounds exactly like they wrote it themselves. Yes, that's the point of hiring exemplary writers: we can understand your voice and learn how to write like you.

Oftentimes, gen AI content lacks voice and personality. Either that or it includes multiple voices throughout the document. Even if the information is (mostly) correct, without your consistent brand voice, the content will just fall flat. 

Absence of Personal Experience or Insight

On that same note, AI content rarely includes personal stories or insights. Even if the content is well-written, without meaningful quotes or personal anecdotes, the content feels inauthentic. If authenticity is the name of the game in today’s marketing, why would you publish an article that lacks a personal touch? 

Not to mention, content without a personal touch risks becoming unmemorable, a content writer's biggest fear. Content is memorable because it stirs emotion and incites a reaction. When content fails to do either of those things, it's just words on a screen.

Repetitive Language and Patterns

One of AI's Achilles' heels is the inability to self-edit. When humans write, we typically recognize when we've been repetitive or redundant. This could mean using the same word over and over again, saying the same thing several ways, or organizing our thoughts in the same pattern (e.g., starting each paragraph the same way). Think about any time you've read your own work in your head: don't you immediately notice when you've used the same word or said something twice?

Because AI isn't actually human (it only mimics human language), it doesn't have the same ability to recognize when it's been repetitive. So if you see an article overusing a certain word or recycling the same sentence, it likely hasn't been edited by a human.

Formulaic Structures

ChatGPT is a one-trick pony, using the same structure time and again. These days, it's easy to spot content created by ChatGPT because it features one or more of the following formulaic patterns:

  • Emojis before headlines and blue diamonds for bullet points

  • Bulleted structure (two to three words of bolded text followed by a colon and a generic explanation)

  • Strings of adjectives or other qualifiers stacked before nouns

  • Sentences built on correlative conjunctions (e.g., "not only this but also that")

  • And for good measure, it’s worth mentioning the saddest one of all: overuse of our beloved em-dash (brb, crying about this)

Factual Inaccuracies and Hallucinations

As we've discussed above, one of the biggest fears about AI-generated text is the dissemination of inaccurate information. Sometimes the facts and claims aren't wrong per se, just outdated. For example, if data on a certain topic is updated year over year, AI might pull data from an older report. Other times, the inaccuracies are a result of AI misinterpreting the data.

And sometimes, they're just complete "hallucinations." According to IBM, "AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

This is quite scary, especially since humans can't always detect when it happens. And if they do, it's often after the work has been completed, or at the very least well underway.

Appropriate Ways to Use AI

You might think that at this point we're going to wrap up by saying you should forget AI and hire us as writers (that's in the next section!). But the truth is, we recognize the value in embracing AI as a tool and resource, not a replacement for good writing. We use it ourselves to help ideate, strategize, and edit. For us, the best use of ChatGPT is finding resources and summarizing articles. This allows us to save time on the never-ending rabbit holes that writers can fall into when researching an article.

We also use it a lot when we've created a masterpiece and want a pat on the back. Just kidding. Well, sort of. It's helpful to put our writing prompt into ChatGPT along with our finished work and ask for feedback. Of course, it's great to read, "This is a strong piece, Lauren." But when the feedback is spot on, it's helpful to see where we could strengthen the work. The key here is recognizing that the feedback can be misaligned, so going with your gut is critical.

Furthermore, we will always champion AI for time-saving efforts in personalization. For example, LinkedIn’s built-in AI tools for content ideas, commenting, messaging, and ads are effective ways for companies to harness LinkedIn without creating (too much) more work.

Content Writing the Old-Fashioned Way

One case for content writing the "old-fashioned way" is the prevalence of AI detectors. In an effort to combat AI-generated content, Google and the like have implemented tools to detect writing that bears the telltale signs of AI. Some have claimed that AI has also led to the "death of SEO," but this is a premature reaction. SEO remains a foundational strategy for any business seeking to build its online presence, connect with its target audience, and drive meaningful engagement.

Besides the need to abide by the rules of almighty Google, the other case for using a human writer is that AI-generated content fails at the most basic purpose of content marketing. Content marketing is designed to generate brand recognition by connecting the reader to the author. If you don't connect with a piece of content, you certainly aren't going to remember the author or brand behind it, never mind consider buying from them.

If you want indelible content that connects with your audience and keeps them coming back, then reach out to our team today. 
