Never in human history has content been easier to produce. Since ChatGPT launched in November 2022, people have used generative AI in countless ways to reclaim their time, beat writer's block, speed up research, and do so much more.
On the other side, it’s also easier than ever for creators to make generic media. People were making uninspired content long before the advent of AI, but this era creates additional challenges for content consumers because there’s the potential for far more lackluster work to hit the web. Reading poor-quality, repetitive work is never a great experience, but things get particularly annoying when it’s clear that AI wrote the content and a human didn’t bother to do even some light editing.
Balancing quality with the speed that AI allows is paramount for white-collar content producers, whether you’re a marketer, copywriter, journalist, influencer, or a business that hires any of them. You’ll lose your audience if you consistently churn out dull and lifeless content.
Given all of this, two ideas are going to be vital as we collectively move forward in this era of generative AI:
Anyone producing content with AI’s help needs to know how to edit their work so it’s more valuable than what an AI tool would produce on its own.
On the other hand, we as content consumers need to know how to spot content rooted in uninspired AI outputs so we aren’t giving time and money to unworthy content producers.
Let’s see what these ideas look like in practice.
Editing Tips to Keep Your AI Content Human (and Valuable)
Look past AI versus human writing for a moment to think about what makes any piece of content engaging. Readability, relatability, specificity, and factuality might be a few of the major elements that immediately come to mind.
The challenge with AI-driven content creation is that outputs sometimes (or even regularly) lack those hallmarks in big and small ways depending on the prompt, the goal of the piece, the tool used, etc.
And the less engaging a piece of content is — no matter how it was produced — the less effective it is. The good news is there’s a lot that AI-assisted content producers can do to ensure their final outputs stand out and aren’t written off for sounding robotic or too generic.
The golden rule: don’t blindly use an AI output without reviewing it and putting a personalized, human spin on it first. If you forget everything else in this article, don’t forget that.
9 tips for AI content editing
1. Strategize to maximize creativity
What do you want your audience to learn or do after reading your content? How can you present that information in the most unique and valuable way possible? AI is supposed to give us additional time to strategize and think creatively about our content. So, do that! Your final output will be better as a result, I promise.
2. Be an authority
Part of providing value means filling your content with real insight and information — not filler or repetitive ideas that AI can sometimes produce because it lacks depth. Back up your ideas with research and statistics. Get expert opinions if you lack the inherent knowledge. Do the work to show your audience that you know what you’re talking about (or that you can present insights from other experts) to build trust.
3. Consider your audience
Who are you writing for? What are their interests and needs? Build a persona around your ideal reader and write each piece with that specific individual in mind. Feed an AI tool information about your ideal reader so it delivers more accurate messaging.
4. Make the content more human
Share your thoughts, feelings, and experiences in your writing when you can. Don't just settle for the output given by the AI. Add your tone, style, and humor to AI outputs and to your piece at a high level. Use the context of, well, being human to offer examples and anecdotes that an AI could never.
If you need an example of something blatantly non-human, look at how Microsoft’s AI bot wrote that the Ottawa Food Bank was a must-see tourist attraction in the city. A human writer, or an editor reviewing the piece afterward, wouldn’t be so careless (I would hope).
5. Vary your language
Make sure there’s a wide variety of words and phrases in your writing. Remove overly repetitive elements. AI models often generate a lot of unnecessary words that don’t add value to the idea in question. So cut those out.
6. Elevate your sentence structure
AI-generated text often uses a very repetitive sentence structure that makes the text sound monotonous and boring. Vary your sentence structure to make your writing more interesting and engaging.
7. Maintain tone and voice consistency
Once you lock in your tone of voice and writing style, make sure they’re consistent throughout the entire piece.
8. Don’t worry about being perfect
There’s no such thing as a perfect piece of art or media. Sometimes, the little imperfections actually make art special. So don’t try to make your work sound perfect just because you have an AI assistant; your content will seem *wait for it* robotic and less unique.
9. Edit carefully
Don’t skim. Run every output through a fine-toothed editorial comb to weed out awkward word choices, sentences, or anything else that doesn't seem quite right. Optimize and ensure the content flows smoothly and naturally. Always double-check the facts an AI tool provides to ensure that everything is accurate and up-to-date.
By following these tips, generative AI users can create content that’s informative and engaging for their audience because their work will still have a high degree of the irreplaceable human touch.
How Sharp Is Your Eye for AI?
Let’s say you’re given an article and told that a human wrote it with the help of a generative AI tool. How well do you think you could point out the differences between AI- and human-generated copy? In December 2022, a group of researchers at the University of Pennsylvania School of Engineering and Applied Science published a peer-reviewed study showing that people can learn to spot the differences between the two.
“We’ve shown that people can train themselves to recognize machine-generated texts,” said Chris Callison-Burch, Associate Professor in the Department of Computer and Information Science and lead researcher on the project. “People start with a certain set of assumptions about what sort of errors a machine would make, but these assumptions aren’t necessarily correct. Over time, given enough examples and explicit instruction, we can learn to pick up on the types of errors that machines are currently making.”
The study tweaked a unique web-based game, Real or Fake Text?, to offer a realistic representation of how people use AI to generate text. The researchers’ game asked participants to identify specific points in a text where human content changed to AI-generated text. The results showed that participants could accurately detect AI-created text far better than random chance, implying that such text is somewhat detectable. You can play the game to test your skills, and you should!
“We prove that machines make distinctive types of errors — common-sense errors, relevance errors, reasoning errors and logical errors, for example — that we can learn how to spot,” said Liam Dugan, a Ph.D. student in Computer and Information Science who assisted with the study.
How to Spot AI-Created Content
Dugan mentioned some key characteristics of AI-generated works, but there are also many more. Detecting non-human text isn’t as easy as plugging copy into an AI content detection tool. In fact, detection tools have repeatedly been proven not to work very well (even OpenAI, ChatGPT’s creator, made one and discontinued it because it was inaccurate).
“It’s possible that in the future, A.I. companies may be able to label their models’ outputs to make them easier to spot — a practice known as ‘watermarking’ — or that better A.I. detection tools may emerge,” wrote New York Times contributor Kevin Roose. “But for now, most A.I. text should be considered undetectable.”
So, we have to rely on our wits to sniff out AI content.
To strengthen your editorial nose, here’s a big list of signs that the copy you’re reading was likely generated by AI.
1. Repetition
AI-generated content tends to use repetitive language, with certain phrases or ideas repeated more frequently than in human writing. Its copy often lacks originality and becomes monotonous after a short while.
2. Lack of context or personal experience
AI doesn’t have personal experiences or emotions, and it doesn’t truly understand the human experience. It also lacks personal anecdotes and context-specific insight, so AI content can often feel impersonal, detached, or nonsensical given the intent of the work (like in the Ottawa Food Bank travel example).
3. Odd phrasing and stilted language
AI can sometimes generate sentences that seem a bit weird or clunky. This can happen for several reasons, like awkward sentence structure, a lack of contractions, and using uncommon words or phrases. Some AI models overuse technical jargon, empty buzzwords, and complex vocabulary inappropriately. Or, they might write in a very formal or academic style, even when the subject matter is informal. All of this can make the content seem less natural.
4. Factual errors and outdated information
The facts provided by content generation tools are generally correct, but they can also be wrong, irrelevant, or outdated (as with GPT-3.5, whose knowledge cuts off around September 2021). Make sure to fact-check when you can!
5. Lack of creativity or originality
AI models are trained on data already in existence, so they can't conjure original ideas or speak with a unique voice. As a result, AI-generated content can often be predictable and uninspired. Sometimes AI responses can be very long and detailed without saying anything new or meaningful.
6. Inconsistent tone and style
Since AI models are trained on a variety of texts, they’re not always able to maintain a consistent narrative throughout a piece. This can result in abrupt changes in tone or writing style that sound jarring and goofy.
7. Lack of authority
AI tools can seem incredibly intelligent, but they still lack the authority that content written by an expert might have. Generation tools lack depth and can’t go beyond basic facts to truly analyze the nuances of a topic and develop unique insight.
8. Unusual timing
If content is generated and posted at unusual hours or inhumanly consistent intervals, it might be a clue that it's AI-generated.
9. Visual clues (for images)
With images, AI-generated content may have strange objects, patterns, and inconsistencies that aren’t typical of human-created visuals. By now, you’ve likely seen or even prompted AI images of “people” with horror-movie-like hands and teeth. AI also tends to render some parts of an image blurry while others are sharp, and backgrounds can be oddly textured with random brush strokes throughout. There may even be washed-out artist watermarks or signatures from the works used to train the tool.
10. Consider the source
If you are unsure whether a piece of content was generated by AI, consider the source. Is the content coming from a reputable website, organization, and/or author? Or is it coming from a source that is unknown or untrustworthy?
11. Find a typo
AI doesn’t make the same simple mistakes humans do with spelling, grammar, formatting, and the like. An improperly formatted apostrophe, a misspelled common word, or another typo you see in content can sometimes be a good sign that a human was behind the work.
While these indicators can be helpful, they aren’t foolproof. They can differ depending on the tool you use and the content in question.
Skills-Based Judgment Is the Way
No matter how much or little an AI tool helps a creator, that content should always — as in 100% of the time — have some human influence beyond just prompting. AI outputs, whether they be a single sentence or a full blog post, should always get injected with human insight and edits before they’re consumed in any way, shape, or form. Period.
We have to be discerning readers as more creators and businesses lean on generative AI. These tools are known to have biases and inaccuracies and, as we’re starting to see, we can’t always rely on content producers to weed them out. It’s up to us to be skilled enough to spot not only bad AI content but generic, substandard, and misleading work overall.
“People are anxious about AI for valid reasons,” said Callison-Burch. “Our study gives points of evidence to allay these anxieties…I think it’s an important enough issue that we’ll have to teach everyone how to do it.”
On the other hand, content producers using AI need to employ best practices and even better judgment to ensure their content is actually worth trusting and consuming. Being a human-in-the-loop as a creator also means releasing work that is credible and made carefully. Creators who don’t take steps like the ones in this article to keep their AI-assisted work human will see engagement with their audience erode over the long term.
Neither readers nor creators should trust and use AI outputs wholesale, but a little discernment and fact-checking on both sides goes a long way. We’ll need that discernment even more as AI tools evolve to sound more human. Practicing those skills on both the consumer and producer side can allow everyone to tap into the genuine benefits of this powerful technology.
“There are exciting positive directions that you can push this technology in,” said Dugan. “People are fixated on the worrisome examples, like plagiarism and fake news, but we know now that we can be training ourselves to be better readers and writers.”