AI is the talk of the town in the SEO world of 2024. As you can imagine, the team here at Invalley has received many questions about it over the last few months. While Jasper.ai and other similar AI writing tools existed long before ChatGPT hit the scene, 2023 was the year when both clients and publishers started asking us about the topic. They wanted to know our opinion and our policies, and to make sure our content is written by humans.
Well, people have questions, and we here at Invalley want to provide some clear answers. This blog post is meant to serve as an FAQ and a peek into our thought process about AI so far. We want to talk about AI writing itself, how it can be a problem, and what our policies are regarding AI writing and AI detection.
If you don’t care about all the details, here are the main takeaways from this blog post:
- We are not using AI to produce content.
- AI content will not hurt your rankings. Only bad content will.
- AI detection tools have some reliability issues.
Let's go over some frequently asked questions.
1 - Will using AI hurt my rankings?
At the moment, there is no indication that Google or any other search engine is penalizing sites that use AI content. And there's plenty of anecdotal evidence to show that 100% AI blogs can rank and get hundreds of thousands of visitors per year. Look at this AIcontentfy.com case study as an example.
What remains true is what has always been true: content quality matters. If you start publishing incoherent and spammy content, you can expect your rankings to take a hit. That's true regardless of whether the content was sourced from Fiverr or ChatGPT.
2 - Is Invalley using AI to write articles?
No, we are not. Primarily because it is a risk we don't need to take. Invalley's business model has never been to pump out hundreds of articles a week. And at the volumes we work with, there is no reason to risk upsetting clients and publishers by changing our process.
We've built a strong writing team over the years. We intend to keep using that team.
3 - But your content failed this AI detection test. Why is that?
What you're seeing is a false positive. This has been a widespread problem with the use of AI detectors. And it's one of the reasons why many institutions that tried adopting them in early 2023 have since stopped using the tools.
OpenAI, the company behind ChatGPT, also took a crack at launching an AI detection tool, only to give up on the idea months later due to low accuracy.
This is the part where I'd love to include a link to a study showing the real accuracy of AI detection tools. But the reality is that both AI writing and AI detection are evolving way too fast. Even a study published 3 months ago already lags well behind the current tools.
The novelty of these tools also limits the kinds of studies being run. Most of them focus on AI detection in academic settings, whereas I've found that text optimized for SEO and text under 1000 words are both much more prone to false positives than essays or longer blog posts. Originality.ai's support page has a section on how "formulaic content" may trigger false positives more often.
4 - What's your approach to dealing with false positives?
At first, we tried running every article through an AI detector to make sure none of them were being flagged as AI-written. The problem with that idea is that AI detectors often don't agree with each other.
What passes one test may fail another. As a result, we had cases where our AI detector cleared an article, but the tool used by the publisher marked the same content as AI-written.
So rather than trying to choose one AI detection tool as our main one, our current policy is to only use these tools when requested. If a publisher or client says they want their content to pass a specific test, we'll make sure it does. And we'll edit or rewrite any content that gets a false positive on that test.
This gets us out of having to argue over which tool is better or more accurate. Whatever AI detection tool you trust, we'll use the same one to handle your content.
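For the more technically minded, here is a rough sketch of what that per-request check can look like. The detector endpoint, API key, response field, and 50% threshold below are placeholders made up for illustration; every detection tool exposes its own API and scoring scale, so treat this as a picture of the workflow rather than any specific integration.

```python
import requests

# Hypothetical sketch: check each draft against whichever detector the
# client or publisher has requested. The endpoint, auth header, response
# field, and threshold below are placeholders, not any vendor's real API.
DETECTOR_URL = "https://detector.example.com/v1/score"
API_KEY = "your-api-key-here"
REWRITE_THRESHOLD = 0.5  # rewrite anything the tool scores above 50% "likely AI"


def flagged_as_ai(article_text: str) -> bool:
    """Return True if the chosen detector flags this draft as AI-written."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": article_text},
        timeout=30,
    )
    response.raise_for_status()
    ai_score = response.json()["ai_score"]  # assumed to be a 0.0-1.0 score
    return ai_score >= REWRITE_THRESHOLD


drafts = {
    "client-article-1": "Full text of the first draft goes here...",
    "client-article-2": "Full text of the second draft goes here...",
}

for name, text in drafts.items():
    if flagged_as_ai(text):
        print(f"{name}: flagged by the requested detector, send back for editing")
    else:
        print(f"{name}: passes the requested test")
```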
We'll be monitoring the field and adjusting this policy as needed. If an AI detection tool emerges as an accurate industry standard, Invalley will adopt it.
5 - Will false positives hurt my rankings?
There is no indication that search engines are using AI detection to determine rankings. So whether or not an article passes AI detection is irrelevant; both human and AI content are being judged on quality alone.
That said, it is fair to worry about the future. You may be thinking "If search engines decide to start punishing AI content in a few months, wouldn't it be better if all my articles could pass AI detection?"
That thinking has some logic to it. The problem is that we don't know what AI detection methods will be used in this hypothetical future.
There is little reason to believe that the tech giants behind the top 3 search engines will use a commercially available AI detection tool. And passing or failing a current test doesn't tell you whether that content will pass or fail a future test developed by Google or Microsoft.
6 - What AI detection tools — if any — do you recommend?
I recommend not bothering with the whole concept. As I said, if the content is well-researched and well-written, there is little need to care who wrote it.
That said, of the tools I tested, I liked Originality.ai the most. It wins points from me on transparency: rather than boldly claiming to be certain that something was written by AI, the tool scores each individual section of the text based on how likely it believes that section is to be AI-written.
They're also transparent about what may trigger a false positive, which is much appreciated.