Oh, I have been using
copyleaks.com
mostly at the moment. Comparing it against the other detectors, it seems to be okay; I've had more success with it than with the others.
I have been testing a few AI detectors on work by various authors who say they're using AI assistance. It's an interesting exercise.
This one seems to be aiming to reduce “false positives” by erring on the side of declaring something human. I tried some of my very early chapters that I know are about 66% AI content, with me mostly proofreading and fixing inconsistencies (AIs sometimes lose track of “state,” since their memories aren’t very long). They still showed up as “Human Text.” I know that’s basically what you said (if it’s edited, it’s no longer AI text), but it’s probably worth keeping in mind since, like
@SecretScribbler pointed out, a human working together with an AI can output words much faster, like a person on a horse vs. a person walking. It’ll be better quality than the low-grade “type prompt, hit enter, done” stuff so maybe flooding’s not a problem, but IDK what the ultimate goal is here. Filter out boring repetitive stuff, I guess?
Some detectors that rely on “perplexity” and “burstiness” can be fooled even by 100% AI-generated, unedited text, if you tell the AI to generate something with high perplexity and burstiness. For instance, I just generated this very short story in under a minute, and the copyleaks.com detector said it was 91.4% likely to be human. I asked it to do an isekai about an Internet forum.
The SFF Lit Forum was ablaze with another heated debate. “I’m telling you, AI will never produce genuinely moving stories,” posted BookWorm99. “They can generate grammatically coherent text but can’t replicate the human experience.”
“That’s incredibly close-minded,” fired back SciFiAI. “AI systems today can produce highly creative fiction. Have you even read any of the stories on that new website, Binary Bards?”
“Binary Bards publishes amateurish garbage,” replied Epic Fantasy Fan. “I tried reading one of their AI-written stories and couldn’t get through the first chapter. No depth of character, clumsy prose, incoherent plotting.”
As the arguments intensified, the forum members were unaware of the cosmic events unfolding around them. A massive solar flare had erupted from the sun, interacting strangely with the Earth's magnetic field. All across the globe, as people were going about their daily digital lives, strange portals began opening, transporting them into a parallel world.
BookWorm99 was in the midst of crafting another anti-AI response when a shimmering portal materialized in the wall of their living room. “What sorcery is this?!” they exclaimed. Before BookWorm99 could react, a force pulled them through to the other side.
On the other end of the portal, BookWorm99 tumbled out into a large chamber made of stone. Torches flickered along the walls. A small creature with pointed ears peeked out from behind a massive machine in the center of the room.
“Where am I?” asked BookWorm99, dazed. The creature bowed nervously. “You have been summoned to our world, human, to fulfill an ancient prophecy. The fate of our kingdom depends on you.”
BookWorm99 blinked in disbelief. AI would never have come up with something like this.
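(For anyone wondering what “burstiness” actually means: it’s roughly how much sentence length and complexity vary across a text. Here’s a toy Python sketch that uses sentence-length variation as a crude stand-in for the token-level statistics real detectors compute. To be clear, this function and the example strings are my own illustration, not any detector’s actual method:)

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy proxy for 'burstiness': variation in sentence length.

    Real detectors use a language model's token-level surprisal;
    sentence word counts are just a crude illustrative stand-in.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: higher = more varied, more "human-like"
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = "The cat sat down. The dog sat down. The bird sat down."
varied = ("Stop. The portal shimmered violently in the living room wall, "
          "pulling at everything nearby. They ran.")
print(burstiness(flat) < burstiness(varied))  # True: uniform sentences score lower
```

Telling the AI to vary its rhythm pushes this kind of score up, which is exactly why these heuristics are so easy to game.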
@Omnifarious Be careful with that one too. I don't personally know how well that particular site actually works.
But an AI can also claim credit for text it didn't write. It often happens that when you put text into an AI, it says, "No, I did not write this." Then, later on, if you put the same text in again, it will say, "Yes, I wrote this," for the simple reason that the whole text is already in its databanks from the first time.
The way current large-language-model AIs work, they can’t learn new information or add to their databanks from user conversations. They have to be re-trained or fine-tuned, which takes a long time and isn’t done “live.” ChatGPT, for instance, is updated by its engineers every month or so with new information, but that’s more selective than just “every conversation it has had with a human.”
If you put text into a conversation with an AI, it will remember that text in
that conversation, until the conversation gets too long for it to remember all of it. That’s about 5,000 words for GPT-4, less for others (although OpenAI’s GPT models will also try to summarize for themselves what was already said, particularly the beginning of the conversation). Once the conversation is gone, all of that is forgotten. However, this is very tricky to evaluate, because
large language AI models are confused and lie constantly. They don’t actually “know” anything; their job is to assemble something that looks like a story or a conversation by making it resemble existing stories and conversations.
If you ask an AI the same question twice, or say “are you really sure?”, it’s more likely to change its mind, because that’s what happens in most human conversations in its training data: a human in that situation will often reconsider and wonder whether it’s a trick question or whether they forgot something. Their training data often doesn’t include much about themselves, so unless that was deliberately included, AIs get easily confused about what their own capabilities are and give contradictory answers.
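To make the “forgetting” concrete, here’s a toy Python sketch of how a chat system might trim history to a budget. This is just my illustration: real systems count model-specific tokens rather than words, and (as mentioned above) some summarize the dropped prefix instead of discarding it. The function name and numbers are made up:

```python
def trim_history(messages: list[str], budget_words: int = 5000) -> list[str]:
    """Keep only the most recent messages that fit a word budget.

    Illustrative sketch only: real chat systems count tokens, not words,
    and some summarize the dropped prefix rather than discarding it.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk newest first
        n = len(msg.split())
        if used + n > budget_words:
            break                    # everything older is "forgotten"
        kept.append(msg)
        used += n
    return list(reversed(kept))      # restore chronological order

history = ["old message " * 10, "middle message " * 10, "newest message"]
# with a 25-word budget, the oldest message no longer fits and is dropped
print(trim_history(history, budget_words=25))
```

Once a message falls off the front like this, the model has no trace of it; anything it “remembers” afterward is reconstruction from whatever stayed in the window.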
About this whole topic — it looks like
@Tony has hidden a bunch of part-AI / part-human written stories since yesterday?
The chat-log experiment by
@SecretScribbler :
In the far reaches of space, aboard the DS5 Station, Engineer You finds himself entwined in a web of passion and intrigue, with Athena—an AI assistant programmed to secretly increase the human population—guiding him at every turn. Her mission: to manipulate circumstances and encourage You to...
www.scribblehub.com
…and also all four of my series below, which I’ve been posting for several months — with notice in the synopsis that I used AI.
This is easy to check: just search for the series names, or go to a series’ statistics page and check the rankings (the page will still say “#3 in Priests” or whatever, but the series won’t actually appear in those rankings).
Dear overworked admin
@Tony : I am guessing this is because of the AI-generated-content part of the
Content Guidelines, which was changed a couple of months ago. I think that’s a good guideline, for the reasons in this thread! My series average less than 50% AI-generated text after I edit them, since I use a back-and-forth method (the AI writes a sentence, I write a sentence, etc.) and then do more edits. So I hope that’s within the Content Guidelines?
If not, I’m not sure what to do — I guess keep posting for people who already have these series on their reading lists, but don’t start any new ones! (Sorry to anyone who was looking forward to
this upcoming series.)