There is no way to know with 100% certainty, but there are tells.
First, and most importantly: more than any single stylistic element, AI text is repetitive. One use of "It's not X, it's Y" isn't terribly suspicious, but several in a row is. If there's one thing you take away from this post, let it be this.
Em dashes are another example. They can be used in place of just about any punctuation mark, and a large number of them is another red flag.
Another tell I've noticed is an overuse of the rule of three, especially in a very particular way. Again, the rule of three is very common, but an overabundance is a solid tell. AI text also tends to follow a strict format: short, short, long. Here's an example I stripped from a synopsis: "...is soft-spoken, careful, and used to hiding her sparkly purple eyes and lavender ears behind silence." There are three beats, and the third is noticeably longer. Technically speaking, I can't prove this was AI generated, but it really feels that way.
Also in the above example is another hallmark of AI text: strange metaphors. AI doesn't "know" what similes, metaphors, or analogies are, but it "knows" what they look like. These elements make sense if you breeze past them, but begin to fall apart if you look more closely.
Some AI text has a very choppy writing style. This manifests as an abundance of periods, sentences that start with "and" or "but," and sometimes even single-word sentences. This style isn't out of place in dialogue, but it usually is in narration.
Lastly, AI generated text has a certain vibe to it that is hard to give concrete examples of, but is noticeable, even if not consciously. This usually manifests as important details appearing or disappearing suddenly, or some plot elements getting an unbalanced amount of focus. It's almost impossible to give large-scale examples, but it's something you can feel out.
Funnily enough, writing errors are a great indicator that something was written by a human. AI models won't misspell words, or use the wrong homophone, or accidentally add an extra space, or drop a word entirely. These mistakes can only happen by human hands.
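A few of the textual tells above are mechanical enough that you could sketch them as a quick heuristic scan. Here's a toy Python illustration (the patterns, thresholds, and function name are my own invention, and none of these counts proves anything on its own):

```python
import re

def tally_tells(text: str) -> dict:
    """Count a few rough textual tells. An illustration, not a detector."""
    return {
        # "It's not X, it's Y" style contrast framing
        "not_x_but_y": len(re.findall(
            r"\bnot\b[^.!?]{0,60}?\bit's\b", text, re.IGNORECASE)),
        # Em dashes standing in for other punctuation
        "em_dashes": text.count("\u2014"),
        # Candidate rule-of-three lists: "a, b, and c"
        "rule_of_three": len(re.findall(r"\w+, \w+, and \w+", text)),
    }

sample = ("She is soft-spoken, careful, and used to hiding. "
          "It's not magic, it's pattern-matching \u2014 and that's the point.")
print(tally_tells(sample))
# {'not_x_but_y': 1, 'em_dashes': 1, 'rule_of_three': 1}
```

One hit in each bucket, like the sample produces, means nothing; a synopsis that lights up every bucket several times over is the kind of thing worth a second look.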
There are several meta clues you can use too.
A fast upload schedule can be suspicious. Some people like to build a backlog and then post all at once, but often, if someone is posting daily, or more than once a day, it can be an indicator to look more closely.
Profile activity can be another clue. How new is their account? Does this person comment often? Do they have a reasonable-looking reading list? Again, none of these are damning. Having a new account, or being a lurker, or not reading very many fictions are far from reliable indicators, but in conjunction with other details they can increase your certainty.
An absence of AI images is a strong indicator that the work is made by a human. Some authors use AI images as placeholders, or simply because they can't afford art, but if someone is using AI to generate their story, they're also the type of person to use AI to generate their images. Writers who commission art, or forgo it entirely out of a refusal to use AI, almost certainly aren't generating their stories.
This essay's been bouncing around in my head for a while, glad to finally have an excuse to let it out. There are certainly some things I missed, but this will get you most of the way there.