> There's no Smut in your story. -5 stars in SmutHub.

Okay, okay, Val's got the perfect solution: everyone else should get 1 star, that way they're all even, and Val should get 5 stars, that way he's always above. Everyone wins.
> There's no Smut in your story. -5 stars in SmutHub.

Hey, it's still a 5.
> This is why we would use AI to generate memes based on your story. Then readers can judge based on the quality of the memes.

I'm going to be serious for a second here and reply to this... reply... take. The reason Steam handles the situation well is that no one looks at likes and dislikes. The first thing you see is a video: a trailer of the game, or sometimes straight-up gameplay footage. There are also screenshots. Comparing Steam to SH doesn't work. You don't have a video that gives a reader an almost-full understanding of whether they'll like the novel. It doesn't work like that. And the synopsis doesn't help either; no one reads synopses on Steam. People watch videos and look through screenshots.
> I thought you could only review the products you bought on Amazon, not through other vendors. They could and they do keep track of your purchases.

This is not true; I've reviewed products that I bought elsewhere, so Amazon had no way of knowing whether I owned the product.
> This is largely irrelevant; you could add any story to the reading list.

I assume you meant rating, not reading, but it's also not true; it needs to be on your reading list.
> Not necessarily.

That would lead to more vindictive authors. They are already whining that they want negative reviews removed; imagine if they started whining that negative reviewers should be banned.
> A bell curve would be a flawed model because in reality, the points of the bell curve are subjective and we cannot assume ceteris paribus.

On a more serious note, I think you guys are forgetting another type of rater.
Trolls and honest raters? Fuck them. How about we talk about people who care more about the average rating than about their personal rating?
Let's imagine a series with a 4.8/5 average rating. Someone feels it deserves 4/5. In your ideal world (in SH's current rating system), it should be easy: they just rate it 4/5, according to their personal judgment. But what if they think the average rating is more important than their own experience? What should they do then to make the series fall to 4/5 faster?
That's right: pick 1/5. In an ideal world, 1/5 represents the absolute worst of the worst. But this reader doesn't feel that way; for them the average rating is the best representation, and they simply dislike the series for being overrated. A 1/5 rating is the fastest way to drag the average down.
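To put numbers on that (the figures below are made up: a series with 100 existing ratings averaging 4.8/5), the asymmetry is easy to see:

```python
# Made-up figures: a series with 100 ratings averaging 4.8/5.
n, total = 100, 480

def avg_after(extra_votes, score):
    """Average rating after adding `extra_votes` ratings of `score`."""
    return (total + extra_votes * score) / (n + extra_votes)

print(avg_after(1, 4))  # an honest 4/5 barely moves the average
print(avg_after(1, 1))  # a single 1/5 moves it almost five times as far

# 4/5 votes can only push the average *toward* 4.0, never below it;
# 1/5 votes drag it under 4.0 in a finite number of clicks.
k = 0
while avg_after(k, 1) >= 4.0:
    k += 1
print(k)  # number of 1/5 votes needed to sink a 4.8 below 4.0
```

So the "average-first" rater gets there with a couple dozen 1-star votes, while honest 4-star votes can never push the average below 4 at all.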
Are they a troll? (Maybe.) A weirdo? (Yeah.) An asshole? (Maybe.)
You should also know that Steam makes a 'summary' of people's collective likes and dislikes: Overwhelmingly Positive, and so on. In your ideal world, readers don't have to bother with a worse rating system, just like/dislike. But what if, like I said, these readers only care about the 'average rating'? They see a series with too favorable a ratio, so they dislike-bomb it until it finally reaches the Very Negative they want.
So any type of [personal] rating scale becomes less significant when the average rating is publicly shown. That's just the way it is.
So what can we do? Either we scrap the rating system altogether, or we actually adopt a [bell curve] grading system for SH (yes, I was being serious earlier).
I advocate a completely different approach. The approximate order of operations for my *Patent Pending* (JK) "TruescoreSR" (Truescore Story Rating) is as follows, for any new score being put up:
Truescore accuracy would be determined from the accuracy ratings of the accounts giving reviews (based on their downweight % versus the maximum), averaged with an inverse rule so that accuracy rises as more accounts review.
- Adjust scores based on the reviewer's nominal score range - Examples: if they have 7 reviews, all in the range 1-3, stretch their scores onto 1-5. If all their reviews are the same score, they all default to that score, and a heavy weighting penalty is applied (maybe adjust weight down by 50% to start).
- Apply genre-tag average weighting to the rating - Examples: NTR underperforms the site-wide average rating across all NTR stories, so its scores are adjusted upwards somewhat; Isekai overperforms and is lowered slightly. The adjustment is derived by reverse-calculating averages across stories every so often (perhaps once a month).
- Edit: Apply an account-level weighting based on genre accuracy as well; some people are better at rating certain genres than others, after all (perhaps they keep reading things they hate, idk).
- Apply review-count weighting accordingly - Set an arbitrary review limit to reach full weighting, and adjust accordingly (let's say 10 ratings gets you to full weight on this variable, and you get 10% more weighting for each rating until reaching it).
- Apply an account-wide accuracy rating - An arbitrary variable compares your rating against the average rating. When you're off by too much on multiple stories, a small penalty is applied to your account's future ratings. This is weighted only against stories that have Green Quality Truescore ratings (which indicates 90% certainty in the score, based on the types of reviews and total count).
- Periodically adjust and tune parameters provided in these different variable spaces until all numbers work well across genres. With multiple weightings and variables applied on both account and genre levels, it should be easily possible to fine-tune the numbers to encourage good review practices.
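The pipeline above can be sketched in a few lines. Everything here is a placeholder: the constants, the `Rater` fields, and the function names are all assumptions standing in for the parameters the post leaves to later tuning.

```python
from dataclasses import dataclass

FULL_WEIGHT_REVIEWS = 10   # reviews needed to reach full review-count weight
FLAT_RATER_PENALTY = 0.5   # "all reviews are the same score" penalty

@dataclass
class Rater:
    scores: list    # this account's past ratings (1-5)
    accuracy: float # 0-1: how closely past ratings tracked settled averages

def stretch(score, lo, hi):
    """Stretch a rater's nominal range (e.g. 1-3) onto the full 1-5 scale."""
    if hi == lo:          # flat-line rater: no range information to use
        return score
    return 1 + (score - lo) * 4 / (hi - lo)

def weight(rater):
    """Combine the review-count, flat-rater, and accuracy weightings."""
    w = min(len(rater.scores) / FULL_WEIGHT_REVIEWS, 1.0)
    if len(set(rater.scores)) == 1:
        w *= FLAT_RATER_PENALTY
    return w * rater.accuracy

def truescore(ratings, genre_offset=0.0):
    """Weighted mean of range-stretched ratings, shifted by a per-genre offset."""
    num = den = 0.0
    for rater, score in ratings:
        lo, hi = min(rater.scores), max(rater.scores)
        num += weight(rater) * stretch(score, lo, hi)
        den += weight(rater)
    raw = num / den if den else 0.0
    return max(1.0, min(5.0, raw + genre_offset))

# A critic who never rates above 3: their personal-best 3 stretches to a 5.
harsh_critic = Rater(scores=[1, 2, 3, 1, 2, 3, 1, 2, 3, 1], accuracy=0.9)
print(truescore([(harsh_critic, 3)]))
```

The genre offset is where the NTR/Isekai adjustment would plug in (a positive number for underperforming tags, negative for overperforming ones), recomputed from site-wide averages on whatever schedule the tuning settles on.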
That's the gist of my idea.
> Let's imagine a series with a 4.8/5 average rating. Someone feels it deserves 4/5. In your ideal world (in SH's current rating system), it should be easy: they just rate it 4/5, according to their personal judgment. But what if they think the average rating is more important than their own experience? What should they do then to make the series fall to 4/5 faster?
>
> That's right: pick 1/5. In an ideal world, 1/5 represents the absolute worst of the worst. But this reader doesn't feel that way; for them the average rating is the best representation, and they simply dislike the series for being overrated. A 1/5 rating is the fastest way to drag the average down.

That's the core of the problem.
> This is why we would use AI to generate memes based on your story. Then readers can judge based on the quality of the memes.

Isn't that just RR advertising? I think we had a thread laughing at some of them recently...
> There is a valid point in here about 1-star ratings having undue influence when someone is vindictive or just outright dislikes the story. I can see that.

In the 5-star system, however, writers are under much greater pressure to reach a certain value (and once again, the math is on the side of the opposition), while under a like/dislike system there is no such thing.
Instead of giving the lowest possible rating, they would just hit dislike.
(1 + 5 + 5) / 3 ≈ 3.67
vs.
Negative+positive+positive=mostly positive
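The contrast can be put in code. This is a toy model: the "mostly positive" threshold is invented here, and Steam's real labels use more granular buckets.

```python
# Toy comparison: the same three opinions shown as a star average
# versus a Steam-style vote-ratio label.

def star_average(ratings):
    """Plain mean of 1-5 star ratings."""
    return sum(ratings) / len(ratings)

def vote_summary(votes):
    """Crude stand-in for Steam-style labels; the 2/3 cutoff is made up."""
    positive_share = votes.count("+") / len(votes)
    return "mostly positive" if positive_share >= 2 / 3 else "mixed"

print(round(star_average([1, 5, 5]), 2))   # the 1-star shows up as a hard number
print(vote_summary(["-", "+", "+"]))       # the same opinions read as a label
```

One vindictive vote dents a visible numeric target; under the label, it just becomes one negative among the positives.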
Actually, it may lessen the pressure instead.
Say five people liked your story and five disliked it: you clearly see the views are roughly balanced, and the five haters didn't take that much from you. After all, getting one more like looks achievable.
However, if you see your story rated 3.0 when it needs to be 4.5, getting it there feels like much more of a struggle, especially once the other values join in, like the twos and fours.
> I advocate a completely different approach:
>
> The goal is not to deduce a single truescore, but to separate ratings into meaningful groups.
>
> Imagine a fiction where every normal person rates it 2 and every NTR enjoyer rates it 4. Then you can present the ratings like this: there are two reader groups, rating this work 2 and 4 respectively. That way, a reader who self-identifies with one group or the other can make an informed decision on whether to read it. In reality, you never find such a clear-cut population.
>
> My problem then revolves around sorting the reader population into groups. For that, I suggest using a machine learning approach.
>
> Now, I'm not too sure on the details, but maybe you can train a neural network "A" to guess the rating a user will give on book "A" by inputting all their past ratings for every other book available on SH.
>
> The neural network reduces the high-dimensional feature space (ratings on every available book) to a lower-dimensional latent space (the reader's taste profile).
>
> In the trivial NTR example, the latent space should show two lumps of users, one for the NTR enjoyers and another for the normal readers, based on their past ratings. That way, book "A" receives two separate ratings.
>
> This approach can organically create as many groups of readers as needed.
>
> Yes, it's a waste of electricity.

I don't think the problem is actually complicated enough to need a neural network. I think you could just have a mouseover function informing people that "Truescore is genre adjusted"; as such, the story is roughly rated compared to comparable stories.
> You, as the writer, don't want the story being high-rated.

There is a valid point in here about 1-star ratings having undue influence when someone is vindictive or just outright dislikes the story. I can see that.
I also agree that a +1/-1 system is probably more "just" to the majority of authors. But I don't think the rating system is anything more than it is; the differences are marginal. It's not meant to satisfy authors, it's meant to satisfy readers, and to some degree vindictive readers are readers.
Manipulations happen in either system, easily.
The only way to "simultaneously improve the subjective rating that both authors and readers" give to a rating system is to convolute it heavily. My earlier posts try to point that out. I think a better system possibly exists and is doable, but it's complicated, requires lots of fine-tuning, and will still alienate some people. Twitter/YouTube algorithms do this: the majority of people probably like them more than a random system, but they actively adjust perceptions in ways that hurt a minority. You only get to choose which minority gets hurt.
Right now the hurt minority is a subset of authors. The hurt minority in a +1/-1 is a slightly different subset of authors, and a larger subset of readers. In a convoluted system, the hurt minority likely shifts from month to month based on tuning parameters.
I get where you're coming from, but I disagree on a few fundamentals.

You, as the writer, don't want the story being high-rated.
You want your story to be found.
And, hopefully, liked by someone. Not liked as in a thumbs-up; liked as in there are readers who enjoy the story.
The necessity of a good rating adds unnecessary complexity to the simple goal: being read, or writing a good story.
It's no longer about subjective or objective quality; it's about an additional layer of complexity in achieving the goal of being read, one which doesn't revolve around the writing itself.
Marketing is a skill, and it is a unique skill that differs greatly from creative writing.
Now, with ratings, writers can easily be hampered by something that's simply beyond their control: marketing.
That's the reason professional authors don't interact with anyone outside the controlled environments set up by their publishers.
Because the publishers don't want the writer to be the better marketer, but the better writer. Having both skills is rare. Publishers handle marketing, but can't write...
However, amateurs are forced to handle both the writing and the marketing equally well, while the result is outside their control, as the rating is ultimately crowdsourced to people not invested in their success. In fact, sales and profit are not those people's motives.
And more importantly: you can improve as a writer.

You can't improve a rating; it remains even after you've fixed the error that caused it.
> A bell curve would be a flawed model because in reality, the points of the bell curve are subjective and we cannot assume ceteris paribus.

Yeah, you're right. I was too deep into getting justice for the NTR enjoyers. Implementing a bell curve would actually harm more people.
> That's the core of the problem.

I went overboard explaining the obvious, then.
I advocate a completely different approach:
The goal is not to deduce a single truescore, but to separate ratings into meaningful groups.

Imagine a fiction where every normal person rates it 2 and every NTR enjoyer rates it 4. Then you can present the ratings like this: there are two reader groups, rating this work 2 and 4 respectively. That way, a reader who self-identifies with one group or the other can make an informed decision on whether to read it. In reality, you never find such a clear-cut population.

My problem then revolves around sorting the reader population into groups. For that, I suggest using a machine learning approach.

Now, I'm not too sure on the details, but maybe you can train a neural network "A" to guess the rating a user will give on book "A" by inputting all their past ratings for every other book available on SH.

The neural network reduces the high-dimensional feature space (ratings on every available book) to a lower-dimensional latent space (the reader's taste profile).

In the trivial NTR example, the latent space should show two lumps of users, one for the NTR enjoyers and another for the normal readers, based on their past ratings. That way, book "A" receives two separate ratings.
This approach can organically create as many groups of readers as needed.
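Without committing to a neural network, the grouping idea can be mocked up on fabricated data; here plain k-means over reader "profiles" stands in for the learned latent space, and every number below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows = readers, columns = past ratings on two other books (the taste profile).
ntr_fans = rng.normal([5, 1], 0.3, size=(20, 2))   # love book X, hate book Y
others   = rng.normal([1, 5], 0.3, size=(20, 2))
profiles = np.vstack([ntr_fans, others])
book_a   = np.concatenate([rng.normal(4, 0.2, 20),  # fans rate book "A" around 4
                           rng.normal(2, 0.2, 20)]) # everyone else around 2

def kmeans2(x, iters=20):
    """Minimal 2-means over the profile vectors."""
    centers = x[[0, -1]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == k].mean(0) for k in (0, 1)])
    return labels

labels = kmeans2(profiles)
for k in (0, 1):
    print(f"group {k}: mean rating for book A = {book_a[labels == k].mean():.1f}")
```

With clusters this clean, the two groups recover the "around 4" and "around 2" ratings separately instead of collapsing them into a single misleading 3; a real latent space learned from all of SH's rating history would be far messier, but this is the shape of the output.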
Yes, it's a waste of electricity.
> I don't think the problem is actually complicated enough to need a neural network. I think you just have a mouseover function that informs people that "Truescore is genre adjusted"; as such, the story is roughly rated compared to comparable stories.

Ironically enough, because there are divisive views on certain tags/genres, making the ratings suffer for it, the need for a fair rating system itself becomes void. That is to say, implementing weighted or group-segregated rating functions for unpopular tags/genres is no longer necessary, because those tags now act like fandoms on AO3.
Yes, that means it really doesn't carry much meaning, because stories that overlap entirely in genres are rare; it's probably closer to accurate for people searching a tag than a straight rating system, though. If stories only had, like, two tags, maybe a split system would work better, but there are so many tags that you always have to approximate things.
---
Edit: On an added note, we're all talking like Steam is the only +1/-1 system, but Reddit is a cesspool, it also uses that system, and it's probably a closer approximation to SH than Steam is. So I don't think it's clear-cut that +1/-1 is a good system.
> harm more people

Okay, now I support the bell curve.
> Ironically enough, because there are divisive views on certain tags/genres, making the ratings suffer for it, the need for a fair rating system itself becomes void. That is to say, implementing weighted or group-segregated rating functions for unpopular tags/genres is unnecessary, because those tags now act like fandoms on AO3.

The machine learning suggestion doesn't use tags/genres.
I think there are better rating systems. At the same time, I don't care. I got a bad review from an alt because of a forum post I made. You just have to accept that some people are stupid. At some point, you can't let trolls dictate how you feel. The internet is full of trolls, and if I let all of them get to me forever, I'd delete my accounts permanently.