The Layman's Understanding of AI's Value is so Horrifically Bad

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
According to natural selection, people inclined to blind trust shouldn’t survive, but we see the opposite. Why?
This is the opposite of true. People inclined to blind trust have a much higher chance of survival than the highly skeptical. Instincts evolved before higher comprehension, and learning from a parent evolved before society did. In 99 out of 100 cases, blind trust in information from other humans, plus tribalism, keeps you alive where skepticism would get you killed. Blindly believing information from other humans who speak your language is fundamentally ingrained in our DNA, quite literally down to how our language center integrates with the rest of our brain. It's a feature, not a bug.

This is coming from me, someone who is highly skeptical of most information, with a double degree in STEM fields and field experience as an engineer. I've spent most of my life learning how to be properly skeptical while still being sociable when I need to be. Something something pillars of sand.
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
This is the opposite of true. People inclined to blind trust have a much higher chance of survival than the highly skeptical. Instincts evolved before higher comprehension, and learning from a parent evolved before society did. In 99 out of 100 cases, blind trust in information from other humans, plus tribalism, keeps you alive where skepticism would get you killed. Blindly believing information from other humans who speak your language is fundamentally ingrained in our DNA, quite literally down to how our language center integrates with the rest of our brain. It's a feature, not a bug.

This is coming from me, someone who is highly skeptical of most information, with a double degree in STEM fields and field experience as an engineer. I've spent most of my life learning how to be properly skeptical while still being sociable when I need to be. Something something pillars of sand.
I didn’t do any research, I just followed logic. Take the people who died in cults, for example. They blindly trusted their leader and ended up dead.
But if they had used critical thinking, they might have avoided that outcome.
And there are plenty of cases like that where blind trust led to death.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
I didn’t do any research, I just followed logic. Take the people who died in cults, for example. They blindly trusted their leader and ended up dead.
But if they had used critical thinking, they might have avoided that outcome.
And there are plenty of cases like that where blind trust led to death.
"You don't know what you don't know" is the wisdom of tradition. Those people may have lived, sure. In the past, though, ignoring the traditional folklore about the monster in the woods (and why we don't hunt over there) gets you eaten by a tiger. Those who believe survive more often than those who venture out against the advice of their elders.
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
"You don't know what you don't know" is the wisdom of tradition. Those people may have lived, sure. In the past, though, ignoring the traditional folklore about the monster in the woods (and why we don't hunt over there) gets you eaten by a tiger. Those who believe survive more often than those who venture out against the advice of their elders.
This discussion reminded me of Robert Sapolsky’s lectures.
About the topic: I agree with your points.
But you’re applying them a bit too literally, I think.
In the end, it’s the most flexible who survive.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
This discussion reminded me of Robert Sapolsky’s lectures.
About the topic: I agree with your points.
But you’re applying them a bit too literally, I think.
In the end, it’s the most flexible who survive.
The whole field of evolutionary biology (and specifically cognitive evolution) is a mire; the statistical outcome is what we have to go on. Young men are the most biologically disposable, and they happen to get a concoction of hormones at exactly the age that makes them the most distrustful of tradition and the most eager to rebel. This traces back to the same monster-in-the-woods analogy I gave: 9 times out of 10 they die; 1 time out of 10 they hit a resource jackpot because the "monsters" have moved on. If a troop leader or a female died that way it would be detrimental, but for a young man the risk-reward calculus is different.

One could extrapolate that this is why young people in political systems tend to be revolutionary, but once the hormones die down and they have more to lose, they opt for the status quo. Hormones (and hormonal imbalances) have far more influence on our perception of reality than we like to think. All of evolution is a messy business, though; chance events can have wild influence on smaller populations.
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
You guys still believe in evolution???
I have a portrait of Darwin hanging above my bed.
The whole field of evolutionary biology (and specifically cognitive evolution) is a mire; the statistical outcome is what we have to go on. Young men are the most biologically disposable, and they happen to get a concoction of hormones at exactly the age that makes them the most distrustful of tradition and the most eager to rebel. This traces back to the same monster-in-the-woods analogy I gave: 9 times out of 10 they die; 1 time out of 10 they hit a resource jackpot because the "monsters" have moved on. If a troop leader or a female died that way it would be detrimental, but for a young man the risk-reward calculus is different. One could extrapolate that this is why young people in political systems tend to be revolutionary, but once the hormones die down and they have more to lose, they opt for the status quo. Hormones (and hormonal imbalances) have far more influence on our perception of reality than we like to think. All of evolution is a messy business, though; chance events can have wild influence on smaller populations.
If someone says there's a monster in the forest, and one person doesn't believe it, goes into the forest, and never comes back, and then another goes and also doesn't return, then if a third person decides to go, that's just stupid.
Because at that point it's no longer about blind trust.
It's about a lack of critical thinking.
I’m totally lost.
 
Last edited:

CharlesEBrown

Well-known member
Joined
Jul 23, 2024
Messages
4,565
Points
158
I have a portrait of Darwin hanging above my bed.
The Naturalist or the X-Men character? Both?
@Grok is this true?

Edit: Bloody hell, I meant this thread reply as a sarcastic joke (aimed at the AI-proof soiboiis), but it turns out we already have Grok (Twitter's AI) here.
Think they made a mistake giving Grok a female voice - it should sound like Valentine Michael Smith...
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
If someone says there’s a monster in the forest and one person doesn’t believe it and goes into the forest and never comes back.
Then another goes and also doesn’t return, then if a third person decides to go, that’s just stupid.
What if these three people are each separated by 200 years, and have never met anyone who knew the people who disappeared directly, having only heard stories each time? It isn't necessarily stupidity to test it, but nor is it anything more than blind trust from people without firsthand experience. Evolution works on much longer timescales than lifetimes. Other primates also perform adherence-to-tradition rituals and blind faith; see the "monkey ladder experiment". It is generally beneficial to adhere to blind trust, but there is also a fringe benefit in the least valuable members of society retesting those values every so often.
 

ForestDweller

Well-known member
Joined
Feb 18, 2020
Messages
838
Points
133
(this means some new, non-transformers based non-backpropagation trained system of AI occurs),

My dude, all I need is the ability to create a writing-style LoRA so I can tell the AI to mimic my style instead of the generic Western fantasy style it loves to use. You don't need some revolutionary new tech for it.

But that doesn't exist so I'm stuck writing everything manually.

Ok maybe you need permanent long term memory too if you're writing some long series.

That and no censorship.
 

CharlesEBrown

Well-known member
Joined
Jul 23, 2024
Messages
4,565
Points
158
My dude, all I need is the ability to create a writing-style LoRA so I can tell the AI to mimic my style instead of the generic Western fantasy style it loves to use. You don't need some revolutionary new tech for it.
At least one of them can do that - you explicitly tell it to write it "in the style of 'X'" - where "X" can be you, or a specific story you wrote, or a list of examples of multiple things you wrote.
Over on Substack, a guy who teaches a marketing class in real life did a multi stage experiment.
1. For his class, he had AI create a movie poster for a film that didn't exist, in the style of a 40s noir, with some specific parameters, and he had his class write ad copy based on that picture.
2. Then, taking their ad copy he had the AI create a summary of the movie from their words and the existing movie poster.
3. Then, since he had been a film critic before he retired to teaching, he had AI write a review for this movie that didn't exist, in the style of his old movie reviews.
Except for the fact that the AI had to be positive about everything - no matter how negative it started in some comments, everything wound up upbeat and positive - it looked EXACTLY like some of his later reviews for movies that did exist, down to the same "overview, comment about the director, comment about the leads, describe a few scenes and what did or didn't work, then overall assessment" structure he followed.
But that doesn't exist so I'm stuck writing everything manually.

Ok maybe you need permanent long term memory too if you're writing some long series.

That and no censorship.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
My dude, all I need is the ability to create a writing-style LoRA so I can tell the AI to mimic my style instead of the generic Western fantasy style it loves to use. You don't need some revolutionary new tech for it.

But that doesn't exist so I'm stuck writing everything manually.
Either your understanding of the limitations of transformer models is incomplete, or your writing is bad enough that it already reads like AI slop done by hand, because a single LoRA isn't going to fix the issue. The shortfalls of transformer models for longform writing and thinking are too steep a hill to climb. Grok 4, GPT3o, and any other 100B+-parameter models show this to be the case.

I've got numerous coding projects on the backburner because I don't think transformer models can actually handle what I want as a backend service: adequate thinking, reasoning, and through-lines for maintaining coherence. Fundamentally, if you want AGI, it isn't going to come from an even larger transformer model. You need something with self-adjusting feedback (a handful of nodes on each layer of the model that feed backwards into the previous layers, allowing floating parameter adjustment and internalized summarization). The issue is that if you try to implement such a system, you can no longer train the model efficiently, because you can no longer adjust the static parameters with backpropagation-based training methods.

Within a regular transformer model, that means any content that sits outside a single attention window is too much. GPT and other models that "claim" a 100k-token context (just as local Mistral claims 32k) use floating context, which isn't the same thing. Independent tests suggest GPT probably also uses around 16k of real context, padded out with floating context. You can't simply size the context window up to whatever you want, because that means more nodes to train, more memory to operate on, and so on.

Even if the true context window were large enough, you have an immediacy and guidance problem. If I just ramble and talk for an hour straight, the result isn't a novel guided to follow good principles (rising action, etc.). You need controls on top of the model that make it good at writing novels. If you wanted single novels, it'd be possible for someone to build a transformer model for that. It would have to be trained exclusively on completed novels so that novel structure is embedded in its form. It would need a huge real context window, though it could probably skimp on layers to compensate (even for a short novel of 50k words, that's a need for at least a 75k-token context window, and more likely 150k, because you want to be able to give input as it writes).

Even with that, you'd likely need a datacenter to train it and H100s to run it. Since none of the bigwigs in AI are interested in making a novel-writer-only model, the funding chances are slim (though Elon could maybe be convinced to blow that kind of money on something no one really wants). The problem grows exponentially with model size; that's why everything advanced really quickly at first, and now the differences between the latest Claude, Grok, and GPT aren't nearly as obvious or large despite the models ballooning to even bigger sizes.
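To make that back-of-envelope context arithmetic concrete, here is the estimate as a tiny calculation. The tokens-per-word ratio is a rough assumption for English prose, not a measured tokenizer figure:

```python
words = 50_000            # a short novel, per the estimate above
tokens_per_word = 1.5     # rough ballpark; real tokenizers vary by text

novel_tokens = int(words * tokens_per_word)   # tokens for the text alone
with_guidance = novel_tokens * 2              # headroom for steering input as it writes

print(novel_tokens)    # 75000
print(with_guidance)   # 150000
```

Doubling for guidance is the generous end; any running summaries, outlines, or author notes would also have to fit in the same window.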
Ok maybe you need permanent long term memory too if you're writing some long series.
You can already do this. Use RAG-assisted, summarized context from the existing story. Prompt the main model with something like: "These are the top 3 related sections from the ongoing story: (RAG search result 1, RAG search result 2, RAG search result 3)", then throw in the rest of your prompt. One project I've been debating going back to is exactly this: an author's assistant that uses RAG retrieval so you can search your own work approximately (questions like "What was the name of that elf character that has been hanging around?") and still get back the chunk of text from your work, so you can find things your memory is fuzzy on. I'd also add the ability to throw in paragraphs at a time and see the most related items in your existing work, so you could check them against your own context.
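A minimal sketch of that RAG-assisted prompting flow. The bag-of-words "embedding" here is a toy stand-in for a real sentence-embedding model, and the story chunks are hypothetical examples, but the retrieve-then-prompt shape is the same:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real setup would use an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(story_chunks, query, k=3):
    # Rank story chunks by similarity to the query, return the best k.
    q = embed(query)
    return sorted(story_chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

def build_prompt(story_chunks, query):
    # Prepend retrieved chunks to the user's question, as described above.
    context = "\n".join(f"- {c}" for c in top_chunks(story_chunks, query))
    return (
        "These are the top 3 related sections from the ongoing story:\n"
        f"{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical story chunks for illustration.
chunks = [
    "The elf Sylvara has been shadowing the party since the river crossing.",
    "The dwarves argued over the map for hours.",
    "A storm forced them into the abandoned watchtower.",
    "Sylvara finally spoke, warning them about the pass.",
]
print(build_prompt(chunks, "What was the name of that elf character?"))
```

In a real assistant you'd chunk the manuscript into scene-sized pieces, cache the embeddings, and feed `build_prompt`'s output to the model; the retrieval step is what lets the tool answer questions about material far outside the model's context window.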

I'd advise watching 3Blue1Brown's playlist on deep learning, which covers transformers like GPT. Believe me, building a better system is actually quite a complex problem.
That and no censorship.
Head over to Hugging Face and pick up uncensored local models. There is actually a decently large community of people retraining models to perform specific tasks (which is necessary for things like writing, where the entire structure of the transformer model does need to change to compensate for the fact that you no longer want short answers with immediate context to what was just said). It won't get you everything you're looking for, but uncensored models do exist.
 

velvetvertigo

Member
Joined
Apr 26, 2025
Messages
33
Points
18
As a non-native English author, AI has been a huge help for me. Mostly with corrections. I know some people will say, “Why not just use Grammarly?” And sure, that’s an option. But while Grammarly or other similar tools can be used to correct grammar mistakes, that’s not enough anymore.

Especially, but not only, when you're writing in a language that’s not your own.

In my opinion, writing stories isn’t just about getting the grammar right. It’s about rhythm, tone, nuance, lexical choices. It’s about choosing the right word for the feeling you want to create and convey. If you’re not a native speaker, that’s hard. You’re always second-guessing yourself. And those old tools don’t help much with that.

This is one of the reasons why non-native authors often struggle to find their space. Especially in traditional publishing. Very few actually make it. And the ones who do, usually have a publisher behind them. They get professional translators, editors, a whole team. On your own, without that support, it’s almost impossible.

But now things are starting to change. With AI, people like me can finally catch up a bit. I’d never let an AI write an entire novel for me. That’s not the point. I'd lose the purpose of expressing myself, or the stories I'm creating in my mind before sharing them with the readers. But as a support tool for fixing language mistakes that I - as a non-native English speaker - can't find? Yes, for this I use AI. As a tool, like the OP said.

And I’m not just doing it for myself. I want readers to have something that’s nice to read. Something that feels polished, coherent and enjoyable. Thanks to AI, I can now reach a standard I couldn’t afford before. Not without spending money I just don’t have.

So yeah. AI is a tool. Nothing more. But for some of us, it’s finally a tool that helps level the playing field. I still write everything myself. The ideas, the structure, the voice, the dialogues, the scenes. All of this originates in my mind first, and I do not see how an AI could read my thoughts and convert them into words.

The AI just helps me get the language right. And that, for someone writing in a second language, is huge :)
 

ForestDweller

Well-known member
Joined
Feb 18, 2020
Messages
838
Points
133
At least one of them can do that - you explicitly tell it to write it "in the style of 'X'" - where "X" can be you, or a specific story you wrote, or a list of examples of multiple things you wrote.
Over on Substack, a guy who teaches a marketing class in real life did a multi stage experiment.
1. For his class, he had AI create a movie poster for a film that didn't exist, in the style of a 40s noir, with some specific parameters, and he had his class write ad copy based on that picture.
2. Then, taking their ad copy he had the AI create a summary of the movie from their words and the existing movie poster.
3. Then, since he had been a film critic before he retired to teaching, he had AI write a review for this movie that didn't exist, in the style of his old movie reviews.
Except for the fact that the AI had to be positive about everything - no matter how negative it started in some comments, everything wound up upbeat and positive - it looked EXACTLY like some of his later reviews for movies that did exist, down to the same "overview, comment about the director, comment about the leads, describe a few scenes and what did or didn't work, then overall assessment" structure he followed.
I tried it on the usual sites and it didn't work. And I know nothing about the open source solutions. Even for images I'm just using one of those gen sites. I'm not actually using my own computer to gen it.

As a non-native English author, AI has been a huge help for me. Mostly with corrections. I know some people will say, “Why not just use Grammarly?” And sure, that’s an option. But while Grammarly or other similar tools can be used to correct grammar mistakes, that’s not enough anymore.

Especially, but not only, when you're writing in a language that’s not your own.

In my opinion, writing stories isn’t just about getting the grammar right. It’s about rhythm, tone, nuance, lexical choices. It’s about choosing the right word for the feeling you want to create and convey. If you’re not a native speaker, that’s hard. You’re always second-guessing yourself. And those old tools don’t help much with that.

This is one of the reasons why non-native authors often struggle to find their space. Especially in traditional publishing. Very few actually make it. And the ones who do, usually have a publisher behind them. They get professional translators, editors, a whole team. On your own, without that support, it’s almost impossible.

But now things are starting to change. With AI, people like me can finally catch up a bit. I’d never let an AI write an entire novel for me. That’s not the point. I'd lose the purpose of expressing myself, or the stories I'm creating in my mind before sharing them with the readers. But as a support tool for fixing language mistakes that I - as a non-native English speaker - can't find? Yes, for this I use AI. As a tool, like the OP said.

And I’m not just doing it for myself. I want readers to have something that’s nice to read. Something that feels polished, coherent and enjoyable. Thanks to AI, I can now reach a standard I couldn’t afford before. Not without spending money I just don’t have.

So yeah. AI is a tool. Nothing more. But for some of us, it’s finally a tool that helps level the playing field. I still write everything myself. The ideas, the structure, the voice, the dialogues, the scenes. All of this originates in my mind first, and I do not see how an AI could read my thoughts and convert them into words.

The AI just helps me get the language right. And that, for someone writing in a second language, is huge :)
I find the usual AI writing style too purple for my taste. You can see certain words being overused, because most of the novels out there are written in a similar manner. So it can't really help me in that department either.
 