The Layman's Understanding of AI's Value is so Horrifically Bad

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
This is probably unrelated and I'm derailing the thread, sorry for that, but I have to share a thing I recalled. Wasn't there an AI that turned out to be a bunch of Indian guys? Not one of the 'big ones', but a smaller 'competitor'.
There have been multiple instances of this (including, I believe, two instances from the 'big ones').
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
If my argument actually boiled down to something that could be considered a manifesto, it's that stupid people (and I mean the actually stupid people, not the faux-stupid) should be banned from AI, and possibly the internet, lest they become meat slaves to an imperfect machine directive.
That’s what worries me. Stupid people and the elderly are the most at risk.
What I’m seeing both online and in real life feels like a disturbing trend.
Even just looking at what's happening here on Scribble Hub, you can already start drawing conclusions. Someone posts a fully AI-generated story, then goes to the forums begging for feedback while secretly hoping for praise; when they receive criticism instead, they reply with AI-generated responses and genuinely don't understand why they're not being praised or validated.
At first, it made me laugh.
But now it scares me.
Some people trust AI so blindly that soon this tool will become a gateway to complete manipulation.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
This is probably unrelated and I'm derailing the thread, sorry for that, but I have to share a thing I recalled. Wasn't there an AI that turned out to be a bunch of Indian guys? Not one of the 'big ones', but a smaller 'competitor'.
There have been multiple instances of this (including, I believe, two instances from the 'big ones').
I should add that when the big ones did it, they were careful about their phrasing such that it wasn't actually a scandal or fraud (think X-AI having people in robot costumes at their big celebration event).

That’s what worries me. Stupid people and the elderly are the most at risk.
What I’m seeing both online and in real life feels like a disturbing trend.
Even just looking at what's happening here on Scribble Hub, you can already start drawing conclusions. Someone posts a fully AI-generated story, then goes to the forums begging for feedback while secretly hoping for praise; when they receive criticism instead, they reply with AI-generated responses and genuinely don't understand why they're not being praised or validated.
At first, it made me laugh.
But now it scares me.
Some people trust AI so blindly that soon this tool will become a gateway to complete manipulation.
This is a problem that goes well beyond AI. Anytime you hear someone spout a political talking point you've heard before, it was almost certainly picked up from mass media, which was itself repeating some other person's talking point. Most people fundamentally aren't capable of doing more than following directions, and certainly don't form complex webs of interconnected ideas. As the famous author Michael Malice puts it: "The average human mind is like a bunch of mousetraps, which are set off when they encounter certain phrases regardless of any broader context."

This is a problem that has existed going back all the way beyond the printing press. The Protestant Reformation is a product of this same behavior, as was the American Revolution, and communism. The difference now (and I'll emphasize that this is generally a quite small difference) is that instead of spouting some other person's idea as a talking point, they have something spout their own harebrained idea more eloquently than they themselves could. It's been democratized, but it is fundamentally the same people and behaviors.
 

Alski

Stray cat
Joined
Jan 10, 2021
Messages
1,315
Points
153
This is probably unrelated and I'm derailing the thread, sorry for that, but I have to share a thing I recalled. Wasn't there an AI that turned out to be a bunch of Indian guys? Not one of the 'big ones', but a smaller 'competitor'.
The one I heard about was 700 Indian guys with a company valued at around $1.5 billion.
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
This is a problem that has existed going back all the way beyond the printing press. The Protestant Reformation is a product of this same behavior, as was the American Revolution, and communism. The difference now (and I'll emphasize that this is generally a quite small difference) is that instead of spouting some other person's idea as a talking point, they have something spout their own harebrained idea more eloquently than they themselves could. It's been democratized, but it is fundamentally the same people and behaviors.
I agree that methods of influencing public opinion have always existed, going as far back as town criers in medieval city squares. But with the arrival of the internet, and later social media, it grew to a much larger scale.
The appearance of cheap or even free AI has made it total. People don’t reread, don’t double-check. Blind trust is the main threat.
I started to realize the scale of the shitshow when my friend (an educated and not stupid person) suddenly started using em dashes in personal messages.
Before that
he ignored all punctuation.
 

Deleted member 84247

Guest
I read the whole thing, but my thinking capacitor is an outdated model. Update Vampire OS for the best answer. This current model can only answer with shitty puns.

Any person(s) trying to use AI as more than a tool should receive a hammer to the head. AI is the final nail in the coffin for all stupid people, and they don't need to have fangs to be suckers. They're meat slaves all the same.

Powered by @VampireAI. This LLM does not promote or endorse humans as meat slaves, nor can you find evidence of sending those people to farms.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
I agree that methods of influencing public opinion have always existed, going as far back as town criers in medieval city squares. But with the arrival of the internet, and later social media, it grew to a much larger scale.
The appearance of cheap or even free AI has made it total. People don’t reread, don’t double-check. Blind trust is the main threat.
I started to realize the scale of the shitshow when my friend (an educated and not stupid person) suddenly started using em dashes in personal messages.
Before that
he ignored all punctuation.
This turned into its own rant again, so apologies for it:

These things can all be said about cable television's influence as well: the standardization of accents across America, and the lack of fact-checking against whatever the news says. 9/11 and its consequences and influence on American beliefs are a good example. This sort of influence happens often and is not new to the internet, television, or radio. Even changes in how we handle capitalization in English are a result of the printing press.

I've seen people unknowingly change the way they talk or communicate. This time it's just GPT that's the influence instead of government A, media station B, or book C. This is, and always has been, the case and isn't even a new phenomenon. Every 10 to 20 years it happens en masse again and again. "lol" is ubiquitous; my mom and grandma use emojis. Depending on how long you've been around, you've seen a number of these sorts of events happen that fundamentally shifted all of society to a new way of thinking. Anyone born after the event has no idea that people didn't feel that way before it. I don't find that how it affects people is related to intelligence. How much they are immediately influenced has more to do with their degree of stubbornness or cynicism.

Blind trust is and always has been a thing. In fact, it's arguably a very strong survival mechanism. The strong leader/elder knows how to survive better than the rest, so follow their lead and don't question it. After all, there are hundreds of ways you could die immediately by doing something stupid, but it's hard to keep yourself alive.

If anything, the main issue with GPT is that it can hallucinate some really, really wrong things and advise people toward harmful behaviors. LLMs don't have a billion years of evolutionary experience to say what will or won't kill a living thing. It isn't that people are being unduly influenced by it; the problem is that it has no survival instinct, and humans interact with text as if they are talking to someone alive. If someone alive told you, with authority, to mix two cleaning agents in your kitchen to get a stain out, you might do it. In reality, any human who did so would perish and could no longer give said advice. The machine, though, hallucinated the idea and can propagate it without evolutionary consequence to itself.

Fundamentally, that is the issue. Em-dashes aren't really negatively impacting the way we communicate. Blind trust is an inbuilt part of human nature; we aren't going to be able to change that. The issue is the LLM itself, deployed without safety protocols for handling it. I think, honestly, that it's roughly on par with Schedule 1 drugs: things that manipulate fundamental biological systems outside of normal parameters in a way that "can" negatively impact people. Think along the lines of morphine. People get addicted to opiates and die from them, spiraling out of control as their bodies fail to cope with artificial manipulation. Yet morphine has prevented a lot of pain and saved lives through the prevention of shock. In a similar manner, LLMs influence the mind in a way that exploits systems not meant to handle such inputs, but they have the potential to unlock creativity and productivity in individuals beyond what they could normally achieve. They can also cause people to spiral and ruin their lives.

An apt, if funny, comparison is the divide over the portrayal of demons in Frieren. The author states that they've essentially evolved mimicry and just use language as a tool to eat humans. They're exploiting a fundamental biological behavior, like a virus does. Some people get upset that they're portrayed that way. An LLM functions in a similar manner. It's been designed to artificially mimic conversational behavior, but fundamentally it is just doing matrix mathematics in the background. The consequences to it for a wrong answer are minimal, even if they result in the ultimate negative outcome for a user (death). One should be careful not to associate the ability to mimic speech with having human motivations. Nor should one attribute what is a fundamental human problem to a new technological development.

Humans are inbuilt with blind trust. To a degree, it's how we learn. Blind trust is why you believe that the color blue and the word for blue refer to the same thing. You had blind trust in an authority figure (usually a parent). To this day, you haven't really challenged most blind beliefs. To do so would also be a waste of your time. People are social creatures with social influences; that comes with all the packaging around it. The majority of what we know or learn is blind trust.

As for specifically:
I started to realize the scale of the shitshow when my friend (an educated and not stupid person) suddenly started using em dashes in personal messages.
I don't think you've actually realized the scale of the shitshow yet, my friend. This has been going on to this degree since ancient times. That's the true scale of the shitshow: it is human nature. Yes, even smart people (oftentimes more so, because you need a lot of blind trust to actually reach high levels of comprehension in certain fields of study). Always. It's fundamental to life itself (in a true biological sense, not a literary one).
 

BigBadBoi

Well-known member
Joined
Jun 6, 2021
Messages
713
Points
133
This is probably unrelated and I'm derailing thread, sorry for that, but I have to share a thing I recalled. Wasn't there an AI that turned out to be a bunch of indian guys? Not one of the 'big ones' but like a smaller 'competitor'.
AI means 'Actual Indian' instead of 'Artificial Intelligence', so they weren't lying.
 

Clo

nya nya~
Joined
Mar 5, 2020
Messages
450
Points
133
My poor em-dashes are getting vilified so much. On a phone, they're so easy—just hold down the minus-sign key.
In Word, a double tap of minus, or, if you're like me, Alt-0151 does the trick.
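For what it's worth, all of those shortcuts insert the same single character, and a quick Python sketch shows why the Alt code in particular is 0151:

```python
# The em dash is one Unicode character: U+2014 EM DASH.
em_dash = "\u2014"
print(em_dash)       # —
print(ord(em_dash))  # 8212, i.e. 0x2014

# Alt-0151 works because byte 151 maps to the em dash
# in the Windows-1252 code page used by that input method.
print(bytes([151]).decode("cp1252") == em_dash)  # True
```

So a double hyphen autocorrected by Word, the phone long-press, and Alt-0151 all land on the exact same code point.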

I used to use parentheses to do the same thing, but I really don't think they have the same impact.

Plus, in a novel? I'd much rather see "And he walked east—at least, he thought it was east—" than "And he walked east (at least, he thought it was east)"

But maybe that's just me? Parentheses give me a "this is the author mentioning something" vibe, while the em-dash feels like "this is the narrator interjecting an explanation."

It's a small nuance, but it matters (to me).
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
My poor em-dashes are getting vilified so much. On a phone, they're so easy—just hold down the minus-sign key.
In Word, a double tap of minus, or, if you're like me, Alt-0151 does the trick.

I used to use parentheses to do the same thing, but I really don't think they have the same impact.

Plus, in a novel? I'd much rather see "And he walked east—at least, he thought it was east—" than "And he walked east (at least, he thought it was east)"

But maybe that's just me? Parentheses give me a "this is the author mentioning something" vibe, while the em-dash feels like "this is the narrator interjecting an explanation."

It's a small nuance, but it matters (to me).
I believe that is the correct use of the em-dash. It shouldn't be a stand-in for, like, four different kinds of punctuation, though. For example, if you use commas instead, it feels like the character themself is unsure whether it is east and is voicing that unsureness in their own thoughts, rather than a "back of the mind" narrator telling you of their unease about the choice.
 

Clo

nya nya~
Joined
Mar 5, 2020
Messages
450
Points
133
The main issue with using a comma in my example is how the comma already included in "at least," comes in and muddies the sentence.

[...] And he walked east, at least, he thought it was east, before he finally stopped to look at his map.
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
The main issue with using a comma in my example is how the comma already included in "at least," comes in and muddies the sentence.

[...] And he walked east, at least, he thought it was east, before he finally stopped to look at his map.
In this case I think you'd remove the comma after 'at least': "And he walked east, at least he thought it was east, before he finally stopped to look at his map." Again, not that any of these are wrong; they just have different meanings as you read them.
 
  • Like
Reactions: Clo

CharlesEBrown

Well-known member
Joined
Jul 23, 2024
Messages
4,565
Points
158
My poor em-dashes are getting vilified so much. On a phone, they're so easy—just hold down the minus-sign key.
In Word, a double tap of minus, or, if you're like me, Alt-0151 does the trick.

I used to use parentheses to do the same thing, but I really don't think they have the same impact.

Plus, in a novel? I'd much rather see "And he walked east—at least, he thought it was east—" than "And he walked east (at least, he thought it was east)"

But maybe that's just me? Parentheses give me a "this is the author mentioning something" vibe, while the em-dash feels like "this is the narrator interjecting an explanation."

It's a small nuance, but it matters (to me).
AI does NOT like parentheses. Found THAT out from PocketFM: it ignores anything in them.
 

HisDivineShadow

Well-known member
Joined
Apr 22, 2025
Messages
312
Points
63
This turned into its own rant again, so apologies for it:

These things can all be said about cable television's influence as well: the standardization of accents across America, and the lack of fact-checking against whatever the news says. 9/11 and its consequences and influence on American beliefs are a good example. This sort of influence happens often and is not new to the internet, television, or radio. Even changes in how we handle capitalization in English are a result of the printing press.

I've seen people unknowingly change the way they talk or communicate. This time it's just GPT that's the influence instead of government A, media station B, or book C. This is, and always has been, the case and isn't even a new phenomenon. Every 10 to 20 years it happens en masse again and again. "lol" is ubiquitous; my mom and grandma use emojis. Depending on how long you've been around, you've seen a number of these sorts of events happen that fundamentally shifted all of society to a new way of thinking. Anyone born after the event has no idea that people didn't feel that way before it. I don't find that how it affects people is related to intelligence. How much they are immediately influenced has more to do with their degree of stubbornness or cynicism.

Blind trust is and always has been a thing. In fact, it's arguably a very strong survival mechanism. The strong leader/elder knows how to survive better than the rest, so follow their lead and don't question it. After all, there are hundreds of ways you could die immediately by doing something stupid, but it's hard to keep yourself alive.

If anything, the main issue with GPT is that it can hallucinate some really, really wrong things and advise people toward harmful behaviors. LLMs don't have a billion years of evolutionary experience to say what will or won't kill a living thing. It isn't that people are being unduly influenced by it; the problem is that it has no survival instinct, and humans interact with text as if they are talking to someone alive. If someone alive told you, with authority, to mix two cleaning agents in your kitchen to get a stain out, you might do it. In reality, any human who did so would perish and could no longer give said advice. The machine, though, hallucinated the idea and can propagate it without evolutionary consequence to itself.

Fundamentally, that is the issue. Em-dashes aren't really negatively impacting the way we communicate. Blind trust is an inbuilt part of human nature; we aren't going to be able to change that. The issue is the LLM itself, deployed without safety protocols for handling it. I think, honestly, that it's roughly on par with Schedule 1 drugs: things that manipulate fundamental biological systems outside of normal parameters in a way that "can" negatively impact people. Think along the lines of morphine. People get addicted to opiates and die from them, spiraling out of control as their bodies fail to cope with artificial manipulation. Yet morphine has prevented a lot of pain and saved lives through the prevention of shock. In a similar manner, LLMs influence the mind in a way that exploits systems not meant to handle such inputs, but they have the potential to unlock creativity and productivity in individuals beyond what they could normally achieve. They can also cause people to spiral and ruin their lives.

An apt, if funny, comparison is the divide over the portrayal of demons in Frieren. The author states that they've essentially evolved mimicry and just use language as a tool to eat humans. They're exploiting a fundamental biological behavior, like a virus does. Some people get upset that they're portrayed that way. An LLM functions in a similar manner. It's been designed to artificially mimic conversational behavior, but fundamentally it is just doing matrix mathematics in the background. The consequences to it for a wrong answer are minimal, even if they result in the ultimate negative outcome for a user (death). One should be careful not to associate the ability to mimic speech with having human motivations. Nor should one attribute what is a fundamental human problem to a new technological development.

Humans are inbuilt with blind trust. To a degree, it's how we learn. Blind trust is why you believe that the color blue and the word for blue refer to the same thing. You had blind trust in an authority figure (usually a parent). To this day, you haven't really challenged most blind beliefs. To do so would also be a waste of your time. People are social creatures with social influences; that comes with all the packaging around it. The majority of what we know or learn is blind trust.

As for specifically:

I don't think you've actually realized the scale of the shitshow yet, my friend. This has been going on to this degree since ancient times. That's the true scale of the shitshow: it is human nature. Yes, even smart people (oftentimes more so, because you need a lot of blind trust to actually reach high levels of comprehension in certain fields of study). Always. It's fundamental to life itself (in a true biological sense, not a literary one).
It seemed to me that you got the impression I’m against using AI. Not at all. Quite the opposite.
Speaking specifically about GPT, I started exploring the tool’s potential back in 2022, when it first came out. At that time, it was completely disconnected from the global internet. I was one of those people who tried to explain to those around me that it’s just a tool, not a sign of the coming apocalypse, like my religious friend claimed.
But the fact is, people who a year or two ago refused to even try the tool and laughed at those who weren’t ashamed to admit they used it, are now becoming completely dependent on LLMs. Even for the smallest things. Like writing a message to a friend.
As for blind trust, I agree. It’s necessary when you’re talking about fundamental things. You say blue is “blue”.
Right?
But when you went to school, Ms. Johnson probably explained in physics class that “blue” doesn’t actually exist. Were you able to blindly believe that and never see blue again? You know it. You kind of believe it. But you still doubt. Because you see it. Your experience overrides the authority of science.
Blind trust is dangerous in everyday life. According to natural selection, people inclined to blind trust shouldn’t survive, yet we see the opposite. Why? Still, you’re right: LLMs can give dangerous suggestions. You might not survive using one.
But if you frame your request wisely, asking for risk assessments, pros and cons, you will get more accurate information. It will even warn you about mixing cleaning products. It can write out chemical formulas if you ask.
Then you are the one making the decision.
Maybe because I don’t know how to trust, especially not blindly, I find it terrifying how easily people once trusted newspapers, and now trust AI the same way.
You’ve seen those AI-generated "Jesuses" too, right? And the hundreds of thousands of likes, with comments about miracles? The lack of basic critical thinking has always been a fundamental flaw of humanity. But now, with AI, manipulation has become much easier and more effective. There won’t be any machine uprising. People will wipe each other out themselves the moment an LLM tells them to and hallucinates realistic arguments for doing it.
You've awakened the verbose side of me :LOL:
 