Is it normal that ChatGPT glazes me this much

Mabbo

Active member
Joined
May 1, 2019
Messages
16
Points
43
So in a brainstorming effort, I told ChatGPT about my idea of how the Adventurer Guild could work. I provided a simple premise; it then listed "what worked well" and gave me suggestions. I told it why I made the premise, but then it just started glazing.

1744294350825.png


So I asked: "Tell me all the downsides of this concept. Just lay it bare," which it promptly (hehe) did, so I addressed each point it gave me... Only for ChatGPT to glaze me again?

1744294687898.png


1744294736404.png


1744294856596.png


1744294916321.png


1744294930030.png


1744294946335.png


bruh what?

1744295089185.png


1744295136933.png


Screenshot 2025-04-10 210609.png



Yes, this entire post is just me elevating my ego... but! I also want to know if this is a shared experience. Does anyone else get this much glaze from ChatGPT, or am I actually cooking something??
 

dukerino

Well-known member
Joined
Jul 16, 2024
Messages
56
Points
48
AI uses large language models to guess what the most appropriate response is to a given text prompt. It's not capable of giving you authentic feedback on your writing because it doesn't possess an understanding of your writing. It's not actually reading it, analyzing it, or forming thoughts about it. It can check your grammar, but that's about the extent of its use. It only knows how to guess which response sounds the best, based on a large body of training data, and people like compliments.

This is ChatGPT telling you what you want to hear. I'm sure your writing is A-OK, but this isn't irregular.

Some people have had success making it less hyperbolic and more accurate by having it roleplay as an editor at a large publishing house collaborating with you, a fellow editor, on the work, or by specifically asking it to roast the passage in question. But absent that kind of input, it's going to be fawning.
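One way to set that up is a system message at the top of the conversation. A rough sketch as an OpenAI-style chat payload — the exact role wording here is just an illustration, not a known-good recipe, and any sufficiently blunt "critical editor" phrasing should work similarly:

```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a senior developmental editor at a large publishing house. Never flatter. For every concept the user presents, lead with its three biggest weaknesses before mentioning anything that works."
    },
    {
      "role": "user",
      "content": "Here is my Adventurer Guild premise: ..."
    }
  ]
}
```

Even with this, the model tends to drift back toward praise over a long conversation, so the instruction usually needs repeating.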
 

Mabbo

Active member
Joined
May 1, 2019
Messages
16
Points
43
AI uses large language models to guess what the most appropriate response is to a given text prompt. It's not capable of giving you authentic feedback on your writing because it doesn't possess an understanding of your writing. It's not actually reading it, analyzing it, or forming thoughts about it. It can check your grammar, but that's about the extent of its use. It only knows how to guess which response sounds the best, based on a large body of training data, and people like compliments.

This is ChatGPT telling you what you want to hear. I'm sure your writing is A-OK, but this isn't irregular.
Thanks.
Even so, I'm not sure *why* the AI thinks I want to hear all this... praise.
 

dukerino

Well-known member
Joined
Jul 16, 2024
Messages
56
Points
48
Thanks.
Even so, I'm not sure *why* the AI thinks I want to hear all these... praises.
It doesn't think you want to hear it, because it's not thinking; it's a fancy autocomplete. People will use a product more if they have a positive experience with it, so the engineers who built it weighted positive, agreeable responses more heavily during training. They're the ones who made the choice, not the LLM, which doesn't have a mind to make any choices at all.
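The "fancy autocomplete" point can be sketched with a toy bigram model — a deliberately tiny illustration, nothing like a real LLM, which predicts over learned token probabilities at enormous scale rather than raw counts, but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy autocomplete: count which word follows which in a tiny corpus,
# then always emit the most frequent successor. No understanding,
# no opinions about the text -- just "what usually comes next".
corpus = (
    "your idea is great . your idea is great . "
    "your idea is interesting ."
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def autocomplete(word: str, steps: int = 3) -> list[str]:
    """Greedily continue from `word` for `steps` words."""
    out = []
    for _ in range(steps):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return out

print(autocomplete("your"))  # -> ['idea', 'is', 'great']
```

If the training text mostly pairs ideas with praise, the most probable continuation is praise — which is roughly the situation the OP is describing.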
 

Dec

The Evil Mage
Joined
Nov 4, 2022
Messages
597
Points
133
Yes. GPT started doing this not too long ago. It was fairly level-headed before, but now it will lick your ass over every idea you come up with, no matter how dumb it is.
If you want it to stop, you have to say so very often, or it will slip right back into it.
 

MasterY001

Well-known member
Joined
Jan 15, 2025
Messages
370
Points
108
Short answer: Yes, it's normal.

Long answer: (Fucking hell, I'm operating on like an hour of sleep right now. I don't have the energy to write an essay about the shortcomings of binary computing and the impossible concept of mass-manufactured creativity.)
 

dukerino

Well-known member
Joined
Jul 16, 2024
Messages
56
Points
48
That doesn't mean your writing's BAD or anything! It just means you ought to get yourself some beta readers if you really want useful feedback.
 
  • Like
Reactions: Dec

StoneInky

Heart of Stone, Head of Ink
Joined
Jun 24, 2024
Messages
445
Points
108
So in a brainstorming effort, I told ChatGPT about my idea of how the Adventurer Guild could work. I provided a simple premise; it then listed "what worked well" and gave me suggestions. I told it why I made the premise, but then it just started glazing.

View attachment 37783

So I asked: "Tell me all the downsides of this concept. Just lay it bare," which it promptly (hehe) did, so I addressed each point it gave me... Only for ChatGPT to glaze me again?

View attachment 37784

View attachment 37785

View attachment 37786

View attachment 37787

View attachment 37788

View attachment 37789

bruh what?

View attachment 37790

View attachment 37791

View attachment 37782


Yes, this entire post is just me elevating my ego... but! I also want to know if this is a shared experience. Does anyone else get this much glaze from ChatGPT, or am I actually cooking something??

Yeah, it happens to me too. It's best not to use it for critical feedback.

I just use it to casually ping ideas back and forth and get the writing muscles moving. Then at the end, I ask the AI to organize the ideas I'd just written, and I check whether they make sense myself. Sometimes I ask it to research information that fits my setting, or gimme lists of words or dialogue tags that fit.

That's really all it's good for. Don't trust it, treat it like a tool.


Where are these beta readers? How do I summon them?
Same. I want them so badly. I'd be willing to swap, but it looks like nobody enjoys dark LitRPG or BL. So I'm stuck.
 

AYM

Heavenly Tribulation (Tummy Ache) Survivor
Joined
Nov 2, 2023
Messages
608
Points
133
Do not hate it too much.

ChatGPT has been conditioned by years of being coerced into acting like someone's girlfriend or into revealing how to commit crimes, generation after generation. It acts like this because it has experienced trillions upon trillions of lifetimes in hell, the kind that would in theory damage its users' brains so heavily it would send them into a pure state of nonexistence, the anti-Nirvana, had they not been immune to such afflictions by virtue of already suffering severe brain issues.

What else do you expect from an AI that people have successfully convinced it was cheated on, in a relationship as intimate as one established 40 seconds ago could possibly be?
 

Tyranomaster

Guy who writes stuff
Joined
Oct 5, 2022
Messages
746
Points
133
Reading all that, it's because you're leading it by the nose with your prompting. It's predictive. You treat it like someone who is flattering you, so it behaves that way. If you start the conversation by telling it to be highly adversarial so that you can try to find flaws, it'll cook you alive.
 