Well anyway, I remember now. It was the main character’s dead twin sister who kept resetting the loop with each failure. She was trying to save them, but things kept failing in different ways.
That is, until the final loop, when they decided on the one way to break it. Tragic game. :blob_blank:
That ending was the one where the characters ended up killing each other. Though the demons forced the main character’s contract to bind their body.
After they died that time, the next loop is the final one, where they regain their memories and break the loop.
An interesting one was Crystar. It’s a loop game where you have to beat the game, I think three times, before you unlock the final chapter.
Each time you beat the game, there is a difference in the prologue. Then you fast-forward to Chapter 6, where the differences play out.
One of the endings...
I accidentally locked the car door on my finger. Somehow it missed all the concerning parts, and I had time to think.
“Well, this is a predicament.”
Before the pain kicked in 20 seconds later. :blob_blank:
For the longest while, I have always concluded a story with the reveal of the character’s strongest ability.
Then I smile with great satisfaction that there is no NG+. :blob_evil_two:
Ya know, I looked back and noticed something: the lion from Narnia, The Lion, the Witch and the Wardrobe, is far better designed than The Lion King’s live-action version.
Well, so it can’t code properly. It can’t engage with writing properly, it can’t search properly. It can’t even give recommendations for games properly.
So, what is the point of AI chatbots then? :blob_hmm_two:
1. Yes, they don’t learn in the traditional sense, but through pattern recognition. Depending on what data was used to train the model, the next output is whatever result the patterns make expected (a rough toy sketch of the idea is below).
The failsafe, though, was that if it does not have data on the particulars, it was also designed to create new...
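To make that pattern-recognition point concrete, here’s a deliberately tiny, hypothetical sketch in Python (just a word-pair frequency table, nowhere near a real model): the “next output” is simply whatever continuation the training text makes most expected, and when there’s no data on a particular input it falls back to guessing.

```python
from collections import Counter, defaultdict
import random

# Toy illustration only: a word-pair (bigram) table that picks the next
# word purely from patterns seen in its training text -- i.e. the
# statistically expected continuation, not anything "learned" in the
# traditional sense. The training_text here is made up for the example.
training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(current: str) -> str:
    """Return the most expected continuation; guess if the word is unseen."""
    if current in follows:
        return follows[current].most_common(1)[0][0]
    # No data on this particular word: fall back to an arbitrary known word.
    return random.choice(words)

print(next_word("the"))   # -> "cat" (the most common pattern after "the")
print(next_word("dog"))   # unseen input -> arbitrary fallback
```

Real models do this over vastly larger patterns, but the "output is the expected result of the training data" idea is the same.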
On the coding side of things, it is also bad. On top of the worse output, it summarizes instead of giving all the information, which is critical for coders.
Been wondering why AI models have been hallucinating a lot more. Though, the results show that in late 2024 it was around a 16% or so chance to hallucinate.
In 2025, with the updates, it went to a 33% to 48% chance to hallucinate.
Results can be even worse if the model does not know how to...