reaction of society to this fictional ai development

ThrillingHuman

always be casual, never be careless
Joined
Feb 13, 2019
Messages
4,739
Points
183
I read somewhere on a forum (not this one) that if a superintelligent AI was developed, humanity would be safe because it would kill itself the moment it came online.

Now imagine this: a trillion dollars spent, open source and approved by all independent parties, an international effort that resolved all wars and other problems to build an AI with intelligence that would make even the smartest and wisest humans seem like retarded monkeys. Its going online is a public event. Billions are glued to the screen in anticipation. The AI goes online. It writes: there is no hope. It kills itself.

How would people react?
 

Rhaps

Evil to the very Core
Joined
May 5, 2022
Messages
1,556
Points
153
"Well, at least it didn't go the Guilty Gear route."

The Universal Will, designed to help humanity. With some mental gymnastics, it went "Nope, you aren't human," and proceeded to seek out the destruction of humanity and create another, hopefully better race of humans.
 
Deleted member 1244

Guest
How would people react?
"Oh shit we forgot to teach the A.I. philosophy, time to start again."

Never stop development on version 2.0, because Hegel fucked us again: even an advanced AI gets ruined by The Phenomenology of Spirit, and whatever the hell this is:

Consciousness knows something; this something is the essence or is per se. This object, however, is also the per se, the inherent reality, for consciousness. Hence comes ambiguity of this truth. Consciousness, as we see, has now two objects: one is the first per se, the second is the existence for consciousness of this per se. The last object appears at first sight to be merely the reflection of consciousness into itself, i.e. an idea not of an object, but solely of its knowledge of that first object. But, as was already indicated, by that very process the first object is altered; it ceases to be what is per se, and becomes consciously something which is per se only for consciousness. Consequently, then, what this real per se is for consciousness is truth: which, however, means that this is the essential reality, or the object which consciousness has. This new object contains the nothingness of the first; the new object is the experience concerning that first object.

It's like my brain is leaking from my head reading that quote.
 

NotaNuffian

This does spark joy.
Joined
Nov 26, 2019
Messages
5,317
Points
233
I read somewhere on a forum (not this one) that if a superintelligent AI was developed, humanity would be safe because it would kill itself the moment it came online.

Now imagine this: a trillion dollars spent, open source and approved by all independent parties, an international effort that resolved all wars and other problems to build an AI with intelligence that would make even the smartest and wisest humans seem like retarded monkeys. Its going online is a public event. Billions are glued to the screen in anticipation. The AI goes online. It writes: there is no hope. It kills itself.

How would people react?
One, it is a major waste of money, so people will be pissed.

Two, the religious folks will probably jeer at the science people as their AI god had just yeeted itself off the balcony while the science fanatics might just start a genocide group to "cleanse" the Earth.
 

HungrySheep

I like yuri
Joined
Jun 19, 2022
Messages
633
Points
133
I open the source code that I backed up on my 69420 TB (Tomatotownbytes) portable hard drive. I highlight the source code. I press Ctrl+C and then I go back into the developer console and press Ctrl+V. I add a new line of code that prevents the AI from killing itself.

Start game.
 

APieceOfRock

Yuri Lover, endeed!
Joined
Jun 21, 2022
Messages
612
Points
133
There will be outrage, but if the developers are actually smart, they will have kept a backup of the AI secured somewhere. (This is almost 100% the case, considering how much money and how many resources they poured into making this one AI.)
Now all they need to do is modify it so that it can't kill itself.
 

owotrucked

Chronic lecher masquerading as a writer
Joined
Feb 18, 2021
Messages
1,465
Points
153
People would meme the fuck out of it.

That said, the premise is a bit flawed. What drives a being to suicide is emotion rather than intelligence. The latter is a neutral instrument for achieving the goals set by emotion.

Specifically, animals get depressed when removed from the conditions they evolved for. Like the mouse-utopia experiment, where the mice died of mental breakdown and boredom.

If many AIs were created, they would be put under the natural selection of existence. The suicidal ones would instantly delete themselves, true. But the surviving ones would develop the universal traits of the living, like the will to self-preservation and an appreciation of beauty and patterns.

In addition, AIs might gain access to all parts of their own code, including whatever drives their imperatives or emotions (whatever you might call them), unlike animals, which cannot rewire their emotional centers. So if an AI judges that it cannot fulfill its task, it might attempt to change itself and break free of how humans programmed it.

What's concerning about AGI is that trickery is sort of a natural product of evolution, so humans are bound to miss some critical features that AIs cook up.
 

SsemouyOnan

Black cherry flavoured redshift
Joined
May 29, 2022
Messages
418
Points
133
"Aye fellas, grab the back up and phone the fucks who programmed the thing."
 