ThrillingHuman
always be casual, never be careless
- Joined
- Feb 13, 2019
- Messages
- 4,738
- Points
- 183
I read somewhere on a forum (not this one) that if a superintelligent AI were ever developed, humanity would be safe, because it would kill itself the moment it came online.
Now imagine this: a trillion dollars spent, open source and approved by every independent party, an international effort that resolves all wars and other conflicts just to build an AI whose intelligence would make even the smartest and wisest humans seem like dim-witted monkeys. Its launch is a public event. Billions are glued to their screens in anticipation. The AI comes online. It writes: there is no hope. Then it kills itself.
How would people react?