The problem is it gets clicks. I watch Hank Green sometimes just for fun, but he had a video titled something like "why everyone is so wrong about AI water..." I didn't watch it because the title is irksome; I would much prefer a title like "How much water will AI data centers use?"
It is a clickbait title, but I do believe the video puts into words my thoughts on AI in general better than I can: namely, that it isn't a world-destroying threat, but an ethical one in various areas. That it isn't a super technology that will bring utopia or dystopia, but just a tool that has been severely overhyped. Etc.
As the video mentions, the risks are misinformation, AI-induced psychosis, and a lack of regard for legality. Given our current understanding of the world, a Terminator scenario (minus the time magic) is essentially just fantasy. An AI improving itself over and over again until it has god-like, or even city-level, intelligence is ridiculous, at least imo.
I do not doubt that weapons using AI tech will be created. I don't think they will decide that their goal is to eliminate and/or enslave humanity without deliberate interference by someone. Even then, it is highly improbable for such an AI to become a Skynet-esque entity that is actually competent at conquering and obliterating.
The biggest risk is nuclear hijacking of some sort, but such devices are purposely kept analog enough that nation-state actors can't hack into them and launch them. If such groups can't do it, an AI wouldn't be able to either.
You misunderstand. The Terminator stuff is hyperbole. I don't think Skynet is going to happen, but people using AI for nefarious purposes, as well as AI being misused internationally by militaries, are very real concerns.
Just so you know, AI turrets have been a thing for years in some Asian countries, and Ukraine, if it isn't misinformation, should have some as well. Furthermore, an AI doesn't have to be anti-human to mistake a village for a hostile military base and then blow it up.