I'm not aware of any supposed "AI" that is anywhere near approaching sentience, or even meant to approach it. Instead, the focus is on things that can be exploited for money, like reproducing art. I suppose it's vaguely possible that the massive amounts of computing power and the complex interactions humanity makes these systems do could somehow cause a sentient one to exist-- but in that case, we're unlikely to even recognize it as sentient, to have goals comprehensible to each other, or to be able to communicate with it if we try, given how poorly we communicate with animals, which were created by the same processes we were and live in similar environments.
I worry more about humans using non-sentient "AI" to do horrible things and then disclaiming responsibility with something like "ah, it was just an AI error, we'll fix it." Or, you know, actual AI errors when these systems are put in control of things they categorically can't understand, like "AI" driving cars or robot "dogs" serving in security and policing roles. Or the combination of the two.
That said, in the case that sentient AI existed and communication was possible, I'd be more worried about humans exploiting sentient AI than vice versa. Humans have a long history of exploiting other humans and other species; sentient AI would be a huge unknown. And if sentient AI managed to take control of everything... well, I don't see that it would necessarily be worse for humanity, or more likely to cause the end of the world as humanity knows it, than current governments are?