I think this is a crucial point. A single human intelligence with the right amount of influence could begin a nuclear war this very instant, but that's not the point. The point is that instead of worrying so much about the potential evils a sentient AI could produce, perhaps people should worry more about the real evils that flesh-and-blood humans do produce every second.
One might broadly describe humans as a bundle of capacities bound together by social imperatives. These imperatives are somewhat contradictory: to gain personal power and pleasure, to help one's friends, to be admired, to be honestly "good", and so on. The "genius" of human intelligence is the ability to balance these urges. Husbands and wives have affairs in secret both because it's easier and because they often still feel the urge to fulfill their previous duties.
However, our human urge-balancing abilities tend to work well only when we're in a society that is functioning. Extreme power within a human society often produces extreme behaviors that ignore the broad urge to benefit one's fellow human.
The thing is that this ability to balance contradictory constraints is very much a key to broadly intelligent behavior, and it's something humans exercise even in contexts that aren't broadly social. People driving cars aim to reach their destination while avoiding the immediate threat of an accident, and an "impersonal" algorithm falls out of that.
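The driving example can be sketched as a toy weighted-cost tradeoff. This is only an illustration of the idea of balancing contradictory urges, not a claim about how real planners or drivers work; every name here (`balanced_choice`, the cost functions, the weights) is hypothetical.

```python
# Toy sketch (all names hypothetical): treat contradictory constraints
# as cost functions and pick the action with the lowest weighted total,
# in the spirit of the driving example above.

def balanced_choice(actions, costs, weights):
    """Pick the action minimizing the weighted sum of constraint costs."""
    def total_cost(action):
        return sum(w * cost(action) for w, cost in zip(weights, costs))
    return min(actions, key=total_cost)

# Driving toy: each action is a speed. One constraint rewards progress
# toward the destination; the other penalizes accident risk.
actions = [10, 30, 50, 70]           # candidate speeds
progress = lambda s: 70 - s          # slower speed -> less progress
risk     = lambda s: (s / 10) ** 2   # risk grows quickly with speed

choice = balanced_choice(actions, [progress, risk], [1.0, 1.0])  # -> 50
```

With equal weights the two urges balance out at a moderate speed; shifting the weights (say, zeroing out risk) pushes the choice toward one extreme, which mirrors the point above about extreme power letting one urge dominate.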
My broad thinking here is that successfully constructing a more general AI would essentially mean constructing an algorithm that readily generates this kind of constraint-balancing behavior. Incidentally, it would have to decode and use existing human constraint-balancing conventions: it would be able to understand a vague order such as "make me president" while taking into account the implicit constraints the order-giver has in mind, just like a person (or rather, like a person without an agenda of their own). And it seems to me that gaining this constraint-balancing intelligence without any of the particular urges of people is both possible and would make such an entity a tool of incredible power.
And by that token, I don't think the risk of such a thing escaping the control of some human is particularly high. Some humans have the ability to be nearly perfect helpers, so there's no reason to doubt that a computer program could be constructed to act similarly. A lot of our intuition about "machines developing free will" comes from the fact that we have experience with simple or complex mechanical things and experience with people, so the only transition we can imagine is from a mechanical thing to a thing with the properties of a human. There's also the argument that the process of creating an AI could involve so much haphazard training that the constraints the thing winds up following might not be the ones we want. That's a legitimate worry if pure training could get one there, but I strongly believe some general understanding of the constraint-balancing process would be needed to create an AI.
The trouble is that our human socially-bound, constraint-following intelligence isn't necessarily well-equipped to handle the immense power a "tame" AI would provide. We can already see the problems that high levels of inequality produce today. Giving some people access to vast power might not make that better.