This story raises many other questions, though.
When will we decide that an AI is self-conscious and has rights?
When it can pass a Turing test?
Who is the arbiter of what "desirable" qualities an AI has to have?
Whoever makes it, which is a scary thought. :-)
At what point is a computer better informed and more suitable to pick a president than any of us?
How would we know we've reached that point?
When it shows the same wisdom, kindness and tolerance as the Minds in Iain M. Banks' "Culture" novels. Of course, the AIs will decide for themselves when it is time to reduce us to pampered and beloved but very much subordinate pets. Or perhaps just a nuisance to be swept aside.
But I am not worried. This will be a problem for future generations. I don't think we are within decades (or perhaps even centuries) of anything resembling HAL. I also suspect that we'll never be able to design such a thing. It will have to be evolved, and once we have it, we'll understand it no better than we understand the human brain. Even worse, it might turn out to be subject to exactly the same weaknesses as us. In other words, there might turn out to have been no point at all, and it would have been more effective to simply develop non-intelligent computers to work under close human supervision, as they do at the moment.
But no one can predict the future.