Wednesday, January 22, 2014

Re: Singularity Destruction of Mankind or Blessing to Humanity?

Humberto wrote: "It could go both ways.

A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, those that have a different ethnic look, and those that are the 47% whose vote they don't care about. This is scary. I think the military should be prohibited from developing artificial intelligence. By the way, this destruction machine need not be too intelligent; the IQ of an animal would be enough. On the other hand, the artificial intelligence could be more intelligent, and more rational, than its creators and decide to help us. In my series 'Living Dangerously in Utopia' I present an artificial intelligence who matures by emulating a human female, thereby falling in love with her man. The trio saves the world, to make it short. I also wrote 'Practical Artificial Intelligence', where I deal with this friendly/unfriendly situation."

Some AI researchers share your concern that a military regime could create an unfriendly AI. Also, as Jaime said, the creators could miss something and have the AI totally blow past us; some researchers are working to prevent that as best they can. A good discussion of this is found in:

https://www.goodreads.com/book/show/18489235-the-hanson-yudkowsky-ai-foom-debate 

Though I haven’t read your book yet (I plan to), I did notice that you provide a timeline in the description. Does that timeline extend to when the first AI with actual Turing-style intelligence will emerge?
