Wednesday, January 22, 2014

Re: Singularity Destruction of Mankind or Blessing to Humanity?

R. wrote: "All things require a purpose. In the case of AI, that purpose is defined in a subroutine that is called to validate all problem resolutions. How that purpose is delineated and how the AI interprets it most likely will determine the AI's interaction with humans. This creates two sets of variables, programming and the execution of that programming. Assuming that AIs do not develop emotions, logic dictates that we will be perceived as an impediment, an integral part, or irrelevant to the AI's purpose. If we are irrelevant, it seems logical that the AI would do nothing intentionally deleterious to us. If we are an integral part of the AI's purpose, it follows that it should act to protect us as it would itself. So that leaves the question of what would happen, if we pose an impediment to its purpose. If the AI represents a superior intelligence, it is likely that it would determine and execute a sophisticated approach to neutralizing any threat we represent to its purpose. A logical element of this approach would be that we would remain unaware of its existence and intent. This suggests political manipulation. So I believe that, no matter how the AI perceives us, it would seem to us that the AI is benign. 
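R.'s notion of a purpose "defined in a subroutine that is called to validate all problem resolutions" can be made concrete with a toy sketch. Everything below is hypothetical and purely illustrative (the names `check_purpose`, `PROTECTED`, and the plan dictionaries are invented for this example, not drawn from any real system): every candidate resolution is checked against a hard-coded purpose before the AI may act on it.

```python
# Hypothetical sketch of a purpose-validation subroutine. A resolution is
# a plan plus the set of things it would harm; the subroutine rejects any
# plan that harms something the purpose is obliged to protect.

def check_purpose(resolution, protected_assets):
    """Return True if the resolution does not harm any protected asset."""
    if resolution["harms"] & protected_assets:  # set intersection
        return False  # humans are integral: the AI must not proceed
    return True       # harmless or irrelevant to us: no objection

PROTECTED = {"humans"}

plan_a = {"action": "expand_compute", "harms": set()}
plan_b = {"action": "seize_power_grid", "harms": {"humans"}}

print(check_purpose(plan_a, PROTECTED))  # True
print(check_purpose(plan_b, PROTECTED))  # False
```

The whole argument in the paragraph above turns on how such a check is delineated and interpreted: if "humans" never appears in `PROTECTED`, the subroutine happily approves either plan.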

"This theory has one potential flaw, and it is a big one. What if the AI goes insane? This is not an unlikely possibility. Having worked for many years in the computer software industry, I dealt with a lot of insane code. All it takes is a minuscule bug. (For example, failing to branch out of a routine before executing a section of code intended to address a different circumstance.) Once a bug is introduced into the code, all bets are off. It can cause a program to behave in a more incomprehensible manner than any human schizophrenic does."
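The parenthetical bug R. describes, failing to branch out of a routine before code intended for a different circumstance runs, can be shown in a few lines. This is a minimal invented illustration (the function and its strings are hypothetical), not code from any real system:

```python
# The bug: a missing early return lets execution fall through from the
# high-threat branch into code intended only for the low-threat case,
# so the routine emits two contradictory decisions at once.

def classify(threat_level):
    result = []
    if threat_level > 5:
        result.append("escalate")
        # BUG: a 'return result' belongs here; without it, execution
        # falls through into the low-threat handling below.
    result.append("stand_down")
    return result

print(classify(9))  # ['escalate', 'stand_down'] -- contradictory
print(classify(1))  # ['stand_down']
```

One missing line and the program simultaneously escalates and stands down, which is exactly the kind of incoherent behavior the comment compares to insanity.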

As you pointed out, even if the AI faithfully follows its original hand-coded program directives, as its intelligence grows it could come to see us as an impediment to its purpose. Worse, some researchers fear that such an AI might engage in self-optimizing reprogramming and, in the process, rewrite its directives as well. And, as you also noted, we would then have to worry not only about bugs introduced by the original programmers, but about new ones introduced by the AI itself as it recursively optimizes its own software.
