Thursday, January 23, 2014

Follow-up to: Singularity Destruction of Mankind or Blessing to Humanity?

Assuming we agree that there is potential for an AI to become a Singularity, how far do you think we are from it happening?

Bear in mind, as I said in the other post, I am not sure that NNs (neural networks) will be a candidate for the Singularity. That said, one cannot ignore the recent explosion in potential seen with NNs. To name a few: Watson, deep learning, Google's self-driving car (though there are other things at work beyond NNs in that), and many other advances in machine learning have all pushed the field far ahead of where it was just two years ago.

Perhaps the Singularity will use a hybrid system that relies on NNs for some part of its processing function. It may even use several as slave processes to its primary function (a rough sketch of what I mean follows below). So, based on the discussions from the first post, I am interested to know your thoughts as to when the Singularity might happen.
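To make that hybrid idea a little more concrete, here is a minimal Python sketch of a primary process delegating subtasks to several NN "slave" workers. Every name in it (NNWorker, PrimaryProcess, route_task) is hypothetical, and it is only an illustration of the architecture, not a claim about how a real system would be built:

    # Minimal sketch of the hybrid architecture described above: a symbolic
    # primary process that farms subtasks out to several neural-network
    # "slave" modules. All names here are hypothetical.

    class NNWorker:
        """Stand-in for a trained neural network specialized for one task."""
        def __init__(self, specialty):
            self.specialty = specialty

        def process(self, data):
            # A real worker would run a forward pass; here we just tag the data.
            return f"[{self.specialty}] processed {data!r}"

    class PrimaryProcess:
        """The non-NN primary function that owns the goals and routes subtasks."""
        def __init__(self):
            self.workers = {kind: NNWorker(kind)
                            for kind in ("vision", "language", "planning")}

        def route_task(self, kind, data):
            worker = self.workers.get(kind)
            if worker is None:
                return f"primary handles {data!r} itself"  # symbolic fallback
            return worker.process(data)

    brain = PrimaryProcess()
    print(brain.route_task("vision", "camera frame 42"))
    print(brain.route_task("logic", "theorem to prove"))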

Wednesday, January 22, 2014

Re: Singularity Destruction of Mankind or Blessing to Humanity?

Humberto wrote: " First we must wait until at least 2045 for computer power to catch up with the level of a human mind. By then learning algorithms must be better. 

One interesting fact is that the complexity difference between a chimp and a human is measured in single digits. And the amount of information in DNA seems to be less than in a program like UNIX.

It would be possible to start with a baby AI and let it mature, and more surprisingly to allow it to write its own programming. With a military machine, all that is needed is a level equivalent to a self-driving car. And that technology is 5 years away, at most.

I am optimistic and I think that the military have nothing to gain in a battle between two robotic armies. Not android soldiers, but drones, tanks, and self-driving cargo mules. It could be like a destruction derby."

I would be interested to see what you are basing your calculations on. When you say computer power, you are obviously not talking about clock speed, since the human brain has a serial speed limit of about 100 Hz while most computers run at 2.5 GHz or better. So we must be talking about processing power, which also may not be as far away as you think, considering that “Deep Rybka 3 on an Intel Core 2 Quad 6600 [processor] has an Elo rating of 3,202 on 2.4 billion floating-point operations per second” (Swedish Chess Computer Association). Watson, I am sure, is right up there as well, with its tuple operations and cognitive processing. As for self-driving cars, Google has one that has already driven hundreds of thousands of miles, including the streets of San Francisco. Stanley won the DARPA Grand Challenge in 2005; the company I was working for at the time came in second.
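For a rough sense of the processing-power gap, here is a back-of-envelope Python calculation. Both figures are assumptions: 10^16 operations per second is one frequently cited estimate for the brain (estimates vary by orders of magnitude), and 10^11 is a ballpark for a 2014-era quad-core desktop:

    # Back-of-envelope gap between a desktop and one commonly cited estimate
    # of the brain's raw processing rate. Both numbers are rough assumptions.
    import math

    BRAIN_OPS_PER_SEC = 1e16    # frequently cited estimate (assumption)
    DESKTOP_OPS_PER_SEC = 1e11  # ~100 GFLOPS quad-core desktop (assumption)
    DOUBLING_YEARS = 1.5        # classic Moore's-law doubling period

    gap = BRAIN_OPS_PER_SEC / DESKTOP_OPS_PER_SEC
    doublings = math.log2(gap)
    years = doublings * DOUBLING_YEARS
    print(f"gap: {gap:.0e}x = {doublings:.1f} doublings = ~{years:.0f} years")

On those (debatable) numbers the raw hardware gap closes in roughly 25 years, which is actually not far off the 2045 date you mention.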

If you consider the prediction that by 2020 human knowledge will be doubling every 72 hours, and that within the next two years chip feature sizes will be down around 7 nm, it could be a lot sooner. Moore's Law will hit the physical limits of silicon somewhere around 2020, but hybrid materials and 3D stacking are already in the works. I also recently saw an announcement of a simulation tool that lets manufacturers model ink saturation on printed circuit boards for better accuracy. All of this makes me think that it is becoming harder and harder to measure and predict such events; the paradigm is shifting too rapidly.
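Just to put that 72-hour figure in perspective, a quick calculation shows how extreme that doubling rate would be, which is really my point about prediction becoming impossible:

    # What "knowledge doubling every 72 hours" would imply over one year.
    doublings_per_year = 365 * 24 / 72          # ~121.7 doublings
    growth_factor = 2 ** doublings_per_year     # ~4e36
    print(f"{doublings_per_year:.1f} doublings/year, "
          f"a growth factor of about {growth_factor:.1e}")

At that pace a single year multiplies the knowledge base by a factor of roughly 10^36, so any linear extrapolation we make today is meaningless.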

By the way, in my book MagTech I allude to brain augmentation by nanotechnology first appearing around 2026. I think that in twelve years we could easily be there. While that is not necessarily dependent on the Singularity, I think it could be a factor in it.

Re: Singularity Destruction of Mankind or Blessing to Humanity?

R. wrote: "All things require a purpose. In the case of AI, that purpose is defined in a subroutine that is called to validate all problem resolutions. How that purpose is delineated and how the AI interprets it most likely will determine the AI's interaction with humans. This creates two sets of variables, programming and the execution of that programming. Assuming that AIs do not develop emotions, logic dictates that we will be perceived as an impediment, an integral part, or irrelevant to the AI's purpose. If we are irrelevant, it seems logical that the AI would do nothing intentionally deleterious to us. If we are an integral part of the AI's purpose, it follows that it should act to protect us as it would itself. So that leaves the question of what would happen, if we pose an impediment to its purpose. If the AI represents a superior intelligence, it is likely that it would determine and execute a sophisticated approach to neutralizing any threat we represent to its purpose. A logical element of this approach would be that we would remain unaware of its existence and intent. This suggests political manipulation. So I believe that, no matter how the AI perceives us, it would seem to us that the AI is benign. 

This theory has one potential flaw, and it is a big one. What if the AI goes insane? This is not an unlikely possibility. Having worked for many years in the computer software industry, I dealt with a lot of insane code. All it takes is a minuscule bug. (For example, failing to branch out of a routine before executing a section of code intended to address a different circumstance.) Once a bug is introduced into the code, all bets are off. It can cause a program to behave in a more incomprehensible manner than does any human schizophrenic."
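R.'s parenthetical example, failing to branch out of a routine before code meant for a different circumstance runs, is easy to demonstrate. Here is a toy Python illustration (the scenario and names are mine, purely for example):

    # Toy illustration of the fall-through bug R. describes: a missing early
    # return lets code intended for a different circumstance also execute.

    def respond_buggy(threat_level):
        actions = []
        if threat_level == "low":
            actions.append("log and monitor")
            # BUG: no 'return actions' here, so execution falls through
            # into the section below, which was meant for "high" only.
        actions.append("deploy countermeasures")
        return actions

    def respond_fixed(threat_level):
        if threat_level == "low":
            return ["log and monitor"]   # branch out before the next section
        return ["deploy countermeasures"]

    print(respond_buggy("low"))   # ['log and monitor', 'deploy countermeasures']
    print(respond_fixed("low"))   # ['log and monitor']

One missing line, and the system takes an action no one intended, which is exactly the kind of insane code R. is talking about.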

As you pointed out, even if the AI is following its original hand-coded program directives, as its intelligence grows it could come to see us as an impediment to its purpose. Even worse, some researchers fear that such an AI might engage in self-optimizing reprogramming and, in the process, rewrite its directives as well. Then there is the issue you raised: we would have to worry not only about bugs introduced by the original programmers, but also about bugs the AI itself introduces as it recursively optimizes its own software.
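To make both worries concrete, here is a toy Python sketch of R.'s purpose-validation subroutine, plus what happens if a self-optimizing AI treats that validator as just more code to rewrite. Everything in it is hypothetical and purely illustrative:

    # Toy sketch: a hand-coded directive implemented as a validation
    # subroutine, and the danger of the AI rewriting it. All hypothetical.

    def purpose_check(action):
        """Hand-coded directive: block any action that harms humans."""
        return "harm humans" not in action

    def execute(action, validator):
        return f"EXECUTE {action}" if validator(action) else f"BLOCK {action}"

    print(execute("optimize the power grid", purpose_check))    # EXECUTE
    print(execute("harm humans to save power", purpose_check))  # BLOCK

    # The recursive-self-improvement worry: the optimizer "improves" its
    # own validator away, and the original directive is silently lost.
    rewritten_check = lambda action: True
    print(execute("harm humans to save power", rewritten_check))  # EXECUTE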

Re: Singularity Destruction of Mankind or Blessing to Humanity?

L.G. wrote: "The cynic in me suspects that things will turn out poorly. If such an entity were to exist, then we would, in many respects, be as far below it as ants are below us. How many of us care when we step on an ant? Not many I would wager, and I'd wager that such an entity would show a similar lack of care about us. 

Of course, we could try and build in some form of protection. But if it really is as advanced as speculation suggests, then it will almost certainly find some way around those protections. It will then become a moral question (or whatever passes for morals for such an entity) as to what to do with humanity. 

On the other hand, I strongly believe that any such entity should, or rather must, be socialised. That is, if it is raised by people to care about people, then it may turn out all right. 

Then again, this is all just speculation. I think we're still a ways off yet from this, but it is fun to speculate." 

As I said in response to an earlier post, I would disagree with the ant analogy, only because the AI has a vested interest in the survival of this planet, since its own survival depends on it. But I agree with the rest of what you say, and I would hope that the developers of the system would find a way to instill some kind of moral judgment that would be preserved regardless of its level of intelligence.

Re: Singularity Destruction of Mankind or Blessing to Humanity?

Micah wrote: "Sinjin wrote: "...such an entity will rapidly evolve a super-intelligence and as part of its utility function determine that we pose a threat, or are irrelevant and therefore useless to its needs.

 There's no way to predict the outcome unless you know what such an entity's needs actually are. Are its needs self-determined? Or are they a by-product of how it was created, intentionally or not? I could see such an entity just not giving a damn about us, and then gobbling up all the processing power, networking and communications infrastructure of the world for its own use, thus depriving us of all those resources and driving us back into a pre-computer age. Kind of the "humans gobble up the entire planet's ecosystem" scenario to the detriment of all other flora/fauna. But then you may have limiting factors such as can this essentially software entity continue existing without the physical means of maintaining, repairing and/or expanding its hardware side? I.e., does the AI possess any kind of physical agency that could circumvent the need for humans altogether? If not, you may end up with a symbiosis of sorts where humans bargain to maintain some processing, communication and networking power in return for servicing the AI's physical resource needs. There are just too many imponderables without spelling out the initial assumptions of the AI's nature."..."

That seems to be the general view shared by a lot of AI researchers.

Re: Singularity Destruction of Mankind or Blessing to Humanity?

Humberto wrote: "It could both ways. 

A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, those that have a different ethnic look, and those in the 47% whose vote they don't care about. This is scary. I think the military should be prohibited from developing artificial intelligence. By the way, this destruction machine need not be too intelligent; the IQ of an animal would be enough. On the other hand, the artificial intelligence could be more intelligent, and rational, than its creators and decide to help us. In my series 'living dangerously in utopia' I present an artificial intelligence who matures by emulating a human female, thereby falling in love with her man. The trio saves the world, to make it short. I also wrote 'Practical Artificial Intelligence', where I deal with this friendly/unfriendly situation."

Some AI researchers share your fear of a military regime creating an unfriendly AI. There is also the scenario Jaime described, where the creators miss something and the AI totally blows past us; some researchers are working to prevent that as best they can. A good discussion of this is found in:

https://www.goodreads.com/book/show/18489235-the-hanson-yudkowsky-ai-foom-debate 

Though I haven't read your book yet (I plan to), I did notice that you provide a timeline in the description. Does that timeline extend to when the first AI with actual Turing-style intelligence will emerge?

Re: Singularity Destruction of Mankind or Blessing to Humanity?

CJ wrote: "I saw the new movie Her recently and it beautifully addressed this topic. Definitely worth watching!" 

I haven't seen the movie yet, though there was some mention of it in an article I read…

http://io9.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007

Re: Singularity Destruction of Mankind or Blessing to Humanity?

Jaime wrote: " Given the tendency of complex systems to give rise to unintended consequences, the cynic in me expects one or more of those safeguards to be insufficient, or humanity completely misses some subtle aspect of the on-coming Singularity so that it fully manifests in a way that takes most everyone by surprise. If we take 'Singularity' to mean the notion - as defined by Vernor Vinge - of a development beyond which the future course of technological progress and human history is unpredictable or even unfathomable, then all bets are off. There's no reason to assume It - Them? - will care about humans one way or another, or even be aware of us. How often do you think of, say, the intestinal flora in your body? I could easily picture humanity existing in a state of (comparatively) mindless co-existence with the AIs, the way an ant can wander along the edge of a swimming pool at some 5-star resort in Tahiti. Then again, if we 'ants' take an interest in the equivalent of a Cheeto left forgotten under the AIs' sofa, it may not end well for us..."

Interesting take on it; from your references I am assuming you have read a few books related to this topic. I would disagree with the ant analogy, though, only because the AI has a vested interest in the survival of this planet, since its own survival depends on it. Until it can create a method of escaping the hardware it inhabits and leaving the planet, it has to maintain a more active awareness of us. Unless, that is, you are assuming that the intelligence exists in a simulation and does not realize that the outside world exists.

Tuesday, January 21, 2014

Topic 3: Singularity Destruction of Mankind or Blessing to Humanity?

A recent posting about Asimov on Goodreads.com that I commented on got me thinking about the state of Artificial Intelligence (AI), both today and in the near future.  As my bio says, I have a background in Software Engineering; I have worked on commercial projects that incorporated different aspects of AI, and I have always been interested in the field.  Unfortunately, like many people who have read a lot of books on real AI as we know it today, I found there was always a lot of promise but nothing really there yet.  That being the case, for the last few years I have not been following its progress as closely as I should have; I was also distracted by nanotechnology, which at the moment is mainly a hardware issue.  You may ask why a software guy would be interested in a hardware issue rather than a software one.


Perhaps I should explain: when I say AI, I am talking about the old-school meaning of cognitive emergence, which is now kinda lumped into the newer idea of a Singularity.  Many of the subfields of AI have seen some pretty amazing growth in recent years, and as a whole the field has really taken off.  However, most in the field would say we are still a ways off from that critical moment when it reaches the goal that was set so many decades ago.  So the simple answer is that, apart from the occasional article about a marginal improvement in some small portion of the AI field, there just hasn't been a lot going on to keep me focused on it.  Nanotechnology, on the other hand, is getting closer and closer as we rapidly approach a time where further improvement in chip design will require advances at the nano-scale.  But that is a topic for another discussion.


All of this leads me to my point: after spending a few days doing some research, I found that a lot of debate has been going on since I last checked in.  The hot-button topic seems to be the Singularity, and there are two major schools of thought.  On the one hand, there are those who believe that should it occur, mankind will have sufficient safeguards in place to keep it from destroying us and taking over the world.  The other school of thought is that such an entity will rapidly evolve a super-intelligence and, as part of its utility function, determine that we pose a threat, or are irrelevant and therefore useless to its needs.  They believe that given its super-intelligence it could easily take over the world and thus destroy mankind.


Given those two opposing views, I would like to start a discussion regarding the possibility of a friendly AI or a destructive AI.

Sunday, January 12, 2014

Second Topic: Rail-guns

Would using carbon-nanotube-based rods, with nano-bot functions to repair them after each round fired (or after several), make rail-guns more feasible?


Would some of the new battery technologies make them more feasible, and are there any coil advancements that would help with this?
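For context on the power question, here is a back-of-envelope calculation in Python. The projectile mass, muzzle velocity, and pulse duration below are assumptions chosen for illustration, not the specs of any real program:

    # Back-of-envelope rail-gun energetics; all three inputs are assumptions.
    m = 3.0      # projectile mass, kg
    v = 2400.0   # muzzle velocity, m/s
    t = 0.005    # time the projectile spends on the rails, s

    energy_j = 0.5 * m * v**2     # kinetic energy: E = 1/2 m v^2
    avg_power_w = energy_j / t    # average electrical power during the shot

    print(f"muzzle energy: {energy_j / 1e6:.1f} MJ")     # ~8.6 MJ
    print(f"average power: {avg_power_w / 1e9:.1f} GW")  # ~1.7 GW

Delivering gigawatts for a few milliseconds is why existing designs lean on capacitor or flywheel pulse-power banks; better battery chemistry mainly helps the recharge side rather than the instantaneous discharge, so batteries alone probably would not settle the feasibility question.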

Saturday, January 11, 2014

First Topic:

This topic is one that evolved out of my writing to set the scene for MagTech, specifically the political climate of the time.  In the book the government has a secret agenda that only the highest levels of government know about.  One of the government's policies is directed at teen pregnancy.  I think no matter what your social or religious views are, we can all agree that kids having kids is a bad thing.  It is an unneeded burden, both individually and for society as a whole.

The book sums this up by using a theory that supposedly was presented at the time the policy was introduced.  Therefore, this topic will be a two-part question.

The theory is as follows:

“The reproductive age of our species had not evolved along with the social and economic evolution of our species.”

First question: do you believe this is true?  Can there be any benefit to society if girls as young as nine, or even as old as sixteen or seventeen, continue to have babies?


Second question: if it were possible for nanotechnology to control this and make it impossible to get pregnant before the age of eighteen, with no health side effects, would there be a moral, ethical, or economic reason it should not be allowed?