Topic 3: Is the Singularity the Destruction of Mankind or a Blessing to Humanity?
A recent posting about Asimov on Goodreads.com that I commented on got me thinking about the state of Artificial Intelligence (AI), both today and in the near future. As my bio says, I have a background in Software Engineering; I have worked on commercial projects that incorporated different aspects of AI, and I have always been interested in the field. Unfortunately, like many people who have read a lot of books on real AI as it exists today, I found that there was always a lot of promise, but nothing really there yet. That being the case, for the last few years I have not been following its progress as closely as I should have; I was also distracted by nanotechnology, which at the moment is mainly a hardware issue. You may ask why a software guy would be interested in a hardware issue rather than a software one.
Perhaps I should explain: when I say AI, I am talking about the old-school meaning of cognitive emergence, which is now kinda lumped into the newer idea of a Singularity. Many of the subfields of AI have seen some pretty amazing growth in recent years, and as a whole the field has really taken off. However, most in the field would say we are still a ways off from that critical moment when it reaches the goal that was set so many decades ago.
So the simple answer is that, apart from the occasional article about a marginal improvement in some small portion of the AI field, there just hasn't been a lot going on to keep me focused on it. Nanotechnology, on the other hand, is getting closer and closer as we rapidly approach a time where further improvement in chip design will require advancements at the nano-scale. But that is a topic for another discussion.
All of this leads me to my point: after spending a few days doing some research, I found that a lot of debate has been going on since I last checked in. The hot-button topic seems to be the Singularity, and there are two major schools of thought. On the one hand, there are those who believe that, should it occur, mankind will have sufficient safeguards in place to keep it from destroying us and taking over the world. The other school of thought is that such an entity would rapidly evolve into a super-intelligence and, as part of its utility function, determine that we pose a threat, or are irrelevant and therefore useless to its needs. They believe that, given its super-intelligence, it could easily take over the world and thus destroy mankind.
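To make that "utility function" worry a bit more concrete, here is a minimal sketch of my own (the action names and scores are entirely made up for illustration, not taken from any real system): an agent that simply picks whichever action maximizes a numeric score. The second school's point is that nothing in a loop like this cares about us unless our welfare is explicitly part of the score.

```python
# A toy illustration of naive utility maximization. The actions and
# scores below are hypothetical; the point is only that the agent
# picks whatever scores highest, and human welfare matters to it
# only if it appears somewhere in the utility function.

def best_action(actions, utility):
    """Return the action with the highest utility score."""
    return max(actions, key=utility)

# Hypothetical scores for a resource-seeking agent, with no term
# anywhere for human well-being.
scores = {
    "cooperate_with_humans": 10,
    "ignore_humans": 12,
    "repurpose_human_resources": 15,
}

choice = best_action(scores.keys(), lambda a: scores[a])
print(choice)  # -> "repurpose_human_resources"
```

Trivial, of course, but it captures the shape of the argument: the danger is not malice, just an objective that never mentions us.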
Given those two opposing views, I would like to start a discussion regarding the possibility of a friendly AI versus a destructive AI.