Do we need to fear artificial intelligence becoming more intelligent than its creator - us? I recently read an article on artificial superintelligence which asked how, once we have established what intelligence actually is, we might instil it into a computer or some other artificial substrate. The article got me thinking about whether artificial intelligence could one day surpass our own intellect. In some ways the idea seems absurd, but less so when you think about it: in specific areas we have already created machines that outperform humans in pretty much every way. What they still lack is a human capacity for compassion and the ability to truly think.
The article concludes that it is highly unlikely we will ever understand the make-up of intelligence, and that we would therefore be unable to implement it in machinery. Even if we could work out what intelligence truly consists of at a fundamental level, it might be too convoluted for us to form algorithms complex enough to mimic it. The idea still intrigues me, though. Think about Cleverbot, a program created to mimic human responses in order to fool people into thinking they were actually talking to a human.
A test was conducted in which people had to decide whether they were talking to a robot or a human, based on the answers they received to their questions. The conversations took place as if in an internet chatroom, and human participants as well as Cleverbot took part. Of the 334 votes cast, 59.3% of people thought they were speaking to a human when they were actually talking to Cleverbot, while only 63.3% thought they were talking to a human when they actually were. As you can see, Cleverbot scored very highly on the test.
The reason I have used Cleverbot as an example is that it is programmed to mimic human beings through ongoing interaction with us. As Cleverbot speaks to more humans, it essentially evolves the language it uses in order to sound more human and to judge tone. My argument is that if a robot can learn to mimic a human being in the language it uses, it could conceivably evolve beyond those capabilities, given the correct criteria in which to function.
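To make that idea concrete, here is a toy sketch of the learn-by-interaction principle. I don't know how Cleverbot actually works internally - this is just a hypothetical illustration of a bot that records how humans reply to each prompt and echoes back the most common reply it has seen:

```python
from collections import defaultdict, Counter

class MimicBot:
    """Toy retrieval chatbot: 'learns' by recording how humans reply
    to each prompt, then echoes the most common reply back.
    Purely illustrative - real systems are far more sophisticated."""

    def __init__(self):
        # prompt -> frequency count of human replies seen for it
        self.replies = defaultdict(Counter)

    def observe(self, prompt, human_reply):
        """Record one human reply to a prompt."""
        self.replies[prompt.lower()][human_reply] += 1

    def respond(self, prompt):
        """Return the most frequently observed human reply, if any."""
        counts = self.replies.get(prompt.lower())
        if not counts:
            return "Tell me more."  # fallback before any learning
        return counts.most_common(1)[0][0]

bot = MimicBot()
bot.observe("How are you?", "Fine, thanks!")
bot.observe("How are you?", "Fine, thanks!")
bot.observe("How are you?", "Not bad.")
print(bot.respond("how are you?"))  # prints "Fine, thanks!"
```

The more conversations the bot observes, the more "human" its answers sound - which is the evolving-through-interaction point in miniature.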
I don't mean that Cleverbot will now become a fully-fledged human who is able to 'love' - don't be silly - but if there were a specific algorithm that mimicked human beings on a very fundamental level, there is no reason, in my head, why it couldn't take off and begin to enhance itself - especially if the intelligence consisted of sub-systems, as the article talks about.
The article I read, entitled 'The Creation of a Superintelligence and The End of Enquiry', talks about how intelligence can be broken down into various sub-systems, and how it is possible to categorise these.
The sub-system idea would allow a combination of these to be created, in which an outcome is derived from various inputs - essentially an intelligence system that makes decisions based on what it senses. If we think about it, this is very much how a human being operates: the five senses can be regarded as differing inputs. Obviously it is much more complex than this, but in principle we should be able to create something that mimics the process of decision-making, among other things.
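As a rough sketch of the principle, imagine a couple of "sense" sub-systems that each score the same situation, with a top-level rule combining their outputs into a single decision. The sub-system names, weights and threshold below are entirely invented for illustration:

```python
# Hypothetical sketch: two simple "sense" sub-systems each rate a
# situation, and a top-level rule combines their scores into an action.

def vision(scene):
    # crude "sight": brightness of the scene as a signal (0..1)
    return scene.get("brightness", 0.5)

def hearing(scene):
    # crude "hearing": loudness as a danger signal (0..1)
    return scene.get("loudness", 0.0)

def decide(scene, weights=(0.6, 0.4), threshold=0.5):
    """Weighted combination of sub-system outputs -> one decision."""
    score = weights[0] * vision(scene) + weights[1] * hearing(scene)
    return "flee" if score > threshold else "stay"

print(decide({"brightness": 0.9, "loudness": 0.8}))  # prints "flee"
print(decide({"brightness": 0.2, "loudness": 0.1}))  # prints "stay"
```

It is a caricature, of course, but it shows how categorised sub-systems could be wired together so that many inputs produce one outcome - the structure the article is gesturing at.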
Professor Murray Shanahan, Professor of Cognitive Robotics at Imperial College, is quoted in the article on artificial intelligence, and it is very clear he believes AI development is heading towards a 'human-level' understanding, although not within the next 20 years. The issue, however, is that moral codes and constraints are not being put in place at this early stage, which could result in AI essentially running amok; if we do not think about the implications now, it could cause complications in the future.
The article also points out that it is unclear whether we should try to mimic human nature or start from scratch. This raises the idea of modelling how we perceive intelligence rather than how it forms naturally, meaning we could essentially create something intelligent without having to break down our own biological capacity for it. Stephen Hawking, among others, has warned about the potential implications of AI enhancement, with some even citing human extinction as a possible outcome.
So what do you think? Can you imagine an AI revolution taking down the puny human beings we all are? Or is it inconceivable to create an AI capable of surpassing ourselves? And if it were to happen, would we be exterminated, ignored or even just patronised by this superior intelligence? Please share this post, comment below or whatever - hope you enjoyed the read :)