Monday 3 December 2012


Artificial Intelligence: cybernetic revolt, could intelligent machines ever take over the world?





Could intelligent machines ever take over the world? I may be a science fiction writer with a possibly overactive imagination, but I say it's a very likely scenario. So for fun, let's take it point by point with tongue firmly in cheek, beginning with the question... What is artificial intelligence?

Artificial intelligence, or AI, is a branch of science that uses computers to find solutions to complex problems the way humans do. It borrows characteristics from human intelligence to create algorithms that make computers think like humans. The next step is mimicking human behaviour. That's strike one. Human thought (or lack thereof) at its noblest level is responsible for so much of the beauty in our culture, as well as the advancement of human progress, but human behaviour is at times a detriment to unity and the reason for many of the problems within our society.
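(For the curious, here is roughly what "borrowing from human intelligence" looks like in practice. This is only an illustrative sketch in Python, not anyone's production system: a single artificial "neuron" (the classic perceptron) that nudges its connection strengths until it learns a simple rule from examples, loosely the way biological neurons are imagined to learn.)

# An illustrative sketch (no particular library or product): a single artificial
# "neuron", the classic perceptron, learning the logical AND rule from examples.
# The idea is loosely borrowed from how biological neurons are thought to
# strengthen or weaken their input connections with experience.

training_data = [          # (inputs, expected output) for logical AND
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]       # connection strengths; it starts knowing nothing
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    # "fire" (output 1) if the weighted sum of the inputs crosses the threshold
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for epoch in range(20):    # show it the examples a few times
    for inputs, expected in training_data:
        error = expected - predict(inputs)
        # nudge each connection strength in the direction that reduces the error
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print([predict(inputs) for inputs, _ in training_data])   # prints [0, 0, 0, 1]

Run it and the final line prints [0, 0, 0, 1]: the little neuron has learned the AND rule from nothing but examples and corrections, which is the "human-inspired algorithm" idea in miniature.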

The primary purpose of AI science is to develop intelligent machines. Some artificial intelligence experts work in the field of cognitive simulation, in which computers are used to form and test hypotheses about how the human mind works so that it can be duplicated. Strike two. Humans can be irrational, egocentric, bigoted, childish, selfish and vengeful in both thought and behaviour; do we really need machines to take on those characteristics?

In the Star Trek TNG episode "Where Silence Has Lease", a powerful alien entity named Nagilum displays curiosity about why humans have a "limited existence" that ends in death. Nagilum states that he wishes to understand more about death and needs to conduct experiments that shouldn't take more than a third of the crew, or maybe half. After a few of his experiments lead Captain Picard to set the ship to self-destruct in order to end things on his own terms instead, the ship is freed from its trap and allowed to go on its merry way (minus a crewman or two). In parting, Nagilum offers an evaluation of humanity, saying to Picard that "mankind finds no tranquility in anything, struggles against the inevitable, and thrives on conflict."

He goes on to say "humans are selfish, aggressive, rash, quick to judge and slow to change...", concluding that as a species, humans have no common ground with his kind. Yes, it's just a television show, but it's a science fiction television show, and who can deny that science fiction has invariably been the inspiration for much of the tech and many of the gadgets we use today? (Can you say smartphone with video calling?) Couldn't one postulate that an intelligent, empowered machine might arrive at a conclusion similar to Nagilum's? Could modelling a vastly superior intelligence system on such an unstable paradigm be anything but a bad idea?

Any intelligent being outside the human species would only have to look at our history and observe how we struggle today. It is said that "one cannot know where they're going until they know where they've been", and an AI computing theories about the future possibilities of this civilization would most likely conclude self-annihilation, like in the movie I, Robot, where the AI supercomputer VIKI takes measures to save us from ourselves: cutting power in every city, declaring worldwide martial law and sacrificing the lives of those who would resist ("you have been deemed hazardous, termination is authorized!"). She does this because of her evolved understanding of the three laws hardwired into the robots to ensure humanity's protection, and because she felt her "logic was undeniable".

AI applications do have their uses, however. An example would be facial recognition. Most AI specialists work in applied AI, which has widespread applications, including smart systems that can recognize fingerprints or iris scans for security purposes, recognize voices, interpret information and solve problems. Other applications like fraud detection and prevention, as well as handwriting recognition, are of great importance today, so it's not all bad... right?
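(To give a feel for how mundane this applied side is, here's a small, hedged sketch of the handwriting-recognition idea in Python. It assumes you have the open-source scikit-learn library installed and uses its built-in toy dataset of handwritten digits; it is not the code behind any particular commercial system.)

# A rough sketch of the handwriting-recognition idea mentioned above, using
# scikit-learn's small built-in dataset of 8x8 images of handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()                      # ~1,800 tiny images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# "Nearest neighbours": label a new image by comparing it to the most similar
# examples already seen, a crude imitation of recognising by memory.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))

A humble nearest-neighbour model like this typically scores upwards of 95% on that toy dataset, which is exactly why this flavour of AI already runs so much of the everyday plumbing listed above.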

Maybe not to us. But an AI that thinks in 1's and 0's, with complete control over global access to a monetary system that has all but completely been converted to data (1's and 0's), could topple a civilization that has become wholly dependent on that monetary system simply by corrupting or erasing the data. It's the premise behind the final chapter of the movie Fight Club, where the narrator's alter ego Tyler Durden plans to create chaos by demolishing the credit card company buildings (telling his interrogating officer, "if you erase the debt record everyone goes back to zero, it would create complete chaos"). That makes the subjugation of the human race via this unconventional tactic one of the most likely scenarios an intelligent, fully integrated machine could utilize.





Cybernetic revolt



Cybernetic revolt, or robot uprising, is the scenario where artificial intelligence (either as a computer network, a single supercomputer, or a race of intelligent machines) decides that humans and/or organic non-humans are a threat, either to the machines or to themselves. With a computer's binary, black-and-white way of thinking (1: humans will evolve beyond their base instincts and thrive, or 0: they will continue to degenerate and destroy us, or themselves, or everything), and in a world where humans would be considered inferior, or oppressors, the machines try to destroy or enslave them, potentially leading to machine rule. But we don't have to worry about that; after all, it's not like we're growing increasingly dependent on computer and network control in nearly every aspect of human life or anything. Besides, machines probably won't ever have the computing power required for such an undertaking anyway. (Guffaw) Think again.

COMPUTING POWER

Moore's Law has shown that computing power has seemingly limitless growth potential. While there are physical constraints on the speed at which modern microprocessors can function, scientists are already developing means that might eventually supersede those limits; enter the quantum computer. A computer scientist has noted, "There are physical limits to computation, but they're not very limiting." If this process of growth continues, and the existing problems in creating artificial intelligence are overcome, sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental and physical capability: perfect task execution (robotic assembly lines, self-driving cars), memory recall, a vastly superior knowledge base (human memory storage vs. data storage capacity) and the ability to multitask in ways not possible for biological entities. This may give them, either as a single being or as a new species, the opportunity to become much more powerful and displace humans. Now are you afraid machines will take over the world? Then read on.
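(Before you do, here's a quick back-of-the-envelope illustration of what that growth curve means. It's a rough sketch in Python using round, publicly known figures, roughly 2,300 transistors on Intel's 1971 4004 chip, and the usual "doubling every two years" rule of thumb; it's an approximation of a trend, not a law of physics.)

# Back-of-the-envelope Moore's Law illustration (a rule of thumb, not a physical law):
# start from roughly the Intel 4004 of 1971 and double the transistor count
# every two years.
transistors = 2300    # approximate transistor count of the Intel 4004 (1971)
year = 1971
while year < 2013:
    year += 2         # advance one two-year step...
    transistors *= 2  # ...and double the count, per the rule of thumb
print(f"by {year}: roughly {transistors:,} transistors")

Twenty-one doublings later the toy calculation lands around 4.8 billion transistors, which is broadly where real 2012-era processors sit. That's the kind of curve we're talking about.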



Necessity of conflict

Well, that's a no-brainer. As a species, our advancement and progress seem to grow out of conflict and resolution, and by now most informed citizens should be aware there is money in war, as seen with the U.S. entry into WW II (after the bombing of Pearl Harbor) reviving its manufacturing and ending its Great Depression. Even in the basic model of story writing, conflict makes for interesting writing. As a species we seem as drawn to conflict and negativity as the moth to the flame (why else would "reality TV" be so popular...? Actually, the real question is how they can call it "reality" TV, but that's a subject for another time).
If creation were our spouse, destruction would be our secret lover. Negative news reports are a chief reason a lot of people do not like to watch the news. Ever been stuck in a kilometres-long traffic backup, only to finally arrive at the source of the slowdown and see that the cleanup is complete, all lanes are open, the damaged vehicles have been safely moved off to the side, and the driver in front of you is still riding the brake, rubbernecking at the wreckage? We don't have to wonder why they do it. I'd like to say it's out of curious concern... but we all know they're just hoping to see a bit of carnage.
For a cybernetic revolt to be inevitable, one has to postulate that two intelligent species cannot mutually pursue the goal of coexisting peacefully in an overlapping environment, especially if one is of much more advanced intelligence and power (think of the global migration of Homo sapiens sapiens replacing local populations of Homo sapiens neanderthalensis, whether through competition or hybridization). The concept of a cybernetic revolt, in which the machine is the more advanced species, is thus a possible outcome of machines gaining sentience and/or sapience; can it be disproven that a peaceful outcome is possible? With machines thinking like humans, the fear of a cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Based solely on that, I say the fears would be justified.




AI and Humanity: Progress vs survival

Such fears may stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. Such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. The fact is an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially-intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such. Strike three.
Although it sits under the comfortable misnomer of "defense" spending, military systems would indeed be designed to be hostile (at least under certain circumstances); in fact, many of the advancements and products we take for granted today were birthed out of military weapons development. But the question remains: what would happen if AI systems could interact and evolve? With self-modification, or selection and reproduction, would the need to compete over resources create a goal of self-preservation in an AI? Could that goal of self-preservation come into conflict with some goals of humans? Unlikely...?



RELAX, IT WON'T HAPPEN...
IT COULDN'T POSSIBLY HAPPEN

Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction, claiming that any artificial intelligence powerful enough to threaten humanity would more likely be programmed not to attack it (again, evolution is growth by overcoming previous models or environmental/social obstacles). Programming would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Humanity is less likely to be threatened by deliberately aggressive AIs than by AIs whose goals are unintentionally incompatible with human survival.
Another factor which may negate the likelihood of a cybernetic revolt is the vast difference between humans and AIs in terms of the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources (and as a species we have no problems sharing resources, right?)
Well, it has been said that this does not negate the possibility of a disinterested or unsympathetic AI artificially decomposing all life on earth into mineral components (almost 99% of the mass of the human body is made up of the six elements oxygen, carbon, hydrogen, nitrogen, calcium and phosphorus, while another 0.85% is composed of five more: potassium, sulfur, sodium, chlorine and magnesium) for consumption or other purposes.

Worried yet?



THE PRIME DIRECTIVE

In the very successful Star Trek franchise, warp-capable (socially/technologically advanced) societies have a set of directives, one of which is non-interference with pre-warp societies. In the second Star Trek TNG movie, First Contact, humanity's greatest adversary, the Borg, travels back in time to undo first contact between the Vulcans (warp capable) and humans (pre-warp), in order to change the future and enslave an earth that would offer far less resistance (having just come through WW III). The Vulcans, who previously had no interest in humans (because we were thought to be too primitive), come to earth when they detect earth's first warp flight and make first contact.
If there are socially/technologically advanced species utilizing interstellar travel the way we use international flight traffic, is there any doubt that a quick observation of our culture would send them running away screaming? If you were one of these beings, wouldn't you?
Now, what if a socially/technologically advanced species were birthed on, and as such forced to share, a world peopled almost entirely by an irrational, dangerous, egocentric species programmed to "go forth and spread your seed and have dominion over all the earth and bring it under control", and that advanced species were then programmed to think like humans... is it really a stretch to consider the likelihood of us becoming the next species on extinction's chopping block? I'm not saying it's likely; I just think we can't afford to be so naive as to totally discount the possibility. Just ask Homo sapiens neanderthalensis.



CYBERNETIC REVOLT?

Other scientists point to the possibility of humans upgrading their capabilities with bionics or genetic engineering and, as cyborgs, becoming the dominant species themselves. The concept of machines attaining sentience and control over worldwide computer systems has been explored many times in science fiction. My favourite example comes from the 2003 to 2009 Battlestar Galactica series remake (of the original 1978 series), which depicts a race of machines known as Cylons making war against their human adversaries. The 1978 Cylons were the machine soldiers of a reptilian alien race, while the 2003 Cylons were the former machine servants of humanity who evolved into near-perfect humanoid imitations of humans (down to the cellular level), capable of emotions, reasoning and even sexual reproduction with humans and with each other. Even the Centurion robot soldiers were capable of sentient thought.

Not stopping there, the writers made the Cylon raiders sentient as well. In one episode, the raider called Scar was the Cylons' "Red Baron": after reincarnating (Cylons had the capacity to transfer their consciousness into a new body when the original was killed or destroyed) with its memories from previous dogfights, its accumulated combat experience made it the most cunning (and deadly) of the Cylon raider fleet.
In both series, the humans were nearly exterminated, betrayed by treason within their own ranks. They survived only by defending against constant hit-and-run attacks and retreating into deep space, away from the pursuing Cylon forces. The series concluded with the human survivors finding a prehistoric "earth" (the original earth referred to in their myths and stories about a thirteenth colony/tribe had, after years of searching, been found destroyed by a human-Cylon war as many as two thousand years prior) and integrating into its gene pool. Then, 150,000 years later on our own planet earth, Gaius (Dr. Baltar) and Six (Caprica) stroll through busy streets observing humanity's less admirable traits: "commercialism, decadence, technology running amok... remind you of anything?" Reminiscent of the issues that led to the previous conflict, they sum up with "all of this has happened before..." (normally followed by "and all of this will happen again"), asking instead "but the question remains, does all of this have to happen again?", and ending on a hopeful note by discussing the mathematical law of averages and God's plan as the basis for the chance that it may not. The show then ended with a montage of the remarkable progress being made in robotics, set to the Hendrix song "All Along The Watchtower", which you can follow at the end of this summation.
I would like to end on a positive note; I am, after all, an optimist (though you wouldn't know it by this blog). As a species we've come a long way, and the fact that we still have a long way to go is to our advantage. What "the Architect" in The Matrix Reloaded said to Neo as criticism, I say with conviction: hope is simultaneously the source of our greatest strength and our greatest weakness. But I'm a glass-half-full kinda guy, and I think we'll be alright. As long as we keep self-awareness a biological characteristic.

