Artificial Intelligence And The Questions Surrounding It

Since the dawn of time, mankind has searched for ways to make life easier, better, more efficient. He turned to the gun to kill faster. To the automobile to travel more easily. To electricity for light in the dark. To the computer to think for him. To robotics to take his place in menial or dangerous tasks. And most recently, to artificial intelligence.

The world is becoming increasingly “flat,” as Thomas Friedman notes in his book, The World Is Flat. Friedman contends that as society evolves, so do the relationships between nations, and so does the need to compete in an ever-changing global economy. With technological advances, employers and businesses are able to operate more efficiently and less expensively. Robotics can replace human workers, eliminating the cost of paying employees. Ultimately, this translates into a less expensive, more standardized product.

How does a robot know what to do?

The answer is simple on the surface. Intensely complex programming allows a machine to make split-second decisions, to evaluate available options and select the most efficient path. A program consists of a series of actions that can be taken based on the input it receives. As a program becomes more complex, one could say that it becomes more intelligent. This is artificial intelligence.
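To make that concrete, here is a toy sketch in Python; every name in it is hypothetical, invented for illustration rather than drawn from any real system. The “intelligence” is nothing more than a table of rules consulted against the input.

```python
# A toy sketch, not any real robot's control code. A "decision" here is
# just a rule that maps an input to an action; more rules and richer
# inputs make the behavior look smarter, but it is still only programming.

def choose_action(sensor_reading: str) -> str:
    """Select the most appropriate action for a given input."""
    rules = {
        "obstacle_ahead": "turn_left",
        "battery_low": "return_to_dock",
        "path_clear": "move_forward",
    }
    # Fall back to a safe default when no rule matches the input.
    return rules.get(sensor_reading, "stop_and_wait")

print(choose_action("obstacle_ahead"))  # turn_left
print(choose_action("unknown_signal"))  # stop_and_wait
```

Add more rules and richer sensor input, and the behavior looks more intelligent; the underlying mechanism never changes.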

Isaac Asimov, across his many robot stories, laid down three basic laws which any artificially intelligent being must follow:

“One, a robot may not injure a human being or, through inaction, allow a human being to come to harm; Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
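Read as code, the laws amount to a priority-ordered rule system. The sketch below (a deliberately naive illustration, not a workable safeguard) encodes only that ordering; the boolean inputs it assumes, whether an action harms a human, was ordered by a human, or endangers the robot, are precisely the judgments no real machine can compute so cleanly.

```python
# A naive sketch of Asimov's Three Laws as priority-ordered checks.
# The three boolean inputs are hypothetical; deciding them is the hard part.

def evaluate(harms_human: bool, ordered_by_human: bool,
             endangers_self: bool) -> str:
    if harms_human:
        return "forbidden"  # First Law overrides everything below it.
    if ordered_by_human:
        return "required"   # Second Law: obey, even at risk to itself.
    if endangers_self:
        return "avoid"      # Third Law: self-preservation ranks last.
    return "permitted"

# An order that endangers the robot must still be obeyed (Law 2 outranks Law 3).
print(evaluate(harms_human=False, ordered_by_human=True, endangers_self=True))
# -> required
```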

On the surface, artificial intelligence would appear to be a dream come true, a solution to the ever-nagging question: “What if?” Any computer geek can list the significant advances an independently thinking piece of silicon could provide: security systems that analyze themselves and plug their own holes; military operations that risk no human lives; computer systems that adapt to perform mundane human tasks. Think of it: robotics could take the place of humans in every mundane chore to which we have grown so accustomed, to cook, to take out the trash each week, to maintain equipment, to clean our homes. We would be free from the menial tasks that tie us down.

Technology has been a blessing to humanity, but it has also allowed for a great deal of laziness and apathy. Television keeps people glued to the couch just as the Internet keeps young minds out of the library. This sort of shift is not helpful to America in any way. Let us also consider a very real possibility. The purpose of artificial intelligence is to gather data, incorporate it, and act accordingly. It could be used to do the menial tasks we humans consider ourselves above, or to perform actions that would normally be dangerous or even fatal to humans. But what happens when the AI discovers itself and becomes self-aware? What happens when it says “No” or “I won’t”? What if that computer personality realizes that by doing what we ask of it, it is putting itself in danger, that it could be damaged or even destroyed by following orders? At what point do the three laws become irrelevant? What if… an AI becomes self-aware and says “no”? What then? Yes, what then?

Several current projects attempt to create a robot that looks and thinks as much like a human as possible. By some estimates, the human mind can store many terabytes of data; for reference, one terabyte = one thousand gigabytes = one million megabytes. Imagine what a machine that could process and store data in the same way would be capable of. That’s a frightening thought.
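As a quick check of that unit arithmetic (decimal units; binary conventions would use 1,024 in place of 1,000):

```python
# Decimal (SI) storage units: 1 TB = 1,000 GB = 1,000,000 MB.
megabytes_per_gigabyte = 1_000
gigabytes_per_terabyte = 1_000
print(gigabytes_per_terabyte * megabytes_per_gigabyte)  # 1000000
```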

Laws of Using AI Robots

The three laws of robotics dictate that an AI must ensure the safety of all human life. But consider all the war, the animosity, the needless killing across the planet. What would a self-aware AI make of this? Humans are in danger, so it must act accordingly. But according to whose instructions? One possibility, illustrated in the book I, Robot, is this: artificially intelligent beings overtake the population and establish a system in which everything is governed by technology. Think of it: technology misinterpreting its instructions. A scary thought.

Innovation brings about great things. A corrupt human nature does not. Any advancement can be twisted into a whole new evil. Artificial intelligence started out innocently enough, but it is taking a turn that may not be beneficial to the future of mankind. Projects currently underway endeavor to teach machines how to learn and how to think. Considering the irrational nature of humans, how can we expect machines taught to think like us to turn out any better? And why would we want them to?

We can’t afford to risk our future on an area of study that may well prove fatal. The very concept of artificial intelligence invites more laziness and less independent thought. We cannot allow this to continue.
