Can computers think?

Can computers think? It seems like a simple question to answer: obviously they cannot, right? The question I pose in response is this: can humans think? That answer seems obvious as well; humans are conscious and make decisions, therefore humans think. Right?

Where do humans become special and distinct from the capabilities of computers? Human brains and computers have been compared to each other, for many purposes, since the beginning of cognitive science. Early on, the comparison was drawn in order to build a more human computer; now it is more often used to simplify our understanding of the brain, with the computer serving as the analogy. The real question, though, is how far off the analogy is. Are we discussing things that are truly dissimilar, or are we discussing different implementations (organic vs. electronic) of essentially the same system?

Considered at the most basic level, computers and the human brain have the same essential structures. An individual neuron is no more sophisticated or interesting than a processor core or a decision-making circuit. There is no component of the human brain that is uniquely human. For the purposes of my analysis, I will use the following criteria, which are commonly considered the “special” components of human cognition: self-awareness (often a qualifier for sentience), emotion, adaptation, intentionality, free will/sapience, and creativity.

  • Self-awareness: Humans are self-aware because we have the capacity to recognize that we exist and that the actions we take have an impact on our environment, or, as a more common test, because we recognize that what appears in a mirror is ourselves. Many robots have been programmed to pass this very test, and they can recognize themselves with a high degree of accuracy. More traditional computer-based intelligences can likely tell you all about themselves.
  • Emotion: Emotion is a particularly interesting concept for artificial intelligence because it is difficult to prove that it is real. Humans have universal emotional states that are somewhat determined by biology, but only the most basic ones. This “innate” emotional apparatus does not specify which stimuli trigger those responses, apart from basic physical stimuli. Physical stimuli are unavoidable because they are part of the design of the system; there is nothing special or magical about experiencing pain, since a sequence of electrical impulses and chemical reactions generates that response and labels it as pain. These stimuli are part of the system itself, and therefore not a uniquely human property. It is possible to give a computer a simulated skin so that, when the right pressure inputs occur or links are severed, the computer experiences a sensation it can call “pain” (a minimal sketch of this idea follows the list below). Other emotions, such as happiness, sadness, closeness, and loss, exist as social constructions. If we had not learned socially that a man being kicked in the testicles is humorous to watch, we would not laugh; the man being kicked, on the other hand, responds to the event as pain because physical signals put his body into that state of alert. Every experience of emotion arises either from something socially learned and conditioned or from a physical stimulus, and both can be trained into a computer through programming. The problem, at this point, is that no electronic system has enough sensors or enough algorithms to approximate the entirety of human experience.
  • Adaptation: Humans adapt, and they do so because of certain aspects of biology. Adaptation is partly taught through example and partly a product of evolution: if you cannot adapt, you cannot survive, as simple as that. At this stage in computing, computers are like children; they run into obstacles and need assistance from a human. Ironically, it is mostly obstacles resulting from human involvement that limit computers’ ability to function by themselves: programmers make mistakes, and users do things that are invalid. A cleanly written piece of software, trained in adaptation through complex problem-solving algorithms, could adapt as well as or better than a human. The only problem is that first we need a better human. It all comes down to problem schemas and how easily a situation can be templated.
  • Intentionality: Intention can be related to purpose, and computers are perhaps better at this than humans. Computers are task-oriented and always moving toward a goal, whether it is a calculation or simply interpreting data. Computers need humans as the reason for their intentionality, which is in itself a limitation. Why do you do what you do? What drives you? The flaw in computers is that they lack a sense of accomplishment or any kind of self-motivation; it is all a matter of programming. Then again, the modern human, stripped of the context of other humans, would likely wander aimlessly looking for something to do.
  • Free Will: The ability to decide what to do, and when, is an essential part of being human. Do computers have it? Not really; computers do what their programming prescribes. Then again, humans are controlled by a kind of programming as well, mostly their priorities. There is probably a question of determinism lurking here, but that is a whole field of philosophy by itself. I think it is possible to give a computer free will, and perhaps we already have something like it in computers that make decisions intelligently, but even that is still in the service of humans rather than the computer’s own interests.
  • Creativity: What is creativity? This is perhaps the hardest of these aspects to work toward, because of its abstract nature and our lack of understanding of the concept. Creativity most often occurs in the course of creating something: ideas come from previous concepts and from inspirations found elsewhere. Does anything truly original ever exist, though, or are we all imitating each other, and imitating the nature around us?
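
To make the point from the emotion item concrete, here is a minimal sketch (in Python) of the “simulated skin” idea: a program that maps raw pressure readings, or a severed link, to a state it labels “pain”. Everything in it, the sensor values, the threshold, the function name, is hypothetical; it illustrates the argument rather than any real robotics interface.

```python
# A toy sketch of the "pain is just a signal" argument from the emotion
# item above. The sensor values, threshold, and labels are all hypothetical;
# this illustrates the idea, not a real robotics API.

PAIN_THRESHOLD = 0.8  # arbitrary: pressure (0.0-1.0) above this is "painful"

def feel(pressure: float, link_ok: bool = True) -> str:
    """Map raw stimuli to a named internal state, the way biology maps
    electrical and chemical signals to the experience of pain."""
    if not link_ok:
        return "pain"       # a severed link is reported as pain
    if pressure > PAIN_THRESHOLD:
        return "pain"       # strong pressure crosses the threshold
    if pressure > 0.0:
        return "touch"      # mild pressure is merely a sensation
    return "neutral"        # no stimulus, no state change

if __name__ == "__main__":
    for p in (0.0, 0.3, 0.95):
        print(f"pressure={p:.2f} -> {feel(p)}")
    print(f"severed link -> {feel(0.0, link_ok=False)}")
```

The sketch makes the essay’s point in miniature: the label “pain” is nothing more than a rule applied to a signal, whether that signal travels through nerves or wires.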

The limitation of the computer as an intelligence, at this point, is that most research is focused on building machines that solve problems or handle specific tasks that humans do. There is very little work toward an artificial intelligence that is only an intelligence, not also an attempt to solve something else. There is no AI for the sake of a new intelligence.

I would like to thank my friend Oscar for asking this question, which in turn allowed me to finally articulate an inquiry I had recently been carrying on internally about the nature of computer consciousness.