Artificial Intelligence


I.
INTRODUCTION

Artificial Intelligence (AI), the study and engineering of intelligent machines capable of performing the same kinds of functions that characterize human thought. The concept of AI dates from ancient times, but the advent of digital computers in the 20th century brought AI into the realm of possibility. AI was conceived as a field of computer science in the mid-1950s. The term AI has been applied to computer programs and systems capable of performing tasks more complex than straightforward programming, although still far from the realm of actual thought. While the nature of intelligence remains elusive, AI capabilities currently have far-reaching applications in such areas as information processing, computer gaming, national security, electronic commerce, and diagnostic systems.

II.
DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

In 1956 American social scientist and Nobel laureate Herbert Simon and American physicist and computer scientist Allen Newell at Carnegie Mellon University in Pennsylvania devised a program called Logic Theorist that simulated human thinking on computers. The first AI conference occurred at Dartmouth College in New Hampshire in 1956. This conference inspired researchers to undertake projects that emulated human behavior in the areas of reasoning, language comprehension, and communications. In addition to Newell and Simon, computer scientists and mathematicians Claude Shannon, Marvin Minsky, and John McCarthy laid the groundwork for creating “thinking” machines from computers.
The search for AI has taken two major directions: psychological and physiological research into the nature of human thought, and the technological development of increasingly sophisticated computing systems. Some AI developers are primarily interested in learning more about the workings of the human brain and thus attempt to mimic its methods and processes. Other developers are more interested in making computers perform a specific task, which may involve computing methods well beyond the capabilities of the human brain.
Contemporary fields of interest resulting from early AI research include expert systems, cellular automata (treating pieces of data like biological cells), and artificial life (see Automata Theory). The search for AI goes well beyond computer science and involves cross-disciplinary studies in such areas as cognitive psychology, neuroscience, linguistics, cybernetics, information theory, and mechanical engineering, among many others. The search for AI has led to advancements in those fields, as well.

III.
USES AND CHALLENGES OF ARTIFICIAL INTELLIGENCE


Kasparov v. Deep Blue

In 1997 Russian chess grandmaster Garry Kasparov lost a highly publicized six-game match to an IBM computer named Deep Blue. The computer used artificial intelligence to evaluate some 200 million chess positions per second in developing its strategy. This was the first time a reigning world champion had lost a match to a computer under standard tournament conditions, suggesting to some observers that advances in artificial intelligence may be surpassing human capacity in some areas.

AI programs have a broad array of applications. They are used by financial institutions, scientists, psychologists, medical practitioners, design engineers, planning authorities, and security services, to name just a few. AI techniques are also applied in systems used to browse the Internet.
AI programs tend to be highly specialized for a specific task. They can play games, predict stock values, interpret photographs, diagnose diseases, plan travel itineraries, translate languages, take dictation, draw analogies, help design complex machinery, teach logic, make jokes, compose music, create drawings, and learn to do tasks better. AI programs perform some of these tasks well. In a famous example, a supercomputer called Deep Blue beat world chess champion Garry Kasparov in 1997. In developing its strategy, Deep Blue utilized parallel processing (interlinked and concurrent computer operations) to evaluate some 200 million chess positions per second. AI programs can outperform human analysts at some stock-forecasting tasks and are used to support long-term business planning. AI programs are used in electronic commerce to detect possible fraud, using complex learning algorithms, and are relied upon to authorize billions of financial transactions daily. AI programs can also mimic creative human behavior. For example, AI-generated music can sound like compositions by famous composers.
Some of the most widely used AI applications involve information processing and pattern recognition. For example, one AI method now widely used is “data mining,” which can find interesting patterns in extremely large databases. Data mining is an application of machine learning, in which specialized algorithms enable computers to “learn.” Other applications include information filtering systems that discover user interests in an online environment. However, it remains unknown whether computer programs could ever learn to solve problems on their own, rather than simply following what they are programmed to do.
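The “learning” step in such filtering systems can be illustrated with a very small sketch. The hypothetical code below (not drawn from any particular product) builds a word-frequency profile from articles a user has marked as interesting and then scores unseen articles against it; real data-mining and filtering systems rely on far more sophisticated statistical models.

```python
# Minimal sketch of a learning-based information filter: it "learns" a
# user's interests by counting words in articles the user has liked,
# then scores unseen articles against that profile.
from collections import Counter

def learn_interests(liked_articles):
    """Build a word-frequency profile from articles the user liked."""
    profile = Counter()
    for text in liked_articles:
        profile.update(text.lower().split())
    return profile

def score_article(profile, text):
    """Score an unseen article by how strongly its words match the profile."""
    words = text.lower().split()
    return sum(profile[w] for w in words) / max(len(words), 1)

liked = ["chess engines and search algorithms",
         "machine learning for pattern recognition"]
profile = learn_interests(liked)
print(score_article(profile, "new results in chess search"))   # higher score
print(score_article(profile, "recipes for vegetable soup"))    # lower score
```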

WABOT-2 and Inventor

An inventor plays a duet with WABOT-2, developed in the 1980s in Japan as one of the world’s first “personal” robots. It represented a milestone in robotics as one of the earliest examples of a robot using artificial intelligence (AI) programming. The programming enabled the robot to play a musical keyboard with its human-like hands, read sheet music with its electronic eye, and even hold a rudimentary conversation with people.

In some narrow domains, AI programs can make medical diagnoses as well as, or better than, most human doctors. Programs have been developed that analyze a patient’s symptoms, medical history, and laboratory test results, and then suggest a diagnosis to the physician. Such diagnostic programs are examples of expert systems, which are designed to perform tasks in specialized areas as a human expert would. Expert systems take computers a step beyond straightforward programming: they are based on a technique called rule-based inference, in which preestablished rules are used to process the data. Despite their sophistication, expert systems still do not approach the complexity of true intelligent thought.
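As a rough illustration of rule-based inference, the toy sketch below encodes a handful of IF-THEN rules and reports every conclusion whose conditions are satisfied. The rules and symptom names are invented for illustration; a real expert system would rely on a large, expert-curated rule base and far more elaborate inference.

```python
# Toy illustration of rule-based inference in an expert system.
# Each rule: IF all the listed findings are present THEN suggest the conclusion.
RULES = [
    ({"fever", "cough", "fatigue"}, "possible influenza"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"fever", "stiff neck"}, "refer for urgent evaluation"),
]

def suggest(findings):
    """Return every conclusion whose IF-part is satisfied by the findings."""
    findings = set(findings)
    return [conclusion for condition, conclusion in RULES
            if condition <= findings]

print(suggest(["fever", "cough", "fatigue", "headache"]))
# ['possible influenza']
```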
Despite considerable successes, AI programs still have many limitations, which are especially obvious in language translation and speech recognition. Their translations are imperfect, although good enough to be understood, and their dictation is reliable only if the vocabulary is predictable and the speech unusually clear. Research has shown that whereas the logic of language structure (syntax) submits to programming, the problem of meaning (semantics) lies far deeper, in the direction of true AI (or “strong” AI, in the parlance of developers). Developing natural-language capabilities in AI systems is an important focus of AI research. It involves programming computers to understand written or spoken information and to produce summaries, answer specific questions, or redistribute information to users interested in specific areas. Essential to such programs is the ability of the system to generate grammatically correct sentences and to establish linkages between words, ideas, and associations with other ideas. “Chatterbot” programs, although far from natural conversationalists, are a step in that direction. They attempt to simulate intelligent conversation by scanning input keywords and returning pre-prepared responses from a database.
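A minimal sketch of that keyword-scanning approach might look like the following; the keyword table and canned replies are invented for illustration.

```python
# Minimal sketch of a keyword-scanning chatterbot: it returns the first
# canned reply whose keyword appears in the user's input.
RESPONSES = {
    "hello":   "Hello! What would you like to talk about?",
    "weather": "I hear the weather is always a safe topic.",
    "music":   "I enjoy music too. Who is your favorite composer?",
}
DEFAULT = "Tell me more."

def reply(user_input):
    """Return the first canned response whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT

print(reply("Do you like music?"))   # -> "I enjoy music too. ..."
print(reply("What is truth?"))       # -> "Tell me more."
```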
Much work in AI models intellectual tasks, as opposed to the sensory, motor, and adaptive abilities possessed by all mammals. However, an important branch of AI research involves the development of robots, with the goal of creating machines that can perceive and interact with their surroundings. WABOT-2, a robot developed by Waseda University in Japan in the 1980s, utilized AI programs to play a keyboard instrument, read sheet music, and converse rudimentarily with people. It was a milestone in the development of “personal” robots, which are expected to be anthropomorphous—that is, to emulate human attributes. AI robots are being developed as personal assistants for hospitalized patients and disabled persons, among other purposes. Natural-language capabilities are integral to these efforts. In addition, scientists with the National Aeronautics and Space Administration (NASA) are developing robust AI programs designed to enable the next generation of Mars rovers to make decisions for themselves, rather than relying on (and waiting for) detailed instructions from teams of human controllers on Earth.
To match everything that people can do, AI systems would need to model the richness and subtlety of human memory and common sense. Many of the mechanisms behind human intelligence are still poorly understood, and computer programs can simulate the complex processes of human thought and cognition only to a limited extent. Even so, an AI system does not necessarily need to mimic human thought to achieve an intelligent answer or result, such as a winning chess move, as it may rely on its own “superhuman” computing power.

IV.
TYPES OF ARTIFICIAL INTELLIGENCE

Work in AI has primarily focused on two broad areas: developing logic-based systems that perform common-sense and expert reasoning, and using cognitive and biological models to simulate and explain the information-processing capabilities of the human brain. In general, work in AI can be categorized within three research and development types: symbolic, connectionist, and evolutionary. Each has characteristic strengths and weaknesses.

A.
Symbolic AI

Symbolic AI is based in logic. It uses sequences of rules to tell the computer what to do next. Expert systems consist of many so-called IF-THEN rules: IF this is the case, THEN do that. Since both sides of the rule can be defined in complex ways, rule-based programs can be very powerful. The performance of a logic-based program need not appear “logical,” as some rules may cause it to take apparently irrational actions. “Illogical” AI programs are not used for practical problem-solving, but are useful in modeling how humans think. Symbolic programs are good at dealing with set problems, and at representing hierarchies (in grammar, for example, or planning). But they are inflexible: If part of the expected input data is missing or mistaken, they may give a bad answer, or no answer at all.
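The IF-THEN style can be shown in miniature with a simple forward-chaining loop, which keeps firing any rule whose IF-part is satisfied until no new conclusions appear. The facts and rules below are invented for illustration; they are not drawn from any particular system.

```python
# Minimal forward-chaining sketch of a symbolic, rule-based program.
RULES = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "ground_icy"),
    ({"ground_icy"}, "drive_slowly"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose IF-part holds, until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"raining", "freezing"}))
# {'raining', 'freezing', 'ground_wet', 'ground_icy', 'drive_slowly'}
```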

B.
Connectionist AI


Artificial Neural Network

The neural networks that are increasingly being used in computing mimic those found in the nervous systems of vertebrates. The main characteristic of a biological neural network, top, is that each neuron, or nerve cell, receives signals from many other neurons through its branching dendrites. The neuron produces an output signal that depends on the values of all the input signals and passes this output on to many other neurons along a branching fiber called an axon. In an artificial neural network, bottom, input signals, such as signals from a television camera’s image, fall on a layer of input nodes, or computing units. Each of these nodes is linked to several other “hidden” nodes between the input and output nodes of the network. There may be several layers of hidden nodes, though for simplicity only one is shown here. Each hidden node performs a calculation on the signals reaching it and sends a corresponding output signal to other nodes. The final output is a highly processed version of the input.

Connectionism is inspired by the human brain. It is closely related to computational neuroscience, which models actual brain cells and neural circuits. Connectionist AI uses artificial neural networks made of many units working in parallel. Each unit is connected to its neighbors by links that can raise or lower the likelihood that the neighbor unit will “fire” (excitatory and inhibitory connections, respectively). Neural networks that are able to learn do so by changing the strengths of these links, depending on past experience. These simple units are much less complex than real neurons. Each can do only one thing, such as report a tiny vertical line at a particular place in an image. What matters is not what any individual unit is doing, but the overall activity pattern of the whole network.
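The idea of learning by adjusting link strengths can be shown in its simplest possible form: a single artificial unit whose weights are nudged after each example until it reproduces a simple input-output pattern. The training data and learning rate below are chosen purely for illustration; practical networks use many units, many layers, and more sophisticated learning rules.

```python
# Minimal sketch of connectionist learning: one artificial unit adjusts
# the strengths (weights) of its input links from experience.

def fire(weights, bias, inputs):
    """The unit 'fires' (outputs 1) if the weighted sum of its inputs exceeds 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training examples: (inputs, desired output) -- a simple OR-like pattern.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                      # repeat over past experience
    for inputs, target in examples:
        error = target - fire(weights, bias, inputs)
        # Strengthen or weaken each link in proportion to the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([fire(weights, bias, inp) for inp, _ in examples])   # [0, 1, 1, 1]
```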
Consequently, connectionist systems are more flexible than symbolic AI programs. Even if the input data is faulty, the network may give the right answer. They are therefore good at pattern recognition, where the input patterns within a certain class need not be identical. But connectionism is weak at doing logic, following action sequences, or representing hierarchies of goals. What symbolic AI does well, connectionism does badly, and vice versa. Hybrid systems combine the two, switching between them as appropriate. And work on recurrent neural networks, where the output of one layer of units is fed back as input to some previous layer, aims to enable connectionist systems to deal with sequential action and hierarchy. The emerging field of connectomics could help researchers decode the brain’s approach to information processing. See Neurophysiology; Nervous System.

C.
Evolutionary AI

Evolutionary AI draws on biology. Its programs make random changes in their own rules and select the best daughter programs to breed the next generation. This method develops problem-solving programs and can evolve the “brains” and “eyes” of robots. A practical application of evolutionary AI would be a computer model of the long-term growth of a business in which the evolution of the business is set within a simulated marketplace. Evolutionary AI is often used in modeling artificial life (commonly known as A-Life), a spin-off from AI. One focus of artificial-life research is self-organization, that is, how order arises from something less ordered. Biological examples include the flocking patterns of birds and the development of embryos. Technological examples include the flocking algorithms used for computer animation.
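The core loop of “mutate at random and keep the fittest daughters” can be sketched in a few lines. The toy goal below (evolving a bit-string of all 1s), the mutation rate, and the population sizes are chosen purely for illustration.

```python
# Minimal sketch of evolutionary AI: candidates are mutated at random and
# the fittest "daughters" breed the next generation.
import random

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(candidate):
    """Score a candidate: here, simply the number of 1 bits."""
    return sum(candidate)

def mutate(parent):
    """Copy the parent, flipping each bit with small probability."""
    return [1 - bit if random.random() < 0.05 else bit for bit in parent]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Select the fittest half and let each survivor produce two mutated offspring.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(parent) for parent in survivors for _ in range(2)]

best = max(population, key=fitness)
print(fitness(best), "of", LENGTH)   # typically close to 20
```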

V.
PHILOSOPHICAL DEBATES OVER ARTIFICIAL INTELLIGENCE


Alan Turing

Considered a pioneer in the field of electronic computers, Alan Turing envisioned a device that could, in theory, perform any calculation. Referred to as the Turing Machine, it was designed to “read” commands and data from a long piece of tape, using a table to determine the order in which the required operations would be carried out. In the related field of artificial intelligence, he originated the “Turing test,” a process designed to determine whether a computer can “think” like a human.
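The tape-and-table idea can be sketched in a few lines of code. The transition table below, a trivial machine that flips every bit of its input and halts at the first blank, is invented for illustration; Turing’s insight was that a single such mechanism, given the right table, can carry out any computation.

```python
# Minimal sketch of a Turing machine: the machine reads one tape cell at a
# time, and a transition table says what to write, which way to move the
# head, and which state to enter next.

# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape, state="flip", head=0):
    tape = list(tape) + ["_"]          # a blank marks the end of the input
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("010011"))   # -> 101100
```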

People often ask if artificial intelligence is possible, but the question is ambiguous. Certainly, AI programs can produce results that resemble human behavior. Some things that most people once assumed computers could never do are now possible due to AI research. For example, AI programs can compose aesthetically appealing music, draw attractive pictures, and even play the piano “expressively.” Other things are more elusive, such as producing perfect translations of a wide range of texts; making fundamental, yet aesthetically acceptable, transformations of musical style; or producing robots that can interact meaningfully with their surroundings. It is controversial whether these things are merely very difficult in practice, or impossible in principle.
The larger question of whether any program or robot could really be intelligent, no matter how humanlike its performance, involves highly controversial issues in the philosophy of mind, including the importance of embodiment and the nature of intentionality and consciousness. Some philosophers and AI researchers argue that intelligence can arise only in bodily creatures sensing and acting in the real world. If this is correct, then robotics is essential to the attempt to construct truly intelligent artifacts. If not, then a mere AI program might be intelligent.
British mathematician and computer scientist Alan Turing proposed what is now called the Turing Test as a way of deciding whether a machine is intelligent. He imagined a person and a computer hidden behind a screen, communicating by electronic means. If we cannot tell which one is the human, we have no reason to deny that the machine is thinking. That is, a purely behavioral test is adequate for identifying intelligence (and consciousness).
American philosopher John Searle has expressed a different view. He admits that a program might produce replies identical to those of a person, and that a programmed robot might behave exactly like a human. But he argues that a program cannot understand anything it says. It is not actually saying or asserting anything at all, but merely outputting meaningless symbols that it has manipulated according to purely formal rules—in other words, all syntax and no semantics. Searle asserts that human brains can ascribe meaning to symbols, thus deriving understanding, whereas metal and silicon cannot. No consensus exists in either AI or philosophy as to whose theory, Turing’s or Searle’s, is right.
Whether an AI system could be conscious is an especially controversial topic. The concept of consciousness itself is ill-understood, both scientifically and philosophically. Some would argue that any robot, no matter how superficially humanlike, would never possess the consciousness or sentience of a living being. But others would argue that a robot whose functions matched the relevant functions of the brain (whatever those may be) would inevitably be conscious. The answer has moral implications: If an AI system were conscious, it would arguably be wrong to “kill” it, or even to use it as a “slave.” See also States of Consciousness.

VI.
THE FUTURE OF ARTIFICIAL INTELLIGENCE


Humanoid Robot ASIMO Walks Down Stairs

ASIMO is a humanoid robot designed by Japanese engineers at the Honda Motor Company. The 4-foot-tall robot first appeared in public in 2000. It is capable of walking and running like a human, and can climb stairs and reach for objects. The name ASIMO stands for Advanced Step in Innovative Mobility. The name may also honor the science fiction writer Isaac Asimov, who wrote stories about intelligent robots.

Building intelligent systems—and ultimately, automating intelligence—remains a daunting task, and one that may take decades to fully realize. AI research is currently focused on addressing existing shortcomings, such as the ability of AI systems to converse in natural language and to perceive and respond to their environment. However, the search for AI has grown into a field with far-reaching applications, many of which are considered indispensable and are already taken for granted. Nearly all industrial, governmental, and consumer applications are likely to utilize AI capabilities in the future.
