The development of artificial intelligence (AI) is only a small aspect of the computer revolution, yet through AI we, as humans, are able to improve our quality of life. For example, AI can be used to monitor power production plants or to make machines of all kinds more understandable and more fully under human control. Even with all its ability, however, it is unlikely that an artificial intelligence system will ever be able to replace the human mind.
A standard definition of artificial intelligence is that it is simply the effort to produce on computers forms of behavior that, if they were performed by human beings, we would regard as intelligent. Even within this definition, however, there is a variety of claims and of ways of interpreting the results of AI programs. The most common and natural approach to AI research is to ask of any program: what can it do? What are its actual results in terms of output? On this view, what matters about a chess-playing program, for example, is simply how good it is. Can it, for example, beat chess grand masters? But there is also a more theoretically oriented approach in artificial intelligence, which was the basis of the AI contribution to the new discipline of cognitive science. According to this theoretical approach, what matters are not just the input-output relations of the computer but also what the program can tell us about actual human cognition (Ptack, 1994).
Viewed in this light, AI aims to provide not just commercial applications but a theoretical understanding of human cognition. To make this distinction clear, think of your pocket calculator. It can outperform any living mathematician at multiplication and division and so qualifies as intelligent by the definition of artificial intelligence just given. But this fact is of no psychological interest, because such calculators do not attempt to mimic the actual thought processes of people doing arithmetic (Crawford, 1994).