Five issues to think about before you try to do real AI

Gunnar Grimnes pointed me to this article by Roger C. Schank.

The best part is the section on five issues to think about before you try to do real AI, which ties into my other rants about researchers who should dig into engineering:

1. Real problems are needed for prototyping. We cannot keep working in toy domains. Real problems identify real users with real needs. This changes what the interactions with the program will be considerably and must be part of the original design.

2. Real knowledge that real domain experts have must be found and stored. This does not mean interviewing them, asking for the rules that they use, and ignoring everything else that fails to fit. Real experts have real experiences, contradictory viewpoints, exceptions, confusions, and the ability to have an intuitive feel for a problem. Getting at these issues is critical. It is possible to build interesting systems that do not know what they know. Expertise can be captured in video, stored and indexed in a sound way, and retrieved without having to fully represent the content of that expertise (e.g., the ASK TOM system; Schank, Ferguson, Birnbaum, Barger, Greising, 1991). Such a system would be full of AI ideas, interesting to interact with, and not wholly intelligent, but a far sight better than systems that did not have such knowledge available.

3. Software engineering is harder than you think. I can’t emphasize strongly enough how true this is. AI had better deal with the problem.

4. Everyone wants to do research. One serious problem in AI these days is that we keep producing researchers instead of builders. Every new Ph.D. recipient, it seems, wants to continue to work on some obscure small problem whose solution will benefit some mythical program that no one will ever write. We are in danger of creating a generation of computationally sophisticated philosophers. They will have all the usefulness and employability of philosophers as well.

5. All that matters is tool building. This may seem like an odd statement considering my comments about the expert system shell game. However, ultimately we will not be able to build each new AI system from scratch. When we start to build useful systems, the second one should be easier to build than the first, and we should be able to train non-AI experts to build them. This doesn’t mean that these tools will allow everyone to do AI on their personal computers. It does mean that certain standard architectures should evolve for capturing and finding knowledge. From that point of view the shell game people were right; they just put the wrong stuff in the shell. The shell should have had expert knowledge about various domains in it, available to make the next system in that domain that much easier to build.