A new post published on the ILLC Blog by Iris Proff recounts the origins of AI research at the University of Amsterdam.
Since its very beginning, AI has been a field characterized by inflated hopes and big disappointments. Often it seemed as if there were no problems machines could not solve, and that big breakthroughs were just around the corner. But it was never long before a major drawback of the prevailing approach became apparent, funding dwindled, and the field moved in another direction.
The AI undergraduate programme at the UvA was launched in 1992, one of the first of its kind in the Netherlands and in the world. At the time, AI was a vastly different field from what it is today. It was not centered on the self-learning algorithms that make many current AI applications so powerful. Artificial neural networks existed, but they were little used, owing to a lack of computing power and data. Instead, AI rested on logical reasoning and search algorithms.
In 1992, much of the field revolved around so-called expert systems, which were considered the first truly useful AI applications. An expert system tried to capture knowledge about a certain topic (say, diagnosing infectious diseases) in a long series of if-then rules. The user could query the system, which would then produce an answer mimicking the decision-making process of human experts.
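The if-then architecture described above can be sketched as a tiny forward-chaining rule base. This is only an illustrative toy: the rule format, the symptom names, and the diagnoses below are invented for the example, not drawn from any real expert system.

```python
# A minimal sketch of a rule-based expert system.
# Each rule pairs a set of conditions with a conclusion;
# the engine fires rules until no new facts can be derived.
# All rules and fact names here are made up for illustration.

RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "suspect_pneumonia"),
]

def forward_chain(observed):
    """Repeatedly apply rules whose conditions are all satisfied,
    adding their conclusions, until the fact set stops growing."""
    facts = set(observed)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Querying the system with a set of observed symptoms:
diagnosis = forward_chain({"fever", "cough", "short_of_breath"})
```

Note how chaining lets one rule's conclusion ("suspect_flu") serve as a condition for another, which is what gave these systems the appearance of step-by-step expert reasoning.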
During the 1990s, however, criticism of the approach mounted, and funding and interest began to shrink. The systems could not live up to their promise: they lacked common sense and could not generalize. By the end of the decade, expert systems had all but disappeared, giving way to new AI approaches based on statistics and learning rather than hard-coded rules.