Activity: Talk and presentation › Professional or public presentation › Professional
Description
How deep is deep and what's next in computational intelligence?
We are currently experiencing the third wave of neural-network research. A researcher who joined during the second wave, i.e., post-Rosenblatt, around 1987, might be expected to be happy with the current successes in deep learning ('See? We told you so!'). Indeed, the availability of big data and sufficient computing resources has resulted in exciting progress. However, it is also time for a critical evaluation. Rather than constituting the long-heralded computational intelligence, deep learning is unfortunately again and still mainly concerned with intelligent humans spending expensive labor hours on training, retraining and tinkering with network architectures. Experiments are concerned with closed data sets, such that even the results of a thorough k-fold evaluation are not a good predictor of performance in the real world. At the same time, human cognition can handle many problems that represent 'one-shot learning', solving new puzzles without any training sample. Also, with proper feature schemes and distance functions, even nearest-neighbor and nearest-mean classifiers achieve an attractive performance level on big data, with the additional advantage that training is trivial. I will illustrate these insights on the basis of our experience with a 24/7 learning system for retrieval of words in massive historical manuscript collections: Monk.
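The point about trivial training can be made concrete with a minimal nearest-mean classifier sketch: "training" amounts to computing one mean vector per class, and prediction assigns a sample to the class with the nearest mean. The function names and data below are illustrative only and are not part of the Monk system.

```python
# Minimal nearest-mean classifier sketch (illustrative, not Monk code).
# Training is trivial: it only computes a per-class mean feature vector.
from collections import defaultdict
import math

def train(samples):
    """samples: list of (feature_vector, label) pairs. Returns class means."""
    sums = {}
    counts = defaultdict(int)
    for x, y in samples:
        if y not in sums:
            sums[y] = [0.0] * len(x)
        sums[y] = [s + v for s, v in zip(sums[y], x)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(means, x):
    """Assign x to the class whose mean vector is nearest (Euclidean)."""
    return min(means, key=lambda y: math.dist(x, means[y]))
```

With a suitable feature scheme and distance function substituted for the plain Euclidean distance used here, this is the whole training and decision procedure; there are no architectures to tinker with and nothing to retrain beyond updating the means.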
Period
24-Oct-2016
Event title
International Conference on Frontiers in Handwriting Recognition