By Bradshaw, Nicholas
Reference: Lecture Notes in Computer Science, 1327, pp. 511-516
Publication: Published, 1997
Peer-reviewed article
Abstract: One family of classifiers which has had considerable experimental success over the last thirty years is that of the n-tuple classifier and its descendants. However, the theoretical basis for such classifiers is uncertain, despite attempts from time to time to place it in a statistical framework. In particular, the most commonly used training algorithms do not even try to minimise recognition error on the training set. In this paper the tools of statistical learning theory are applied to the classifier in an attempt to describe the classifier's effectiveness. In particular, the effective VC dimension of the classifier for various input distributions is calculated experimentally, and these results are used as the basis for a discussion of the behaviour of the n-tuple classifier. As a side issue, an error-minimising algorithm for the n-tuple classifier is also proposed and briefly examined.
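To make the object of study concrete, the following is a minimal sketch of a standard RAM-based n-tuple classifier, not the paper's specific variant or its error-minimising algorithm. The class name, parameters (`n`, `num_tuples`, `seed`), and the set-based memory representation are illustrative assumptions; the key property the abstract mentions is visible in `train`, which merely records seen tuple addresses and makes no attempt to minimise training error.

```python
import random

class NTupleClassifier:
    """Minimal sketch of a RAM-based n-tuple classifier (illustrative only).

    Each of `num_tuples` tuples samples `n` fixed bit positions from a
    binary input vector; the sampled bits form an address into a
    per-class lookup table."""

    def __init__(self, input_len, n=4, num_tuples=8, classes=(0, 1), seed=0):
        rng = random.Random(seed)
        # Fixed random bit positions for each tuple, shared by all classes.
        self.tuples = [rng.sample(range(input_len), n) for _ in range(num_tuples)]
        # One set of "seen" addresses per (class, tuple) pair.
        self.memory = {c: [set() for _ in self.tuples] for c in classes}

    def _address(self, x, positions):
        # Pack the sampled bits into an integer address.
        addr = 0
        for p in positions:
            addr = (addr << 1) | x[p]
        return addr

    def train(self, x, label):
        # Standard training: just mark each address as seen for this class.
        # Note that this does not minimise recognition error on the training set.
        for t, positions in enumerate(self.tuples):
            self.memory[label][t].add(self._address(x, positions))

    def score(self, x, label):
        # Count how many tuples produce an address seen during training.
        return sum(self._address(x, positions) in self.memory[label][t]
                   for t, positions in enumerate(self.tuples))

    def classify(self, x):
        # Predict the class whose memories match the most tuples.
        return max(self.memory, key=lambda c: self.score(x, c))
```

For example, training on an all-ones pattern for class 1 and an all-zeros pattern for class 0 lets the classifier separate those two inputs by tuple-match counts.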