Linear Discriminant and Support Vector Classifiers
Isabelle Guyon and David G. Stork
In Smola et al. (Eds.), Advances in Large Margin Classifiers, pages 147--169, MIT Press.

Support Vector Machines were introduced and recently applied to classification problems as alternatives to multi-layer neural networks. The high generalization ability provided by Support Vector Classifiers (SVCs) has inspired recent work on computational speedups as well as the fundamental theory of model complexity and generalization. At first glance, a Support Vector Classifier appears to be nothing more than a generalized linear discriminant in a high dimensional transformed feature space; indeed, many aspects of SVCs can best be understood in relation to traditional linear discriminant techniques.

This chapter explores interconnections between many linear discriminant techniques, including Perceptrons, Radial Basis Function (RBF) networks, and SVCs. The principle of duality between learning- or feature-based techniques (such as Perceptrons) and memory- or example-based methods (such as RBFs) is central to the development of SVCs. We provide several other examples of duality in linear discriminant learning algorithms.
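The feature-based/example-based duality mentioned above can be illustrated with the Perceptron itself: the same algorithm can be run in a primal form that maintains an explicit weight vector, or in a dual form that only counts updates per training example and evaluates inner products (a kernel). The sketch below, on a hypothetical toy dataset not taken from the chapter, shows that the two forms produce identical classifiers, since the primal weights are exactly the alpha-weighted sum of examples.

```python
import numpy as np

# Toy linearly separable data (illustrative only, not from the chapter).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

def primal_perceptron(X, y, epochs=10):
    """Feature-based form: learn an explicit weight vector w."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:  # misclassified example -> update w
                w += yi * xi
    return w

def dual_perceptron(X, y, epochs=10, kernel=np.dot):
    """Example-based form: store one update count alpha_i per example;
    the decision function touches the data only through inner products,
    so any kernel can replace np.dot."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(alpha[j] * y[j] * kernel(X[j], xi)
                    for j in range(len(X)))
            if yi * f <= 0:  # same misclassification test as the primal
                alpha[i] += 1
    return alpha

w = primal_perceptron(X, y)
alpha = dual_perceptron(X, y)
# Duality: the primal weight vector equals the alpha-weighted
# sum of training examples, w = sum_i alpha_i * y_i * x_i.
w_from_dual = (alpha * y) @ X
```

Because both forms start from zero and visit the examples in the same order with the same misclassification test, they perform identical updates; the dual form becomes strictly more powerful once `kernel` is replaced by a nonlinear kernel, which is the route from the Perceptron to the SVC.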