A geometric approach to support vector classification
Thesis, posted on 17.02.2017, 04:40; authored by Goodrich, Ben David
The Support Vector Machine (SVM) is a supervised classification technique which has been applied to a broad range of tasks, and a significant amount of research has focused on making SVMs more accurate or efficient. This past work approaches the SVM from many different perspectives: some authors interpret it as a method for statistical regularization, others as a large margin classifier, and others still simply treat it as a black box whose solution results from a Quadratic Programming (QP) task. More recently, SVMs have been interpreted as the result of a nearest point algorithm operating over the Reduced Convex Hulls (RCHs) of two classes. This has come to be known as the geometric interpretation of SVMs. Previous work has described the relationship between SVMs and computational geometry, but it has focused largely on using this information to present novel SVM training algorithms. We use the geometric interpretation of SVMs both to unify a broad range of existing research and to inform several new techniques.

The main contributions of this work can be divided into two parts. First, we examine existing work from a geometric perspective. This proves invaluable for understanding how and why SVMs work, the impact their parameters have, their relationship to other classifiers, and the circumstances under which they can exhibit poor performance. Second, we use the geometric framework to inform new methods for solving SVM-related tasks such as training, point weighting, and parameter selection. One way we achieve this is by generalizing some of the geometric concepts underlying SVMs; for example, we introduce Weighted Reduced Convex Hulls (WRCHs), a generalized form of RCH. By closely examining the properties of RCHs and WRCHs, we are able to propose several algorithms for their construction. In turn, this allows us to incorporate new types of SVMs and their training algorithms under the geometric framework.
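The nearest-point view mentioned above can be illustrated with a small sketch. The snippet below uses a plain Frank-Wolfe iteration as a simple stand-in for the nearest-point algorithms discussed in the geometric SVM literature; the function name is hypothetical, the data is toy data, and for brevity it operates on the full convex hulls rather than the reduced hulls (reduction corresponds to the soft-margin case). The vector connecting the two nearest points gives the normal of the maximum-margin hyperplane, which bisects the connecting segment.

```python
import numpy as np

def nearest_points_frank_wolfe(X_pos, X_neg, iters=500):
    """Approximate the nearest pair of points between the convex hulls of
    two linearly separable classes (illustrative sketch, not an algorithm
    from the thesis). Returns (p, q), one point on each hull."""
    # Hull points are convex combinations; start at the centroids.
    a = np.full(len(X_pos), 1.0 / len(X_pos))
    b = np.full(len(X_neg), 1.0 / len(X_neg))
    for k in range(iters):
        p, q = a @ X_pos, b @ X_neg
        w = p - q                     # current connecting vector
        # Linear minimization oracle: the best single vertex of each hull.
        i = np.argmin(X_pos @ w)
        j = np.argmax(X_neg @ w)
        gamma = 2.0 / (k + 2.0)       # standard Frank-Wolfe step size
        a *= (1.0 - gamma); a[i] += gamma
        b *= (1.0 - gamma); b[j] += gamma
    return a @ X_pos, b @ X_neg

# Two separable 2-D classes (toy data).
X_pos = np.array([[2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
X_neg = np.array([[0.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
p, q = nearest_points_frank_wolfe(X_pos, X_neg)
w = p - q                             # hyperplane normal
b0 = w @ (p + q) / 2.0                # hyperplane bisects the segment
print(w, b0)                          # -> [2. 2.] 4.0
```

On this toy data the nearest points are (2, 2) and (0, 0), so the resulting decision boundary is 2x + 2y = 4, the same hard-margin separator a QP-based SVM solver would produce.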