CSci 8363 -- Fall 2017

Daniel Boley

MW 4-5:15pm, AkerH 227

Linear algebra has contributed many methods for handling very large quantities of numerical data. Here we examine many of these linear algebra methods and how they have been applied to the exploration and analysis of very large data collections. After a brief review of some basic concepts in linear algebra, most of the class will be devoted to how these methods have been used in information retrieval, data mining, unsupervised clustering, bioinformatics, social networking, machine learning, and the like. Examples of methods we will examine are latent semantic indexing, least-squares fitting (possibly under a sparsity constraint), spectral partitioning, PageRank, support vector machines, and recent ideas on sparse approximation methods using L1 regularization. A collection of basic research papers, some of a tutorial nature, will be used for the class. Examples will be taken from vision recognition systems, biological gene analysis, and document retrieval.
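To give a flavor of how these methods reduce to core linear-algebra computations, here is a minimal sketch (not course material; the link graph and damping factor are illustrative assumptions) of PageRank computed by power iteration on a tiny three-page web:

```python
import numpy as np

# Column-stochastic link matrix for a hypothetical 3-page web:
# page 0 links to pages 1 and 2; page 1 links to page 2; page 2 links to page 0.
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

def pagerank(A, d=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for the damped PageRank vector.

    Iterates x <- d*A@x + (1-d)/n, converging to the dominant
    eigenvector of the damped transition matrix.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        x_new = d * A @ x + (1.0 - d) / n
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

r = pagerank(A)
```

The vector r remains a probability distribution at every step, and the pages with more incoming links (pages 0 and 2 here) end up with higher rank than page 1.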


Students should be familiar with basic linear algebra concepts and methods such as Gaussian elimination for systems of linear equations, plus some familiarity with concepts such as matrix eigenvalues, singular values, and matrix least-squares problems, though some time will be spent reviewing these latter topics. Basic concepts in optimization, such as first-order optimality conditions and duality, will also be useful.

Work Plan

Students will be expected to do the following. Your project will count toward the Project Requirements for a Plan C MS degree in Computer Science.

Sample Topics

For Further Information