High-Dimensional Approximation, Probability, and Statistical Learning
3.0 credits
The course covers fundamental mathematical ideas for certain approximation and statistical learning problems in high dimensions. We start with basic approximation theory in low dimensions, in particular linear and nonlinear approximation by Fourier and wavelet bases in classical smoothness spaces, and discuss applications in imaging, inverse problems, and PDEs. We then introduce notions of complexity of function spaces, which will be important in statistical learning, and move on to basic problems in statistical learning, such as regression and density estimation. The interplay between randomness and approximation theory is introduced, along with fundamental tools such as concentration inequalities and basic random matrix theory, and various estimators, in particular multiscale estimators, are constructed in detail. Throughout, we consider geometric aspects and interpretations, and discuss concentration of measure phenomena, embeddings of metric spaces, optimal transportation distances, and their applications to problems in machine learning such as manifold learning and dictionary learning for signal processing.