Semester.ly

Johns Hopkins University | EN.553.747

Mathematics of Data Science

3.0 credits

Average Course Rating: not available

The core objective of this course is to address a pivotal question: how does one optimally extract information from high-dimensional data? In pursuit of this, the course bridges material from diverse fields, including high-dimensional probability and statistics, signal processing, and optimization. Given the emergent nature of this research area, acquiring the right tools can be an overwhelming and unsystematic endeavor. This course is tailored to overcome that challenge, offering a systematic introduction to and exploration of critical topics such as:

* Concentration of Measure Phenomena for random vectors and matrices, including sub-Gaussian vectors, McDiarmid's inequality, Lipschitz functions, empirical processes, Rademacher complexity, and matrix extensions.
* Linear Dimension Reduction techniques, encompassing the Johnson-Lindenstrauss Lemma, Gordon's escape-through-a-mesh theorem, randomized numerical linear algebra, and applications in compressed sensing.
* Estimation in High Dimensions, diving into convex relaxations and spectral methods for problems such as the stochastic block model and Max-Cut. We will also discuss nonconvex optimization methods, with emphasis on first-order methods and low-rank matrix estimation, such as matrix sensing and completion.
* Additional Potential Topics, ranging from information-theoretic lower bounds, learning with kernels, and overparametrization and double descent to Markov decision processes.
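As a small illustration of the dimension-reduction topic above, the sketch below demonstrates the Johnson-Lindenstrauss idea numerically: projecting points through a random Gaussian matrix approximately preserves pairwise distances. This is not course material, just a minimal NumPy demo; the dimensions, seed, and helper function are illustrative choices.

```python
import numpy as np

# Johnson-Lindenstrauss sketch: project n points from R^d to R^k via a
# random Gaussian matrix; pairwise distances are approximately preserved
# with high probability when k is on the order of log(n) / eps^2.
rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300           # illustrative sizes, not from the course
X = rng.standard_normal((n, d))   # one point per row

# Scale by 1/sqrt(k) so squared norms are preserved in expectation.
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_dists(Z):
    """Euclidean distance matrix between the rows of Z."""
    sq = np.sum(Z ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (Z @ Z.T)
    return np.sqrt(np.maximum(D2, 0.0))

D_orig = pairwise_dists(X)
D_proj = pairwise_dists(Y)

# Ratio of projected to original distances, off-diagonal entries only;
# values near 1 mean low distortion.
mask = ~np.eye(n, dtype=bool)
ratios = D_proj[mask] / D_orig[mask]
print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```

Running it shows all pairwise-distance ratios clustered near 1, which is exactly the guarantee the lemma formalizes.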

No Course Evaluations found

Lecture Sections

(01)

No location info
M. Diaz Diaz
09:00 - 10:15