The basic concepts of compressed sensing were introduced around 2006 in the work of David Donoho, Emmanuel Candès and Terence Tao. In its simplest form, this theory studies the recovery of sparse high-dimensional vectors from a small set of their linear measurements. Here, a vector is called sparse if it has only a small number of non-zero entries. Exploiting the geometry of the set of sparse vectors, one can show that the number of measurements needed grows only linearly with the sparsity and logarithmically with the dimension of the ambient space. This makes the methods of compressed sensing particularly well suited to high-dimensional structured data.
Over the last 15 years, many properties of sparse recovery have been investigated: stability with respect to defects of sparsity, robustness to noise, a variety of recovery algorithms, etc. Furthermore, applications to radar technology, magnetic resonance imaging, image processing and other areas have appeared. In GAMS, we study applications of compressed sensing and related ideas in approximation theory, high-dimensional geometry and machine learning.
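The basic recovery phenomenon can be illustrated with a short numerical sketch: a sparse vector in dimension n = 200 is reconstructed from only m = 40 random Gaussian measurements by ℓ1-regularized least squares. This is a minimal demonstration in Python, not code from any of the publications below; the dimensions, the FISTA/LASSO formulation and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension n, sparsity s, number of measurements m.
# The theory predicts that of the order of s * log(n) measurements suffice.
n, s, m = 200, 5, 40

# Ground-truth s-sparse vector: s randomly placed non-zero entries.
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)

# Random Gaussian measurement matrix and noiseless measurements y = A x.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Recover x by minimizing ||A x - y||_2^2 / 2 + lam * ||x||_1 with FISTA
# (accelerated iterative soft thresholding); lam is an illustrative choice.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
z, t = x.copy(), 1.0
for _ in range(2000):
    w = z - A.T @ (A @ z - y) / L      # gradient step on the quadratic part
    x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0           # momentum update
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small relative error despite m being much smaller than n
```

Note that m = 40 is far below the ambient dimension n = 200; it is the sparsity s = 5, not n, that dictates how many measurements are needed.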
- Prof. Massimo Fornasier, TU Munich, Germany
- Prof. Aicke Hinrichs, JKU Linz, Austria
- Prof. Tino Ullrich, TU Chemnitz, Germany
associate professor at FNSPE CTU in Prague
- M. Fornasier, K. Schnass and J. Vybíral, Learning functions of few arbitrary linear parameters in high dimensions, Found. Comput. Math. 12 (2) (2012), 229-262
- L. M. Ghiringhelli, J. Vybíral, S. V. Levchenko, C. Draxl and M. Scheffler, Big data of materials science – critical role of the descriptor, Phys. Rev. Lett. 114 (2015), 105503
- A. Hinrichs, A. Kolleck and J. Vybíral, Carl’s inequality for quasi-Banach spaces, J. Funct. Anal. 271 (8) (2016), 2293-2307
- A. Kolleck and J. Vybíral, Non-asymptotic analysis of ℓ1-norm support vector machines, IEEE Trans. Inf. Theory 63 (9) (2017), 5461-5476