Authors: Emmanuel J. Candès, Terence Tao

CiteWeb id: 20060000061

CiteWeb score: 4796

DOI: 10.1109/TIT.2006.885507

Suppose we are given a vector f in a class F ⊂ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f^#, defined as the solution to the constraints y_k = ⟨f^#, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f^#‖_ℓ2 ≤ C_p·R·(K/log N)^(−r), r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed in the paper.
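The recovery procedure sketched in the abstract (take K random Gaussian measurements y_k = ⟨f, X_k⟩, then solve a linear program that minimizes the ℓ1 norm subject to the measurement constraints) can be illustrated with a short script. This is a minimal sketch and not the authors' code: the problem sizes, the sparsity of the test signal, and the use of SciPy's `linprog` with the HiGHS solver are illustrative assumptions.

```python
# Sketch of l1-minimization recovery from random Gaussian projections.
# Assumptions (not from the paper): N, K, S below, SciPy's linprog/HiGHS as the LP solver.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K, S = 256, 80, 8                         # signal length, measurements, sparsity (illustrative)
f = np.zeros(N)
support = rng.choice(N, S, replace=False)
f[support] = rng.standard_normal(S)          # a sparse test signal

X = rng.standard_normal((K, N))              # rows X_k with i.i.d. standard normal entries
y = X @ f                                    # measurements y_k = <f, X_k>

# l1 minimization:  min ||g||_1  subject to  X g = y,
# rewritten as an LP in (g, t):  min sum(t)  s.t.  -t <= g <= t,  X g = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([X, np.zeros((K, N))])
b_eq = y
I = np.eye(N)
A_ub = np.block([[ I, -I],                   #  g - t <= 0
                 [-I, -I]])                  # -g - t <= 0
b_ub = np.zeros(2 * N)
bounds = [(None, None)] * N + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
f_sharp = res.x[:N]                          # the reconstruction f^#

print("relative l2 error:", np.linalg.norm(f - f_sharp) / np.linalg.norm(f))
```

With an exactly sparse signal and enough measurements, the relative ℓ2 error printed at the end is near machine precision; for merely compressible signals one would instead expect the error to decay at the rate C_p·R·(K/log N)^(−r) stated in the abstract.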

The publication "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" is placed in the Top 10000 of the best publications in CiteWeb. Also in the category Mathematics it is included to the Top 1000. Additionally, the publicaiton "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" is placed in the Top 100 among other scientific works published in 2006.