CiteWeb id: 20030000044

CiteWeb score: 7307

Once we have developed an algorithm (q.v.) for solving a computational problem and analyzed its worst-case time requirements as a function of the size of its input (most usefully, in terms of the O-notation; see ALGORITHMS, ANALYSIS OF), the question inevitably arises: "Can we do better?" In a typical problem, we may be able to devise new algorithms for the problem that are more and more efficient. But eventually, this line of research often seems to hit an invisible barrier, a level beyond which improvements are very difficult, seemingly impossible, to come by. After many unsuccessful attempts, algorithm designers inevitably start to wonder whether there is something inherent in the problem that makes it impossible to devise algorithms faster than the current one. They may try to develop mathematical techniques for proving formally that no algorithm for the given problem can run faster than the current one. Such a proof would be valuable, as it would establish that it is futile to keep working on improved algorithms for this problem, that further improvements are impossible. The realm of mathematical models and techniques for establishing such impossibility proofs is called computational complexity.
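For concreteness, the best-known instance of such an impossibility proof (a standard illustration added here, not drawn from the abstract itself) is the lower bound for comparison-based sorting: any algorithm that sorts n elements using only pairwise comparisons can be modeled as a binary decision tree, and the tree must have at least n! leaves, one per possible input ordering. A binary tree of depth T(n) has at most 2^T(n) leaves, so the worst-case number of comparisons satisfies:

% Decision-tree lower bound for comparison sorting.
% A tree of depth T(n) has at most 2^{T(n)} leaves, and
% n! >= (n/2)^{n/2} because the top n/2 factors each exceed n/2.
\[
2^{T(n)} \ge n!
\quad\Longrightarrow\quad
T(n) \ge \log_2(n!) \ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n).
\]

Since mergesort performs O(n log n) comparisons, it is optimal in this model: here the "invisible barrier" is provably real, and no comparison-based algorithm can break it.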

The publication "Computational complexity" is ranked in the Top 10000 of the best publications in CiteWeb. In the category Mathematics it is included in the Top 1000. Additionally, the publication "Computational complexity" is ranked in the Top 100 among scientific works published in 2003.