Mark Gritter (markgritter) wrote,

Algorithms that only need to be executed once

Charles Babbage knew from personal experience that mathematical tables are hard to create. He and John Herschel discovered error after error in published books. Even if you get all the calculations right, the odds are pretty good that there will be typesetting errors.

Because his whole motivation for the Difference Engine was to perfect this process, the design reflects an appreciation for unexpected errors. The engine prints its own output, rather than relying upon manual transcription. A second copy of the output is printed in a log, which allows checking the final copy for agreement. The mechanisms of the calculation operate in such a way as to jam rather than produce an incorrect sum.

One can think of the Difference Engine as a mechanism designed for running its algorithm just once. After you build a table of logarithms, you don't need to do it again. Of course, the machine is configurable to produce a variety of tables. And it's restartable in case a page is damaged. But the key difference is that correctness takes precedence over throughput or flexibility.
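The algorithm the engine mechanizes is the method of finite differences: for a degree-d polynomial, the d-th differences are constant, so once the difference columns are seeded, each new table entry costs only additions. Here's a minimal sketch in Python (the function names are mine, not Babbage's notation), using f(x) = x² + x + 41, the polynomial Babbage used to demonstrate the engine:

```python
def tabulate(f, degree, count):
    """Tabulate a degree-`degree` polynomial f at 0, 1, ..., count-1
    using the method of finite differences: after seeding, only
    additions are needed -- the operation the engine's wheels perform."""
    # Seed: evaluate f directly at the first degree+1 points.
    seed = [f(x) for x in range(degree + 1)]

    # Leading entry of each difference column (value, 1st diff, 2nd diff, ...).
    diffs = []
    col = seed
    while col:
        diffs.append(col[0])
        col = [b - a for a, b in zip(col, col[1:])]

    table = []
    for _ in range(count):
        table.append(diffs[0])
        # "Turn the crank": each column absorbs the one to its right.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

# Babbage's demonstration polynomial: x^2 + x + 41
print(tabulate(lambda x: x * x + x + 41, 2, 5))  # [41, 43, 47, 53, 61]
```

After the seed values, every entry comes from additions alone, which is exactly why a machine made of adding wheels could do the job.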

What's striking to me is how few examples of such run-once algorithms are of practical use today. Mathematical models generally don't consult pre-computed tables; we just run an algorithm that calculates the answer on demand. A lot of the problems that are interesting to us are online problems where new data shows up all the time.

Verification tools for circuit design are usually run *successfully* only once. However, they are often run multiple times on nearly the same input, because the whole point is to find and fix bugs.

An example from number theory is verification of the largest Mersenne prime. In that case, rather than relying on one really solid algorithm, the number was verified with multiple independent implementations of primality testers! I don't know what Babbage would think of that.
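The test those implementations run is the Lucas-Lehmer test, which is simple enough to sketch; the engineering challenge is running it reliably for exponents in the tens of millions, which is why redundant implementations are used. A minimal version:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for Mersenne numbers: for an odd prime p,
    M_p = 2**p - 1 is prime iff s(p-2) == 0 (mod M_p), where
    s(0) = 4 and s(k+1) = s(k)**2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))  # True False
```

The loop is just p - 2 modular squarings, but at record sizes a single flipped bit anywhere in those squarings gives the wrong verdict, with no jam to warn you, hence the cross-checking.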

So, what sort of algorithms today are run only once on a given problem? Admittedly this is a fuzzy definition because, say, Google never indexes the same Internet twice. And the circuit-verification example is technically different each time.
Tags: algorithm, computers