Iterated logarithm


In computer science, the iterated logarithm of n, written log*n (usually called "log star" in conversation), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recursive function:


  \log^* n :=
  \begin{cases}
    0                  & \mbox{if } n \le 1; \\
    1 + \log^*(\log n) & \mbox{if } n > 1.
   \end{cases}

or, in pseudocode:

function iterLog(real n)
    if n ≤ 1
        return 0
    else
        return 1 + iterLog(log(n))

or, in non-recursive (iterative) pseudocode:

function iterLog(real n)
    i = 0
    while n > 1
        n = log(n)
        i = i + 1
    return i
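The iterative pseudocode above translates directly into a short runnable sketch, here in Python (the function name `iter_log` and the default base-2 logarithm are choices for illustration; any base can be passed in):

```python
import math

def iter_log(n, log=math.log2):
    """Count how many times `log` must be applied before the result is <= 1."""
    i = 0
    while n > 1:
        n = log(n)
        i += 1
    return i

print(iter_log(65536))          # base 2: 65536 -> 16 -> 4 -> 2 -> 1, so 4
print(iter_log(4, math.log))    # base e: 4 -> 1.386... -> 0.326..., so 2
```

The base-e call reproduces the value log* 4 = 2 shown in Figure 1.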

or, on the positive real numbers, the continuous super-logarithm (an inverse function of tetration) definition is equivalent:

\log^* n = \lceil \text{slog}_e(n) \rceil

On the negative real numbers, however, log-star is 0, whereas \lceil \text{slog}_e(-x)\rceil = -1 for positive x, so the two functions differ for negative arguments.

Figure 1. Demonstrating log*4 = 2

In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm instead. The iterated logarithm accepts any positive real number and yields an integer. Graphically, it can be understood as the number of "zig-zags" needed in Figure 1 to reach the interval [0, 1] on the x-axis.

Mathematically, the iterated logarithm is well-defined not only for base 2 and base e, but for any base greater than e^{1/e}\approx1.444667.

The iterated logarithm is useful in the analysis of algorithms and their computational complexity, appearing in the time and space complexity bounds of some algorithms, for example Fürer's integer multiplication algorithm and the Cole–Vishkin distributed graph-coloring algorithm.

The iterated logarithm is an extremely slowly growing function, growing much more slowly than the logarithm itself; for all practical values of n (at most 2^65536, which far exceeds the number of particles in the observable universe), even the base-2 iterated logarithm is at most 5.

  x                   lg* x
  (−∞, 1]             0
  (1, 2]              1
  (2, 4]              2
  (4, 16]             3
  (16, 65536]         4
  (65536, 2^65536]    5
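The interval endpoints in the table form a power tower of 2s: each upper endpoint is 2 raised to the previous one. A minimal sketch (the helper name `tower` is illustrative) makes the pattern explicit:

```python
def tower(k):
    """Power tower of 2s of height k: tower(0) = 1, tower(k) = 2**tower(k-1)."""
    t = 1
    for _ in range(k):
        t = 2 ** t
    return t

print([tower(k) for k in range(5)])   # [1, 2, 4, 16, 65536]
```

In these terms, lg* n = k exactly when n lies in the interval (tower(k−1), tower(k)].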

Higher bases give smaller iterated logarithms. Indeed, the only function used in complexity theory that grows more slowly is the inverse of the Ackermann function.

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. It is also proportional to the additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root.
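Additive persistence follows the same iterate-until-small pattern as the iterated logarithm, with the digit sum in place of the logarithm. A minimal sketch in Python (the helper name is illustrative):

```python
def additive_persistence(n):
    """Return (persistence, digital root): the number of digit-sum steps
    needed to reach a single digit, and that final digit."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps, n

print(additive_persistence(9876))  # 9876 -> 30 -> 3: persistence 2, digital root 3
```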
