All those files on your computer: what are they, exactly? Well,
they are just approximations of that wonderful number, π. Some are
good, some are bad, but each file is an approximation. What did
I smoke? Nothing... Let me explain.
It is well known that the probability of two integers being coprime is
6/π². It can even be proved.
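You can convince yourself of the 6/π² figure empirically. A quick sketch (my own, not from the original page): sample random integer pairs, count the coprime ones, and invert the formula to recover π.

```python
import math
import random

# Estimate the probability that two random integers are coprime.
# The theoretical value is 6/pi^2 ~ 0.6079.
random.seed(1)
n = 100_000
coprime = sum(
    1 for _ in range(n)
    if math.gcd(random.randint(1, 10**6), random.randint(1, 10**6)) == 1
)
estimate = coprime / n
# Invert the formula: pi ~ sqrt(6 / P(coprime)).
pi_from_sample = math.sqrt(6 / estimate)
print(estimate, pi_from_sample)
```

With a hundred thousand pairs the recovered value lands close to 3.14, which is exactly the trick the rest of this page plays on your files.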
Now, if a file is structured as a list of lines, each line can be
seen as a number (a field of bits being a number in base
two). Considering the probability of each line being coprime with the
next line, any file provides an approximation of π.
All you have to do to obtain the approximation for a given file is
to compute the gcd of quite large numbers.
Here is a Perl script that computes the π number of a given file.
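The original Perl script is not reproduced here, so what follows is only my sketch of the same computation, in Python. The details (reading each line's bytes as one big base-256 integer, and estimating π from the fraction of coprime consecutive pairs) are assumptions about the method, not the author's actual code.

```python
import math
import sys

def pi_number(path):
    """Estimate pi from a file: read each line's bytes as a big integer,
    then use the fraction of coprime consecutive pairs, which should be
    close to 6/pi^2 for sufficiently random data."""
    with open(path, "rb") as f:
        # Each line becomes a base-256 number; an empty line counts as 0.
        nums = [int.from_bytes(line.rstrip(b"\n"), "big") for line in f]
    pairs = list(zip(nums, nums[1:]))
    if not pairs:
        return float("nan")  # fewer than two lines: nothing to compare
    coprime = sum(1 for a, b in pairs if math.gcd(a, b) == 1)
    if coprime == 0:
        return float("inf")  # no coprime pairs at all: estimate blows up
    # P(coprime) ~ 6/pi^2, hence pi ~ sqrt(6 / P(coprime)).
    return math.sqrt(6 * len(pairs) / coprime)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(pi_number(sys.argv[1]))
```

Usage: `python pi_number.py somefile`. Since Python integers are arbitrary-precision, the "quite large numbers" need no special handling, unlike in C, where you would reach for gmp as the author did.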
Of course, the larger and the more random a file is, the better
the approximation. For instance, the π number of my
Ph.D. thesis (uuencoded PostScript) is 3.302 (but I am currently
writing a research paper whose π number is (at present) equal to 42.001).
The kernel of windoze 98 gives the outstanding approximation of
3.14159265358979323846264338327950. This proves that this program is
NOT randomly generated (as many people seem to
believe). With a pseudo-random (rand48) 10 GB / 41,943,040-line
file (and an optimized C/gmp program to compute the π
number instead of the Perl script), I only got 3.141543785 (and a $1,000 fine from Caltech for wasting
56 hours of CPU time...).
The current record (00/02/18) in the computation of π is
206,158,430,000 digits. What is funny is that it only took 37 hours
and 21 minutes. However, I don't know what random file they used...
So, what's the best π file on your machine?
Last modified: Wed Nov 29 08:45:02 EST 2000