An int128 Mandelbrot Set
This code currently requires a 64 bit version of gcc or clang for the non-standard __int128 type, so it won't work on Pis running a 32 bit OS.
The example of using GMP to create a fixed point Mandelbrot Set is flexible, in that it will zoom without limit, but it is slow.
Most 64-bit CPUs support a 128 bit datatype, and addition, subtraction and bitshift operations on it. Furthermore, they support the multiplication of two 64 bit numbers such that the result is 128 bits (rather than only the "low" 64 bits being returned, as one would expect in C, Fortran or Java).
If we use 64 bit integers for most of our variables, but 128 bit variables for the intermediate results of multiplications, we can support a bit shift of up to about 60. This gives a range of -8&lt;x&lt;8 for our usual variables, and no overflows on multiplications. Comparisons with double precision floating point are not completely clear-cut, for our format has 60 binary digits in the fractional part, whereas floating point has just 53, but leading zeros in the fractional part do not count towards that total. Conversely, any bits needed for the integer part reduce the bits available for the fractional part in the floating point format.
The good news for users of MacOS and Windows is that this approach does not need the GMP library. The bad news is that it relies on a common GCC extension, though one which is also supported on MacOS.
The C is much simpler, and we do not need to write our own function for converting a PyLong object to our desired integer, as python provides a function for conversion to a long long, whereas it did not for converting to a GMP integer. A long long is used because one cannot guarantee that C's long is 64 bits.
Again, to compile, one has to download the three files above and then type

python3 setup.py build_ext --inplace

to build the module, before running the program.
The good news for all is that this approach is about a dozen times faster than using GMP. It may only give half a dozen or so extra bits of precision, but, if that is sufficient for one's needs, then it is a good solution, as it is only a couple of times slower than the double precision code.
The example given does not constrain the bit shift to be no more than 60. I believe that 61 works, but larger values quickly give entertainingly wrong answers.
When the Mandelbrot set first became popular in the late 1980s, most home computers did not have dedicated hardware for floating point operations. Floating point operations had to be emulated in software using integer instructions, and were thus very slow. An early, popular, and fast program for exploring the Mandelbrot set was called Fractint, and it used fixed point maths as much as possible. Computers then were 32-bit but, assuming one can efficiently multiply two 32 bit numbers to obtain a 64 bit product, a bit shift of up to 28 will certainly work, which is sufficient for modest zooms into the set.
Beyond Double Precision in Action
The coordinates used above were (0.4048772675870179731+0.1454640909728626375i) with maxiter=10000 and the jet colour map. The first image uses double precision arithmetic, the second the fixed point code on this page with a shift of 61. One can even click on the second for a full 2000x2000 pixel image. The rectangular nature of the blocks in the first image is explained on the theory page.
The fixed point code produces a similar quality to the double precision code, but with the pixels remaining square, if the bit shift is reduced to 55. This is as expected, as the fixed point code will have 55 binary digits after the binary point; the double precision code will have 53, but leading zeros do not count towards this total. In the imaginary direction, the co-ordinates are around 0.14, which, in binary, starts 0.001, with two leading zeros after the binary point.