A Fixed Point Mandelbrot Set

Generally one uses real, i.e. floating-point, numbers in programs without worrying about the details. When one does start worrying, one finds that most languages offer one, or both, of the two main floating-point formats described by IEEE 754. In practice this means that one has either a little over seven digits of precision, or almost sixteen. Whereas C and C++ offer both, python offers only the higher-precision format (although the lower precision is available if one uses numpy).

For most purposes 16 digits is plenty, but when zooming into the Mandelbrot set one can soon run out of precision. Are there alternatives?

Unlike most languages, python supports arbitrarily long integers; integers simply grow as required. This suggests one possible solution. Suppose we store and process our co-ordinates as integers, but first scale them by multiplying by a large integer. For example, if we chose the scale factor to be 1000, then the value 0.034 would be represented as 34, and -1.25 as -1250. We have invented a system with just three digits of precision after the decimal point.
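
As a minimal sketch of the idea, where to_fixed and from_fixed are helper names invented here for illustration:

S=1000   # scale factor: three decimal digits after the point

def to_fixed(x):
    # encode a real number as a scaled integer
    return int(round(x*S))

def from_fixed(n):
    # decode a scaled integer, for display only
    return n/S

print(to_fixed(0.034))    # 34
print(to_fixed(-1.25))    # -1250
print(from_fixed(34))     # 0.034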

This sort of system is called "fixed point" because the number of digits after the decimal point never changes. In the usual floating point conventions, it does. IEEE 754 single precision has about seven digits of precision. It cannot distinguish between 123,456,789 and 123,456,790, whereas our three-digit fixed point system, backed by python's arbitrarily long integers, could easily. On the other hand it can distinguish between 1.23456 and 1.23457, which our fixed point system cannot.
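
This is easy to verify at the python prompt, using numpy for single precision:

import numpy as np

print(np.float32(123456789)==np.float32(123456790))   # True: both round to 123456792
print(np.float32(1.23456)==np.float32(1.23457))       # False: these differ comfortably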

The Mandelbrot set is ideal for fixed point arithmetic, as everything interesting happens for numbers whose modulus is less than two. So how can one write a fixed point algorithm?

For addition and subtraction it is very easy. When we wish to calculate x+y, we must remember that we actually store Sx and Sy, where S is our scaling factor, and we want the result in the form S(x+y). As Sx+Sy=S(x+y), there is nothing to do; it works as we wished.

Multiplication is slightly harder. When we wish to calculate x*y, we store Sx and Sy, but we want the result S(xy). As Sx*Sy=S*S*xy, we have a slight problem: every time we perform a multiplication we must immediately divide by S.
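
A minimal sketch of both operations, reusing S and to_fixed from the sketch above, with fixed_add and fixed_mul again being invented names:

def fixed_add(a,b):
    # Sx+Sy=S(x+y): nothing to do
    return a+b

def fixed_mul(a,b):
    # Sx*Sy=S*S*xy, so divide once by S to recover S(xy)
    return (a*b)//S

print(fixed_mul(to_fixed(0.5),to_fixed(1.5)))   # 750, i.e. 0.75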

If we were programming something working in base 10, we would make sure that S is a power of ten, for dividing by powers of ten is trivial; hence the choice of 1000 in the above example. Computers storing integers in binary find dividing by powers of two equally trivial: one simply shifts the digits to the right, a short-cut which avoids performing long division. So we will choose S to be a power of two, S=2**bshift, and divide by shifting right by bshift bits.
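
For example, at the python prompt, shifting right by six bits is the same as dividing by 2**6=64:

x=750000
print(x>>6)       # 11718
print(x//64)      # 11718: the shift is just a cheap division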

import numpy as np

bshift=6   # number of fractional bits, so the scale factor is 2**bshift

def mandel_column(x_in,ymin,ymax,size,maxiter):
    bscale=int(2**bshift)
    col=np.zeros(size)
    # convert this column's co-ordinates to fixed point
    x0=int(x_in*bscale)
    y1=int(ymax*bscale)
    for j in range(0,size):
        y0=y1-int((j*bscale*(ymax-ymin))/size)
        i=0
        x=x0
        y=y0
        # every multiplication must be rescaled with a right shift
        x2=(x*x)>>bshift
        y2=(y*y)>>bshift
        # the escape test x*x+y*y<4 becomes x2+y2<4*bscale in fixed point
        while (i<maxiter)and(x2+y2<4*bscale):
            y=2*((x*y)>>bshift)+y0
            x=x2-y2+x0
            x2=(x*x)>>bshift
            y2=(y*y)>>bshift
            i=i+1
        col[j]=1-i/maxiter

    return col

(This can be compared to the code for the Mandelbrot set with real arithmetic.)

The operator >> shifts the bits of its left-hand argument rightwards by the number of places given by its right-hand argument. The multiplication by two in y=2*x*y+y0 needs no rescaling, so the fixed point code writes y=2*((x*y)>>bshift)+y0. As multiplying by two is itself a shift of one bit, it can instead be absorbed into the rescaling by reducing the bit shift by one, y=((x*y)>>(bshift-1))+y0, which also retains one more bit of precision.
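
For example, with a product of 127 and a shift of three bits:

p=127   # an example product x*y
print(2*(p>>3))   # 30: shift first, then double, losing the bottom bit
print(p>>(3-1))   # 31: absorb the doubling into the shift, keeping it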

Try downloading mandel_tk.py, replacing the definition of mandel_column as above, and running it for various values of bshift.

Note that it is very slow, and we are not able to use numba to speed things up. Numba changes some of python's rules subtly, and one of the changes is that integers are no longer arbitrarily long: they are 32 bits on 32-bit platforms, and 64 bits on 64-bit platforms. How many bits do we need? The largest number we will have to deal with occurs when x or y is just under two, and then gets squared on the next iteration. Just before the shift which rescales the result of the multiplication, it will be about 4*S*S, which takes 2+2*bshift bits. Adding an extra bit for the sign suggests that we need 3+2*bshift bits. So one can use numba if bshift is kept below 15 on 32-bit platforms, and below 31 on 64-bit platforms.
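
As a sketch of how one might apply it, assuming numba is installed (mandel_column_fast is a name invented here):

from numba import njit

bshift=14   # below 15 on 32-bit platforms, below 31 on 64-bit ones;
            # numba's fixed-width integers overflow silently beyond that

# numba freezes globals when it compiles, so set bshift before this
mandel_column_fast=njit(mandel_column)   # compiled on first call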

Not ideal. In theory this code lets us zoom in to see greater detail, but it is unusably slow without numba, and with numba, even on 64-bit platforms, its precision is worse than python's standard floating point.

And not only is it slow, but all the routines for input and output still use double precision, so we cannot actually zoom much further than with the previous code. We would need to convert everything to fixed point, and that is tedious.

But we can improve on our simple fixed-point Mandelbrot code, perhaps by a factor of two or more in speed. This is discussed under optimising the fixed point Mandelbrot Set.

Technical note for the confused

In most languages >> performs a simple bit shift, which is equivalent to division by a power of two for positive numbers, but not for negative ones. Such a shift is called a logical shift. In python the shift is equivalent to division (rounding towards minus infinity) even for negative numbers, as python stores integers in the form of sign and magnitude, whereas most platforms use "two's complement". Some languages, such as C, do not define precisely how right shifts act on negative numbers: they might be logical (what we do not want), or they might be arithmetic (what we do want).
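
A quick demonstration at the python prompt:

print((-5)>>1)     # -3: python's shift rounds towards minus infinity
print(-5//2)       # -3: and agrees with python's floor division
print(int(-5/2))   # -2: truncation, as C's integer division would give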