# Series Stability

Consider the following two series:

### Series A

```
  x_{n+1} = 4 x_n (1 - x_n)
```

### Series B

```
  x_{n+1} = 2 x_n (1 - x_n)
```

If we start each series with x_1 = 0.52, what is the 52nd term?

This sounds like a fairly straightforward exercise.

seriesA.py

```
#!/usr/bin/python3

x = 0.52

for i in range(1, 53):
    print(i, x)
    x = 4*x*(1-x)
```

seriesB.py

```
#!/usr/bin/python3

x = 0.52

for i in range(1, 53):
    print(i, x)
    x = 2*x*(1-x)
```

Having made the above files executable with

```
  chmod +x seriesA.py seriesB.py
```

we can run them as "`./seriesA.py`" and "`./seriesB.py`". The output should be something like:

Series A

```
1 0.52
2 0.9984
3 0.0063897600000001826
4 0.025395723868570322
5 0.09900312431104659
6 0.35680602275079215
[...]
50 0.34260639033414503
51 0.9009090065454101
52 0.3570878738830894
```

Series B

```
1 0.52
2 0.4992
3 0.49999872
4 0.4999999999967232
5 0.5
6 0.5
[...]
50 0.5
51 0.5
52 0.5
```

Job done?

## We have a problem

The correct answer for series B is indeed 0.5 (or, arguably, 0.49999..., but the number of 9's doubles on each iteration, so it is really quite close to 0.5).

The correct answer for series A is approximately 0.0001, and certainly nowhere near 0.357.
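To see just how touchy series A is, here is a small experiment of our own (not part of the original exercise): perturb the starting value by 1e-15, roughly the inherent rounding error of a double, and track how far the two runs drift apart.

```python
#!/usr/bin/python3
# Perturb the starting value of series A by 1e-15 and watch the two
# trajectories separate: the tiny difference roughly doubles on every
# iteration until it is of order one.
x = 0.52
y = 0.52 + 1e-15
maxdiff = 0.0

for i in range(1, 53):
    maxdiff = max(maxdiff, abs(x - y))
    x = 4*x*(1-x)
    y = 4*y*(1-y)

print('largest separation over 52 terms:', maxdiff)
```

By the time we reach the 52nd term the two runs bear no relation to each other, which is exactly why rounding error destroys the double precision answer.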

```
#!/usr/bin/python3
import decimal

decimal.getcontext().prec = 40
x = decimal.Decimal(52) / decimal.Decimal(100)

for i in range(1, 53):
    print(i, x)
    x = 4*x*(1-x)
```

The above routine uses Python's standard high-precision arithmetic module, `decimal`. The line:

```
decimal.getcontext().prec = 40
```

sets the number of decimal digits of precision to use, and the line

```
x = decimal.Decimal(52) / decimal.Decimal(100)
```

carefully sets `x` to 0.52 to that precision. One should not fall into the trap of trying

```
x = decimal.Decimal(0.52)
```

for that evaluates 0.52 as a standard double precision number, and then converts that to a high-precision 40-decimal-digit number. The problem is that a standard double precision number is stored as a binary fraction, and whilst 0.52 can be stored exactly with just two decimal digits, as a binary fraction it recurs forever, just as one seventh would if expressed as a decimal number (or, indeed, a binary one). The small integers 52 and 100 can be stored exactly using a small number of binary digits: 110100₂ and 1100100₂. The binary representation of 0.52 is

```
0.1000 0101 0001 1110 1011 1000 0101 0001 1110 1011 1000 ...
```

where the twenty-bit block 1000 0101 0001 1110 1011 repeats forever.
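Both claims can be checked directly. `Decimal(0.52)` exposes the exact value the double actually stores, and a few lines of long division (repeated doubling of the exact fraction 52/100) generate the recurring binary digits:

```python
#!/usr/bin/python3
from decimal import Decimal
from fractions import Fraction

# Passing the float 0.52 to Decimal captures the exact binary fraction
# that the double actually stores -- a long string of digits, not 0.52.
print(Decimal(0.52))                      # not the same as Decimal('0.52')
print(Decimal(0.52) == Decimal('0.52'))   # False

# Generate the binary digits of 52/100 by repeated doubling: at each
# step the integer part of 2x is the next binary digit.
x = Fraction(52, 100)
bits = ''
for _ in range(40):
    x *= 2
    bits += str(int(x))
    x -= int(x)
print('0.' + bits)   # the block 10000101000111101011 repeats
```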

How do we know that our new code is correct? We can run it for different values of precision, and observe how the final term changes. Precision vs 52nd term are tabulated below:

```
prec  52nd term
  14  0.24979965732246
  15  0.00008050219746
  16  0.00006594684318
  17  0.00006629687479
  18  0.00008699579715
  20  0.00008543817532
  25  0.00008542316204
  30  0.00008542316220
  35  0.00008542316220
```
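The table above can be generated in one go with a small harness of our own, re-running the whole calculation from scratch at each precision:

```python
#!/usr/bin/python3
import decimal

# Recompute series A from scratch at each working precision and print
# the 52nd term, reproducing the convergence table above.
for prec in (14, 15, 16, 17, 18, 20, 25, 30, 35):
    decimal.getcontext().prec = prec
    x = decimal.Decimal(52) / decimal.Decimal(100)
    for i in range(1, 52):    # 51 updates take x_1 to x_52
        x = 4*x*(1-x)
    print(prec, x)
```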

And note that had we (incorrectly) used `x=decimal.Decimal(0.52)` we would conclude that the answer is about 0.000947690, wrong by more than a factor of ten!

## A Warning

In general computers are marvellous for maths. But, just sometimes, they can be dangerously wrong. To me it is not immediately obvious that the two series presented here are so very different: both are simple quadratic expressions. Of course one can analyse their stability, but who would bother?

## Further Thoughts

Thirty decimal digits of precision were sufficient to reach the 52nd term of this series. What if we had wanted the hundredth, or the thousandth, term?
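A rough estimate (our own rule of thumb, assuming a small error in series A roughly doubles on each iteration): every further term costs about log10(2) ≈ 0.3 decimal digits of precision, so the precision required grows linearly with the term number.

```python
#!/usr/bin/python3
import math

# Rule-of-thumb estimate: if an error roughly doubles per iteration,
# reaching term n with good_digits digits still correct needs about
# (n - 1) * log10(2) + good_digits digits of working precision.
def digits_needed(n, good_digits=10):
    return math.ceil((n - 1) * math.log10(2)) + good_digits

for n in (52, 100, 1000):
    print(n, digits_needed(n), 'digits')
```

For the 52nd term this predicts about 26 digits, broadly consistent with the table above.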

Standard double precision arithmetic is fast. How much slower is this special high-precision arithmetic? Try removing the print statement from the loop, and just printing `i` and `x` after the final iteration. Increase the loop count to a million, and try timing the original double precision code, the high precision code with a precision of 40 decimal places, and the high precision code with a precision of 100 decimal places.
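One can also time the loops from within Python itself, using the standard `time` module (a sketch of our own; the iteration count here is modest so it finishes quickly):

```python
#!/usr/bin/python3
import decimal
import time

def run_float(n):
    # n iterations of series A in ordinary double precision
    x = 0.52
    for _ in range(n):
        x = 4*x*(1-x)
    return x

def run_decimal(n, prec):
    # n iterations of series A using the decimal module at prec digits
    decimal.getcontext().prec = prec
    x = decimal.Decimal(52) / decimal.Decimal(100)
    for _ in range(n):
        x = 4*x*(1-x)
    return x

n = 100_000    # raise to 1_000_000 for a longer benchmark
for label, fn in (('double', lambda: run_float(n)),
                  ('decimal, 40 digits', lambda: run_decimal(n, 40)),
                  ('decimal, 100 digits', lambda: run_decimal(n, 100))):
    t0 = time.perf_counter()
    fn()
    print(label, time.perf_counter() - t0, 'seconds')
```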

It may be useful to have the Pi time itself, by using commands such as:

```
  $ time ./seriesA.py
```