A 2D Boltzmann Gas

Gas is all around us, and many of its properties can be modelled by assuming that it is made up of small, hard spheres which undergo elastic collisions with each other.

Here a further simplification is made: space is considered to be 2D, not 3D.

Suppose one starts with a 2D gas made up of randomly positioned discs, all with the same initial speed, but with random initial directions of motion. How does this system evolve?

As collisions randomise the motion of the discs, the distribution of their speeds should broaden into a Maxwell-Boltzmann distribution.

In 2D this distribution is:

  f(v) = (mv/kT) exp(-mv²/2kT)

where kT (Boltzmann's constant multiplied by the temperature) is just the average kinetic energy of a disc.
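As a quick sanity check of this formula (a standalone sketch, not part of gas.py), a few lines of NumPy can confirm that it is normalised and that the average kinetic energy it predicts really is kT. The value kT = 2 is chosen to match the simulation's units, in which the mass is one and each disc starts with kinetic energy two.

import numpy as np

m, kT = 1.0, 2.0   # units chosen to match the simulation: unit mass, mean KE of 2

def mb2d(v):
    # theoretical 2D speed distribution: (m v / kT) exp(-m v^2 / 2kT)
    return (m * v / kT) * np.exp(-m * v**2 / (2 * kT))

v = np.linspace(0.0, 20.0, 200001)   # 20 is effectively infinity here
dv = v[1] - v[0]
f = mb2d(v)

print("normalisation:      ", (f * dv).sum())                  # close to 1
print("mean kinetic energy:", (0.5 * m * v**2 * f * dv).sum()) # close to kT = 2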

The code used to simulate this system is a little longer than most of the examples on this website, at just over 200 lines. So rather than being listed here, it is simply provided as

gas.py

As provided, the simulation runs for ever. Closing the window displaying the molecules will stop it.

The above script displays not only the moving discs but also a histogram of their speeds, which it updates slightly less frequently. Superimposed on the histogram is the theoretical distribution.
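The sketch below (my own illustration, not an extract from gas.py) shows the sort of comparison the script draws: in equilibrium the velocity components are Gaussian with variance kT/m, so speeds sampled that way should fall on the curve above.

import numpy as np
import matplotlib.pyplot as plt

m, kT, N = 1.0, 2.0, 500
rng = np.random.default_rng(1)

# equilibrium velocity components are Gaussian with variance kT/m
vx = rng.normal(0.0, np.sqrt(kT / m), N)
vy = rng.normal(0.0, np.sqrt(kT / m), N)
speeds = np.hypot(vx, vy)

v = np.linspace(0.0, speeds.max(), 200)
theory = (m * v / kT) * np.exp(-m * v**2 / (2 * kT))

plt.hist(speeds, bins=31, density=True, alpha=0.6, label="sampled speeds")
plt.plot(v, theory, label="2D Maxwell-Boltzmann")
plt.xlabel("speed")
plt.ylabel("probability density")
plt.legend()
plt.show()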

[Images: Boltzmann simulation and speed histogram at t = 0, 1, 2, 3, 5 and 10]

Testing

How quickly should we believe these two hundred lines of Python? Well, the first version I wrote certainly had bugs...

Firstly, the code keeps track of the total energy in the system. Each disc starts with a speed of two, and so a kinetic energy, ½mv², also of two in these units, taking the mass to be one. The code prints the current total energy with the histogram, and it is reassuring that these energy-conserving collisions do not cause it to change. (Note that total momentum is not conserved unless account is taken of the collisions with the sides of the container.)
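The energy bookkeeping amounts to a single sum over the discs. A minimal sketch, assuming unit mass and the vx/vy attributes visible in the test snippet below, might read:

def total_energy(molecules, mass=1.0):
    # sum of (1/2) m (vx^2 + vy^2) over every disc
    return sum(0.5 * mass * (mol.vx**2 + mol.vy**2) for mol in molecules)

# printed alongside the histogram; with 500 discs started at speed 2
# it should remain at 1000 throughout the run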

Secondly, the code contains a test section. The section initialising the discs starts:

if (False):
    m=Molecule()
    m.x=10
    m.y=10
    m.vx=1
    m.vy=1
    molecules.append(m)
    velocities.append(sqrt(2))

[...]

else:

Change the False to True and the system is reduced to one of just three discs, which is readily solvable without a computer. Look at the computer simulation. Is it correct? It seems to start in the correct periodic state, but watch the simulation until the simulated time reaches 400 (about a minute on a Pi 4).

Is this a problem? Not really. The correct solution to these initial conditions is that the system oscillates forever. However, any slight deviation from that oscillatory state will be magnified. The computer's arithmetic is rarely exact, and this example contains motion at 45° to the axes, so |v| cannot be stored exactly if vx and vy are, as well as collisions in which the line of contact is at 45° to the direction of motion. Very small errors occur and, after a few cycles, they become very evident.
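A two-line demonstration of the rounding involved (again, not part of gas.py): a disc moving at 45° with vx = vy = 1 has speed √2, which has no exact binary representation.

import math

vx = vy = 1.0
speed = math.hypot(vx, vy)
print(speed)        # 1.4142135623730951, the nearest double to sqrt(2)
print(speed ** 2)   # 2.0000000000000004, not exactly 2.0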

But when considering the behaviour of a gas, the precise positions of individual molecules are not relevant (or known). What matters is the average behaviour of large numbers of molecules. This simulation conserves energy well, and it reaches the expected speed distribution. For almost all purposes it will be good enough.

Thermalisation

Thermalisation is the word used to describe a speed distribution moving towards its equilibrium state. Only the (elastic) collisions between the particles cause the distribution to change, so low-density gases take longer to thermalise than high-density ones, and slow ones take longer than fast ones. The above program uses 500 molecules, set in the lines reading:

else:
    for i in range(500):
        m=Molecule()
        cp2=0
        # Avoid overlapping molecules
        [...]

Change the 500 to 100, and change the final lines of the code to

    if (abs(t-round(t))<0.5*dt):
        update_stats(molecules,t)
        input("pause")
    t+=dt

so that it waits for the Enter key to be pressed every time t increases by one. Note that whereas with 500 particles thermalisation was mostly complete after five time units, it now takes about 25 time units. Even then it looks less convincing, as a histogram of a hundred particles placed into 31 bins would not be expected to be as smooth as one of five hundred particles.
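A back-of-the-envelope illustration of that last point (my own numbers, not taken from gas.py): the relative scatter in a bin that holds n particles on average is roughly 1/√n.

for total in (100, 500):
    per_bin = total / 31   # average occupancy of one of the 31 histogram bins
    print(total, "particles:", round(per_bin, 1), "per bin, roughly",
          round(100 / per_bin**0.5), "% relative fluctuation")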

A Cautionary Tale

My first attempt to write the code used here followed the plan below:

  1. Set a dt for screen updates, and set the remaining time to this.
  2. For each disc, calculate the time to collide with the boundaries, and with every other disc. Remember the soonest for each disc.
  3. Find the soonest collision time for any disc.
  4. Move all the discs a timestep of the smaller of the soonest collision time, and the remaining time.
  5. Reduce the remaining time by the timestep in the previous point.
  6. If a collision occurred, update the velocities of the one or two colliding discs.
  7. If remaining time is not zero, go to point 2.
  8. Update the screen.
  9. Go to point 1.

This is correct, but it gets a little slow. Point 2 involves calculating collision times for all possible pairs of discs, so, for N discs, N(N-1) times need to be calculated (or half that number as the collision between discs i and j is identical to that between j and i). This is fine if N is small, but not if N is 500.
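The per-pair calculation in point 2 is the standard hard-disc one: the pair touch when their separation equals two radii, which gives a quadratic in time. A sketch, assuming each disc carries x, y, vx and vy attributes and that all discs share one radius (gas.py's own routine may differ in detail):

import math

def time_to_pair_collision(a, b, radius):
    # relative position and velocity of disc b as seen from disc a
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    # solve |dr + dv*t| = 2*radius, i.e. A t^2 + 2 B t + C = 0
    A = dvx**2 + dvy**2
    B = dx * dvx + dy * dvy            # negative only if the discs approach
    C = dx**2 + dy**2 - (2 * radius)**2
    if B >= 0 or A == 0:
        return math.inf                # moving apart, or no relative motion
    discriminant = B**2 - A * C
    if discriminant < 0:
        return math.inf                # closest approach is still a miss
    return (-B - math.sqrt(discriminant)) / A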

There is an obvious improvement. If a disc suffers a collision, and so its velocity changes, it is necessary to recalculate its future collisions. Indeed, as it remembered only its soonest collision, and it has just had that, there is no record of its next one. But other discs far away are not affected. And, if a timestep ends simply because the screen needs updating and no collision has occurred, there is no need to recalculate collision times at all. So we can have:

  1. For each disc, calculate the time to collide with the boundaries, and with every other disc. Remember the soonest for each disc.
  2. Set a dt for screen updates, and set the remaining time to this.
  3. Find the soonest collision time for any disc.
  4. Move all the discs a timestep of the smaller of the soonest collision time, and the remaining time. Reduce the time to next collision for each disc by the timestep.
  5. Reduce the remaining time by the timestep in the previous point.
  6. If a collision occurred, update the velocities of the one or two colliding discs and recalculate their times to their next collision.
  7. If remaining time is not zero, go to point 3.
  8. Update the screen.
  9. Go to point 2.

Now at each step we recalculate collision times for at most two discs against all the others, so 2(N-1) collision times rather than ½N(N-1), which is much faster for large N. It is faster at small N too, for then it is likely that no collision occurs between screen updates, so no collision times need recalculating at all.

But this is an optimisation too far. If disc j suffers a collision, we must recalculate the next collision time not only for j, but also for any disc whose next collision was due to be with j. On average this merely doubles the number of collision times to recalculate, and 4(N-1) is still much better than ½N(N-1). The wrong version of the code still conserves energy and produces the correct speed distribution; it just also produces occasional 'collisions' between discs which are nowhere near each other.
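In code, the repair is a small piece of bookkeeping. A sketch with assumed names (gas.py organises this differently): after discs i and j collide, refresh the records not only of i and j, but of every disc whose remembered next collision was with i or j.

def refresh_after_collision(i, j, next_time, next_partner, recalc):
    # next_time[k] / next_partner[k] record disc k's soonest collision;
    # recalc(k) recomputes them against the walls and every other disc
    stale = {i, j}
    stale.update(k for k, p in enumerate(next_partner) if p in (i, j))
    for k in stale:
        next_time[k], next_partner[k] = recalc(k)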