Simulating slime mold with WebGL

Ken Voskuil (mail@kaesve.nl)

A short while ago I attended the Revision demo party, a giant LAN party for graphics programmers and other creative nerds. These events are a great source of inspiration and motivation for me, and I always come away with a lot of energy to work on something new. This time I decided I wanted to write a ‘slime mold’ simulation in WebGL, based on Sage Jenson’s physarum project. Their work is based on a paper that models real organisms, but I was mostly interested in the beautiful visual qualities of their results. The animations they tweet consistently take my breath away. Besides, this project also seemed like the perfect challenge to push my knowledge of using WebGL for simulations.

This page summarises how I approached this project, some of the problems I encountered, and how I solved them. For this article, I’ll assume that you have a background in WebGL or OpenGL, and ideally in using it for simulations as well. However, if you don’t, I plan to write a more in-depth article in the future that explains these fundamentals – so keep an eye out (and bug me on twitter to actually do it).

Before I explain my solution, I recommend looking at Sage’s project page, which includes an excellent diagram explaining the algorithm on a conceptual level. In short:

We simulate \(N\) agents, each having an \(x\) and \(y\) coordinate and a heading. We also simulate a ‘trail map’: a grid of \(m \times m\) cells, each containing a value that represents the density of the agent trail at that location. The grid spans the area in which the agents move, and we simulate diffusion and decay of the trail left by the agents on this grid. The simulation algorithm can be summarised as follows:

For each agent:

  1. Use the trail map to determine if the agent should turn left or right, or go straight ahead
  2. Update the heading accordingly
  3. Update the coordinates, moving the agent in its heading direction with a fixed speed
  4. Increase the value in the trail map at the new location of the agent

For each grid cell:

  1. Set its value to a weighted average between it and its neighbors, to disperse the trail density
  2. Multiply by the decay constant
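
Putting these two loops together, one step of the simulation might look roughly like this in plain Javascript (a sketch; senseTrail, deposit and weightedAverage are hypothetical helpers):

for (const agent of agents) {
  // senseTrail compares the left, forward and right samples
  // and returns -1, 0 or +1
  agent.heading += senseTrail(trailMap, agent) * ROTATION;
  agent.x += SPEED * Math.cos(agent.heading);
  agent.y += SPEED * Math.sin(agent.heading);
  deposit(trailMap, agent.x, agent.y);
}

for (const cell of trailMap.cells) {
  // disperse the trail, then let it decay
  cell.nextValue = DECAY_RATE * weightedAverage(cell, cell.neighbors);
}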

Implementation summary

For my WebGL implementation, I encode the state of the simulation in 4 textures: x coordinates, y coordinates, headings and densities. I create two of each, to ping-pong between. I chose to make all the textures the same size. This means that the number of agents is directly tied to the trail map size, and that on average there’s one agent per grid cell. The upside of this constraint is that I can do the diffusion, decay and agent movement in the same step.
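
Creating one ping/pong pair of state textures can look something like this (a sketch; the helper name and the random initialization are mine, not the project’s exact code):

function createStatePair(gl, size) {
  return [0, 1].map(() => {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);

    // Initialize the texture with random bytes
    const data = new Uint8Array(size * size * 4);
    for (let i = 0; i < data.length; i++) data[i] = Math.random() * 256;
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, size, size, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, data);

    // Wrap around the edges, and don't interpolate between pixels
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    return tex;
  });
}

const [pingX, pongX] = createStatePair(gl, SIZE);
// ... and the same for y, heading and density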

Most of the code for this project is WebGL configuration and setup: creating the state textures and initializing them with random data, compiling the programs, and creating a frame buffer. I also create two array buffers: one containing a single screen-covering triangle, and one containing a UV coordinate for each agent, which the deposit program uses later.

I split the code up into three WebGL programs: one to update the agents and the trail map, one to deposit the agents onto the trail map, and one to render the result to the screen.

I gratefully use Javascript’s fairly new template string syntax to include the shader code directly in the rest of my code, and to reuse snippets between shaders. The render program is straightforward, just displaying the trail map. I outline how I use the other two programs below:
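
Before that, a quick illustration of the template string trick (a sketch; the snippet contents here are placeholders for the real code shown further down):

const packing = `
  vec4 pack( const in float v ) { /* ... see further down ... */ }
  float unpack( const in vec4 v ) { /* ... see further down ... */ }
`;

const updateFragmentSource = `#version 300 es
precision highp float;
${packing}
// ... the simulation logic shown below ...
`;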

Program 1: Update agents and trail map

  1. Bind the offscreen frame buffer
  2. Bind the ping textures to texture units 0-3, to read from
  3. Bind the pong textures to color attachments 0-3, to write to
  4. Load the update program
  5. Set blend function to overwrite
  6. Draw the screen-covering triangle. This runs a fragment shader once for each pixel in the frame buffer (the whole pass is sketched below)
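
In Javascript, these steps boil down to something like the following sketch (the buffer, texture and program names are mine):

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

// Read from the ping textures...
[ping.x, ping.y, ping.heading, ping.density].forEach((tex, i) => {
  gl.activeTexture(gl.TEXTURE0 + i);
  gl.bindTexture(gl.TEXTURE_2D, tex);
});

// ...and write to the pong textures
[pong.x, pong.y, pong.heading, pong.density].forEach((tex, i) => {
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
                          gl.TEXTURE_2D, tex, 0);
});
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1,
                gl.COLOR_ATTACHMENT2, gl.COLOR_ATTACHMENT3]);

gl.useProgram(updateProgram);
gl.disable(gl.BLEND); // overwrite the previous values, don't blend

// Draw the screen covering triangle
gl.bindBuffer(gl.ARRAY_BUFFER, triangleBuffer);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(0);
gl.drawArrays(gl.TRIANGLES, 0, 3);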

This fragment shader (minus some constants and setup) contains most of the simulation logic and looks like this:


uniform sampler2D        xTex;
uniform sampler2D        yTex;
uniform sampler2D  headingTex;
uniform sampler2D  densityTex;

layout(location = 0) out vec4       xBuf;
layout(location = 1) out vec4       yBuf;
layout(location = 2) out vec4 headingBuf;
layout(location = 3) out vec4 densityBuf;

void main() {
  // Convert from fragment coordinates to texture coordinates
  vec2 uv = gl_FragCoord.xy / SIM_DIM;

  //
  // Read current state from the heading and coordinate textures
  //
  float x = unpack(texture(xTex, uv));
  float y = unpack(texture(yTex, uv));
  vec2 p = vec2(x, y);

  float heading = unpack(texture(headingTex, uv))*TAU;

  //
  // Take samples in the trail map near the agents location
  //
  vec2  leftSampleP = p + SENSOR_DISTANCE*a2v(heading - SENSOR_ANGLE);
  float leftSample  = unpack(texture(densityTex, leftSampleP));

  vec2  forwardSampleP = p + SENSOR_DISTANCE*a2v(heading);
  float forwardSample  = unpack(texture(densityTex, forwardSampleP));

  vec2  rightSampleP = p + SENSOR_DISTANCE*a2v(heading + SENSOR_ANGLE);
  float rightSample  = unpack(texture(densityTex, rightSampleP));

  //
  // Update heading
  //
  if (leftSample < LOW_DENSITY
    && forwardSample < HIGH_DENSITY
    && HIGH_DENSITY <= rightSample) {
    heading += ROTATION;
  } else if (
    rightSample < LOW_DENSITY
    && forwardSample < HIGH_DENSITY
    && HIGH_DENSITY <= leftSample) {
    heading -= ROTATION;
  } else if (
    HIGH_DENSITY <= leftSample
    && forwardSample < HIGH_DENSITY
    && HIGH_DENSITY <= rightSample
  ) {
    // TODO: Use a more random value
    heading += sign(cos(11131.3*gl_FragCoord.x)*cos(7131.3*gl_FragCoord.y))*ROTATION;
  }


  //
  // Update location
  //
  p = wrap(p + SPEED*a2v(heading));


  //
  // Calculate diffused density at the current fragment
  //
  vec3 spacing = vec3(1.0/SIM_DIM, 0.0);
  float diffuse = (
    unpack(texture(densityTex, uv))
    
    + ( // the eight neighbor weights sum to one: 4*0.2 + 4*0.05 = 1
      0.2*(
        unpack(texture(densityTex, uv + spacing.zy)) + // north
        unpack(texture(densityTex, uv + spacing.xz)) + // east
        unpack(texture(densityTex, uv - spacing.zy)) + // south
        unpack(texture(densityTex, uv - spacing.xz))   // west
      )
      + 0.05*(
        unpack(texture(densityTex, uv + spacing.zy + spacing.xz)) + // ne
        unpack(texture(densityTex, uv - spacing.zy + spacing.xz)) + // se
        unpack(texture(densityTex, uv - spacing.zy - spacing.xz)) + // sw
        unpack(texture(densityTex, uv + spacing.zy - spacing.xz))   // nw
      )
    )
  )/2.0;


  //
  // Write to textures
  //
        xBuf = pack(p.x);
        yBuf = pack(p.y);
  headingBuf = pack(wrap(heading/TAU));
  densityBuf = pack(DECAY_RATE*diffuse);
}

Program 2: Deposit

  1. Load the deposit program
  2. Bind the pong \(x\) and \(y\) textures to texture units 0 and 1
  3. Bind the pong density texture to color attachment 3
  4. Set the blend equation to GL.FUNC_ADD
  5. Draw \(N\) GL.POINTs with the agent UV buffer. The vertex shader, which runs for each vertex in the buffer, reads the agent’s position from the \(x\) and \(y\) textures and outputs it as the point’s position
  6. The fragment shader outputs a constant value based on the deposit rate, which the blend equation adds to the current density value at the agent’s position
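
As with the update pass, the Javascript for this boils down to a handful of state changes and a draw call (again a sketch, with my own names):

gl.useProgram(depositProgram);

// The updated agent positions are in the pong x and y textures
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, pong.x);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, pong.y);

// Only write to the density texture, on color attachment 3
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT3,
                        gl.TEXTURE_2D, pong.density, 0);
gl.drawBuffers([gl.NONE, gl.NONE, gl.NONE, gl.COLOR_ATTACHMENT3]);

// Let WebGL add the shader output to the current density
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);
gl.blendFunc(gl.ONE, gl.ONE);

// One point per agent, with its UV coordinates as the only attribute
gl.bindBuffer(gl.ARRAY_BUFFER, agentUVBuffer);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(0);
gl.drawArrays(gl.POINTS, 0, N);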

Aside from working through the usual WebGL struggles, I wanted to highlight two of the problems that took me the most effort to solve.

Encoding and decoding the state

When you’re using WebGL for anything other than rendering triangle meshes, the first problem you want to tackle is how you encode program state. In this simulation, the state consists of the agent positions and headings, and the trail densities. Since we want our WebGL program to update these values, we need to encode the state into textures.

The first approach that comes to mind is using a single texture. Every pixel would store one agent and one trail density value. This seems to work out great, because a pixel (normally) consists of four channels: RGBA. We could store the three properties of the agent and the density each in a channel and be done! Unfortunately, there’s a small snag: these channels are stored as 8-bit numbers. This means that each channel can take on only 256 different values – maybe enough to encode the trail density and heading, but way too coarse to encode the position.

One solution that I tried to make work is using a different texture format. The documentation for texImage2D shows you can choose the data representation and format for textures. The option I wanted to use is RGBA32F, which stores each channel as a 32-bit float. Unfortunately, I was not able to get this to work. I’m not sure if this is a limitation of my laptop, if it is not possible to write to floating point textures using WebGL2, or if I was just missing steps or misconfiguring something. But after trying for an hour or two, I gave up.
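
As it turns out, WebGL2 only allows rendering to floating point formats when the EXT_color_buffer_float extension is enabled, so checking for that extension would be the first thing to try (a sketch):

if (gl.getExtension('EXT_color_buffer_float')) {
  // An RGBA32F texture can now be attached to a framebuffer
  // and rendered to
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, size, size, 0,
                gl.RGBA, gl.FLOAT, null);
}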

In the end, I chose to use separate textures instead of channels to store the state, one RGBA8 texture per property. We can use the same layout for each texture, i.e. the same pixel in each texture encodes a property of the same agent/cell. With WebGL2 (or using an extension in WebGL1) it is possible to write to several buffers or textures from your fragment shader, so this solution can still update all the properties in the same program. There is still one more hurdle to clear, though. One pixel in an RGBA8 texture can store 4*8 = 32 bits of data, which is more than enough precision. However, there is no convenient way to read a pixel as a single value. Instead, when we read a property from a texture in our fragment shader, we get a vec4 with a value between 0 and 1 for each channel; WebGL normalizes the channel values for us by dividing them by 255. To work around this, we can ‘unpack’ the vector into a single float after reading it, and ‘pack’ it again before we write it out. I happened to find exactly these operations while digging through the Three.js shader source code, so that’s what I used:

// Taken from https://github.com/mrdoob/three.js/blob/2ad018c7c7ad103c9da9d7f22b3154537384ab26/src/renderers/shaders/ShaderChunk/packing.glsl.js
// MIT License: https://github.com/mrdoob/three.js/blob/2ad018c7c7ad103c9da9d7f22b3154537384ab26/LICENSE
const float PackUpscale = 256. / 255.; // fraction -> 0..1 (including 1)
const float UnpackDownscale = 255. / 256.; // 0..1 -> fraction (excluding 1)
const vec3 PackFactors = vec3( 256. * 256. * 256., 256. * 256.,  256. );
const vec4 UnpackFactors = UnpackDownscale / vec4( PackFactors, 1. );
const float ShiftRight8 = 1. / 256.;

vec4 pack( const in float v ) {
    vec4 r = vec4( fract( v * PackFactors ), v );
    r.yzw -= r.xyz * ShiftRight8; // tidy overflow
    return r * PackUpscale;
}
float unpack( const in vec4 v ) {
    return dot( v, UnpackFactors );
}

Deposit

In my implementation, I split updating the agents into two programs: updating the location and heading, and depositing onto the trail map. If you were writing this in normal Javascript, this would seem like a weird and somewhat arbitrary separation, but it is necessary to implement the algorithm in WebGL. For the first part of the update, I run a fragment shader for all \(N\) pixels in the state textures. Say we are running the fragment shader for some pixel at \(p_{pixel}\). We read the agent position \(p_{agent}\), and can sample the trail map at that position. The fragment shader can then output the updated values for this agent, as outlined before. The problem is that there is no way to write to a different pixel from the fragment shader. This means that even though we can read the density at \(p_{agent}\), we can’t increase it – we need a different solution.

The deposit program uses a different render strategy. Instead of ‘drawing’ a giant screen-covering triangle and putting all the logic in the fragment shader, I draw a GL.POINT for each agent, passing the vertex shader the agent’s UV coordinates as an attribute (it might have been better to generate these on the fly, in the vertex shader itself). The vertex shader uses the UV to look up the agent’s position and outputs it, together with gl_PointSize = 1.0. This should result in running the fragment shader for exactly the one pixel where we want to increase the trail value.
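
A sketch of what that vertex shader could look like (the attribute and uniform names are mine, and packing is the pack/unpack snippet from the previous section):

const depositVertexSource = `#version 300 es
precision highp float;

uniform sampler2D xTex;
uniform sampler2D yTex;
in vec2 agentUV;

${packing}

void main() {
  float x = unpack(texture(xTex, agentUV));
  float y = unpack(texture(yTex, agentUV));

  // Map the agent's [0, 1] position to [-1, 1] clip space
  gl_Position = vec4(2.0*vec2(x, y) - 1.0, 0.0, 1.0);
  gl_PointSize = 1.0;
}`;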

I used another clever trick at this point, but while writing this article I realized it creates problems, and that I probably didn’t need it in the first place. I was trying to avoid reading from and writing to the same trail map texture. This step runs after the diffusion and decay, so I need to read from the resulting texture. The way I set up the code, this was the ‘output’ texture, and I wanted to also write the result of the deposit step to it. As a workaround, I came up with the idea of using blend modes instead of reading the current value and increasing it in the fragment shader. By setting the blend equation to GL.FUNC_ADD, the fragment shader can just output a constant value, which WebGL then adds to the current value for us.

This sounds fine, and even seems to work in practice, but because of the way I encode the values in the textures, the blend function will produce wrong results whenever a channel overflows. I could have solved this problem with better use of my ping pong textures, but a similar issue would still persist when an agent isn’t exactly at a pixel coordinate: WebGL bilinearly interpolates values when you read from or write to a position between fragments. This interpolation is per channel, which means it produces wrong results with our encoding scheme. Getting RGBA32F textures to work should make these problems disappear.

Results

In the end, I was able to write a (mostly) working simulation during my time at Revision, almost completely from scratch. I’d like to thank Sage Jenson again for their enlightening diagram, which made it possible to implement my own version so quickly.

My laptop can run the simulation at ~30fps, simulating and rendering 4M particles. There is definitely still room for improvement: fixing the bugs I mentioned, porting it to WebGL1 for wider support, decoupling the trail map size from the number of agents, and optimizing performance, to name a few options. Besides these technical improvements, I’m also inspired to modify the simulation rules to create different behaviors. I already tried controlling the simulation with bitmaps, leading to some cool results. Some other ideas I definitely have to try out are combining it with reaction-diffusion or fluid simulations, and adding user interaction. Besides Sage Jenson, @ojelibalon is also working on these kinds of animations and is definitely worth checking out.

If you enjoyed reading this, or have questions, suggestions or improvements, please let me know on twitter or through email.