Simulated ecosystems

3 Dec 2018

Closed ecosystems

The type of ecosystem I want to simulate is a closed ecosystem. This means nothing enters or leaves the system from the outside world. In this way, ecosystems can never lose all their contents and die as a result, and organisms inside it cannot cheat by giving themselves something that didn't exist in the ecosystem before.

Figure 1: The closed ecosystem in my windowsill.

As an experiment, I created such a system myself a while ago. Figure 1 shows the system in its current state. It's a simple mason jar filled with some gravel, water and plants from a nearby lake. At first, the water looked dirty and green and the plants didn't look too healthy. After a while though, the water became clear and the plants started to root in the gravel. It was then that I could also see the animals inhabiting the system: a small number of snails were crawling along the glass, and hundreds of tiny crustaceans were swimming around, feeding on the plants.

Because the jar has an airtight seal, no matter ever enters or leaves the system after it has been closed. Sunlight is the only exception. The plants and algae use the sunlight as an energy source and produce biomass and oxygen as a result. The animals consume the plants and the oxygen, and their droppings and bodies will eventually be a food source for the plants again. As long as energy is provided to the system through sunlight, this cycle could theoretically last forever, and my small ecosystem has indeed been stable for a long time now.

Making a model

After observing both the theory of a closed system and the practice of the jar in my window, I have defined a list of requirements for the simulation in which I want to capture the system:

  • The system must have a constant mass.
  • The system exists in a limited space.
  • Mass is distributed over two categories:
    1. Living mass, consisting of plants and animals.
    2. Dead mass, consisting of organism remains and droppings.

This list is a bit too brief to base a digital simulation on, so I've also made some choices regarding the implementation:

  • Mass will be a whole number, and every organism has mass. I use whole numbers to avoid violating the first requirement through rounding errors.
  • The simulation takes place on a hexagonal grid. I favor this shape over squares for the following reasons:
    1. All neighbors are at an equal distance from a hexagon's center.
    2. Hexagons have six neighbors instead of eight. This saves some performance.
  • Every tile has a certain amount of dead mass (I call it fertilizer), and at most one agent (a plant or animal).
  • Agents can consume both fertilizer and other agents. However, agents can only eat other agents if they are at least twice as heavy as their prey.
  • Agents can perform a fixed number of actions, and every action reduces an agent's mass by an amount that is proportional to its mass. The reduced mass is then dropped as fertilizer.
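The hexagonal grid mentioned above can be implemented with axial coordinates, in which every tile has exactly six neighbors at fixed offsets. The sketch below is illustrative only; the coordinate convention and names are my own assumptions, not taken from the actual implementation.

```python
# Hypothetical sketch of hexagonal-grid neighbors using axial coordinates.

# The six axial direction offsets of a hexagonal grid.
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(q, r):
    """Return the axial coordinates of the six tiles around (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]

print(len(neighbors(0, 0)))  # every tile has exactly six neighbors
```

Compared to a square grid with eight neighbors, this saves two neighbor checks per tile per update.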

The last two points are the most important when it comes to implementing the agents. Because the set of possible actions is fixed, there is no room to cheat: every organism can perform the same actions, nothing more and nothing less. Every action has an energy cost, which is determined globally and is proportional to the agent's mass. Moving to a neighboring tile, for example, could have a cost of 5%. This means that an agent with a weight of 100 grams would lose 5 grams by performing this action, dropping those 5 grams as fertilizer on the tiles around it. A bigger organism of 1000 grams would lose 50 grams however, so bigger organisms need to eat more as well. On the other hand, an organism needs to be significantly heavier than its prey to be able to eat it.
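The proportional cost with whole-number mass could look like the sketch below. The function name and percentage convention are my own; the point is that integer arithmetic keeps the total mass constant, because whatever is subtracted from the agent is dropped as fertilizer.

```python
# Illustrative sketch of the proportional action cost with whole-number
# mass; names are invented, not taken from the actual implementation.

def apply_cost(mass, cost_percent):
    """Subtract an action's cost from an agent's mass.

    Returns (remaining_mass, dropped_fertilizer). Because the cost is
    computed with integer arithmetic, no mass is ever created or lost:
    the two return values always sum to the original mass.
    """
    cost = mass * cost_percent // 100
    return mass - cost, cost

# A 5% move costs a 100 gram agent 5 grams and a 1000 gram agent 50 grams.
print(apply_cost(100, 5))   # (95, 5)
print(apply_cost(1000, 5))  # (950, 50)
```

Note that with floating-point mass, repeated rounding could slowly create or destroy mass, which would violate the constant-mass requirement.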

Action          Cost   Description
Idle            1%     Do nothing. If any of the subsequent actions fails, this action will be performed instead.
Move            10%    Move to a neighboring tile.
Eat fertilizer  0%     The agent eats a certain amount of fertilizer from the ground.
Eat agent       0%     The agent eats a neighboring agent. The neighbor's mass is added to the agent's.
Copy            10%    Copy the agent to a neighboring tile. The agent keeps half of its mass, and the new agent gets the other half.
Die             100%   The agent dies.

Figure 2: All possible actions agents may perform.


Figure 2 lists all possible actions agents can perform, together with their mass costs. Agents should be able to show some intelligence in their actions, and to make informed decisions, they must be able to sense their surroundings. For this, a context is provided. The context contains the following information:

  • All neighboring agents.
  • The amount of fertilizer in the tile the agent is on.
  • The accessibility of the tiles around the agent.

A single function is responsible for an agent's behaviour. This function takes a context as an argument, and returns an action. If this action turns out to be impossible (for example if an agent tries to move to a blocked tile), the idle action will be performed instead. Because of this, every agent will perform some action whenever the simulation is updated.
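A minimal sketch of this update rule, with invented names, could look like this:

```python
# Hypothetical update step illustrating the idle fallback; the real
# implementation checks actions against the simulation grid, which is
# omitted here and replaced by an is_possible predicate.

IDLE = "idle"

def update_agent(agent, context, is_possible):
    """Ask the agent for an action; fall back to idle if it is invalid."""
    action = agent.act(context)  # the single behaviour function
    if not is_possible(action):
        action = IDLE            # e.g. moving onto a blocked tile
    return action

class Stubborn:
    """A toy agent that always tries to move."""
    def act(self, context):
        return "move"

# If moving is blocked, the agent idles instead of doing nothing at all.
print(update_agent(Stubborn(), None, lambda action: action == IDLE))  # idle
```

Because the fallback is built into the update step rather than the agents, no behaviour function can stall the simulation with an invalid action.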

Finally, every agent must have a minimum mass. If an agent's mass falls below that minimum, the agent dies. You might think choosing the lowest possible minimum mass would be cheating here. It turns out this is not the case, but more on that later.

Plants and rabbits

I have implemented the simulation described above. To demonstrate it, I have also implemented two rather simple agents.

The plant consumes fertilizer from the ground at a slow rate, or idles when not enough fertilizer is available. When a certain mass threshold is reached, the plant tries to copy itself to a free neighboring tile.

The rabbit looks in a certain direction and eats a plant if it sees one. When a plant is eaten, the rabbit continues to move in that direction hoping to find more plants. When no plant is found, the rabbit moves around randomly. If the rabbit's mass gets very low, the rabbit tends to idle to conserve energy.
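The plant's behaviour can be sketched as a single function from a context to an action, as described above. The thresholds and the action encoding below are illustrative guesses, not the actual parameters:

```python
# Sketch of the plant's behaviour; all constants and names are invented.

COPY_THRESHOLD = 60  # mass at which a plant tries to reproduce
EAT_AMOUNT = 4       # fertilizer consumed per step

def plant_act(mass, fertilizer, free_neighbors):
    """Return the plant's next action given a minimal context."""
    if mass >= COPY_THRESHOLD and free_neighbors:
        return ("copy", free_neighbors[0])    # reproduce into a free tile
    if fertilizer >= EAT_AMOUNT:
        return ("eat_fertilizer", EAT_AMOUNT) # grow slowly
    return ("idle",)                          # not enough fertilizer

print(plant_act(100, 10, [(1, 0)]))  # ('copy', (1, 0))
print(plant_act(10, 10, []))         # ('eat_fertilizer', 4)
print(plant_act(10, 0, []))          # ('idle',)
```

The rabbit's behaviour function would have the same signature but more branches: chase in the direction of a seen plant, wander randomly otherwise, and idle when mass runs low.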

The more mass a tile contains, the darker its color will be. When no plant or rabbit is on a tile, the color of the tile shows the amount of fertilizer it holds.

While the simulation runs, a graph is generated below it. This graph shows what makes up the biomass of the entire system. Most of it is gray: this is the dead biomass, or fertilizer. The rest consists of the organisms in the legend, plants and rabbits in this case. Usually, the course of the simulation follows a predictable pattern:

  • The plants grow quickly, because there are only a few rabbits eating them.
  • Because there is more food, the population of rabbits grows too.
  • The number of plants decreases because the rabbits eat them.
  • The number of rabbits decreases because there are fewer plants.

This process repeats infinitely, or until an unfortunate twist of fate causes the rabbits to go extinct. After initialization, the situation is very random and unstable. At first, big shifts in populations occur often. After a few waves, the simulation seems to become more stable. This is similar to the early behaviour of my ecosystem in a jar: the water quality and animal populations showed a few peaks and valleys before finally stabilising.

I had to adjust the parameters quite a bit to make a balanced ecosystem where multiple species continuously exist. Firstly, the plants should multiply much faster and be less massive than rabbits to allow the rabbits to eat enough of them. Secondly, the rabbits shouldn't reproduce too quickly, and they should die when there is not enough plant material to feed them. In fact, all agents should die as soon as their chances of survival become too small to make sure their mass is assimilated into the ecosystem again, because others need it to survive.

I have published the code for this simulation on GitHub under the MIT license.


The simulation above is quite similar to the jar it's based on when we look at the ingredients: there are plants and herbivores. In bigger, more realistic ecosystems, there are usually also predators. While I have learnt this is not sustainable in my jar (diving beetles, dragonfly larvae and amphipods make for bad roommates), I have modelled a fox to hunt down the rabbits in the digital simulation below.

The foxes need to be at least twice as heavy as the rabbits to be able to eat them. They also need to be a fair bit smarter than their prey to be able to catch them. Additionally, they will eat rabbits less often than rabbits eat plants, because the number of rabbits in the simulation is low compared to the number of plants. This means foxes need to spend their hard-earned mass sparingly. I made a fox with the following behaviour:

  • Foxes are omnivores, so they will eat plants when hungry.
  • Meat is a part of their diet, so they only reproduce after eating meat. More precisely, plants merely keep a hungry fox alive; a fox can only reach the mass at which it reproduces by eating rabbits.
  • Foxes are less active than their prey. Instead of wasting energy chasing rabbits, they idle more and eat a rabbit when they see one.
  • Because they are smarter, foxes don't move randomly; when they see food, they will always eat it (preferring meat), unlike rabbits, which may walk past a food source they happen not to be looking at.
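These priorities can be sketched as a decision list, checked from top to bottom. All constants and names below are invented for illustration; neighboring agents are represented as (position, mass) pairs.

```python
# Illustrative sketch of the fox's decision priorities; constants and
# names are my own, not the actual parameters of the simulation.

HUNGRY_MASS = 300      # below this the fox falls back on plants
REPRODUCE_MASS = 1000  # reachable only by eating rabbits

def fox_act(mass, rabbits, plants, free_neighbors):
    """Prefer meat, fall back to plants when hungry, otherwise conserve energy."""
    if mass >= REPRODUCE_MASS and free_neighbors:
        return ("copy", free_neighbors[0])
    if rabbits and mass >= 2 * rabbits[0][1]:
        return ("eat_agent", rabbits[0][0])   # twice-as-heavy rule
    if mass < HUNGRY_MASS and plants and mass >= 2 * plants[0][1]:
        return ("eat_agent", plants[0][0])    # plants only when hungry
    return ("idle",)                          # idle rather than chase

print(fox_act(500, [((1, 0), 200)], [], []))   # ('eat_agent', (1, 0))
print(fox_act(500, [((1, 0), 400)], [], []))   # ('idle',)
print(fox_act(100, [], [((0, 1), 20)], []))    # ('eat_agent', (0, 1))
```

Idling as the default, rather than random movement, is what makes the fox cheaper to run per step than the rabbit.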

The simulation with foxes was harder to balance, and it's a bit more prone to extinction events. Once balanced though, the coexistence of the species is more stable than it was in the previous simulation. The foxes prevent the rabbit population from "exploding", and more plants remain alive as a result. If the rabbits do manage to destroy most of the vegetation, the foxes become vulnerable because they can't sustain themselves on plants alone, while the rabbit population rapidly declines because of the deforestation it caused.


Implementing these simulations was an interesting exercise. It is noteworthy that the ratio of the organisms over time (especially in the first simulation with rabbits and plants) strongly resembles the Lotka-Volterra equations which are used to describe such biological systems. The simulations seem to be usable models for ecosystems.
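For reference, the Lotka-Volterra equations model a prey population x and a predator population y as:

```latex
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y,
\end{aligned}
```

where α is the prey's growth rate, β the predation rate, δ the predator's growth per prey eaten, and γ the predator's death rate. The oscillating populations these equations produce resemble the alternating waves of plants and rabbits in the graph.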

Currently, agents do not have a memory at all, so their awareness of their surroundings is very limited. Mapping recently visited tiles or being able to walk back to a previously visited spot could give rise to more successful strategies.

The agents have been "hard-coded" by me and do not change their behaviour over time. An interesting extension would be evolving or adapting behaviour. In the implementation, every agent has a copy function. This is currently more of a placeholder, but it can be used to let agents inherit behaviour from their parent. Agents could then slightly randomize their tactics, and the best tactics would become dominant through reproduction.