Think of the cells as a red-and-blue checkerboard in your mind. On alternate
steps, update the red cells and the blue cells.
To update a cell, flip it if it has an odd number of neighbors of each color -- with four neighbors, an odd number of black neighbors forces an odd number of white ones, so this just means an odd number of on neighbors.
This rule is reversible.
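Here's a minimal sketch of the rule in Python -- the encoding is my own assumption (the two checkerboard colors are the parities of x + y, "black" means on, and a state is the set of its on cells), as are the names `half_step` and `NBRS`:

```python
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def half_step(on, parity):
    """Update every cell with (x + y) % 2 == parity: flip it if it has an
    odd number of on neighbors. Its neighbors all have the opposite
    parity, so this half-step reads them but never writes them."""
    new = set(on)
    # only neighbors of on cells can have an odd number of on neighbors
    candidates = {(x + dx, y + dy) for (x, y) in on for (dx, dy) in NBRS}
    for (x, y) in candidates:
        if (x + y) % 2 != parity:
            continue
        if sum((x + dx, y + dy) in on for (dx, dy) in NBRS) % 2 == 1:
            new ^= {(x, y)}  # flip this cell
    return new

start = {(0, 0), (1, 0), (2, 1), (3, 3), (5, 2)}
s = half_step(half_step(start, 0), 1)  # one full step: red cells, then blue
back = half_step(half_step(s, 1), 0)   # the same two updates, reverse order
assert back == start                   # the step has been undone
```

Reversibility falls out because each half-step only reads cells it never writes: applying the same half-step twice flips the same cells twice, so every half-step is its own inverse, and running them in reverse order undoes a step.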
I can't find a nice stable (robust) background.
Indeed, if we have a background of period 1, any disturbance spreads at light speed. So a robust background will have to be vibrating. But even so, the nature of the rule is that a slight disturbance will propagate. So I don't think we're going to be able to construct things very easily.
On the other hand, it behaves fractally -- so perhaps there's a way to predict it quickly?
Hey, "flip if odd" is the same as "add your neighbors to yourself" mod 2. So a site's value (if it just got updated) is the sum mod 2 of the five sites forming a + centered on it.
So this is an additive rule, meaning that the behavior from a given state is just the linear sum mod 2 of the independent behaviors from all the individual "1" cells.
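This additivity is easy to check numerically. In the sketch below, the representation is my assumption (a state is the set of its on cells, so mod-2 addition of states is symmetric difference), and one `step` updates the even-parity cells and then the odd-parity cells:

```python
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def half_step(on, parity):
    """Flip each cell with (x + y) % 2 == parity that has an odd number
    of on neighbors."""
    new = set(on)
    candidates = {(x + dx, y + dy) for (x, y) in on for (dx, dy) in NBRS}
    for (x, y) in candidates:
        if (x + y) % 2 != parity:
            continue
        if sum((x + dx, y + dy) in on for (dx, dy) in NBRS) % 2 == 1:
            new ^= {(x, y)}
    return new

def step(on):
    return half_step(half_step(on, 0), 1)

a = {(0, 0), (2, 1), (4, 4)}
b = {(0, 0), (1, 1), (5, 2)}
# evolving the mod-2 sum equals the mod-2 sum of the evolutions
assert step(a ^ b) == step(a) ^ step(b)
```

Since each half-step is linear over GF(2), so is any composition of them, which is exactly the claim that a state's behavior is the sum of its individual "1" cells' behaviors.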
Each of the five sites contributing to a cell's value (except the center) is itself the sum of a bunch of sites from the step before that (the center is just its old value since it didn't get updated then), so the contribution to the center cell from two steps ago is:
    1
  2 1 2
1 1 5 1 1
  2 1 2
    1

And since this is all mod 2, that's just a big plus shape.
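The diamond of weights can be rederived mechanically: the center contributes its own old value, and each of the four neighbors (updated on the in-between step) contributes its old value plus its own four neighbors'. A quick sketch:

```python
from collections import Counter

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# Integer (pre-mod-2) contributions of the state two steps ago to a
# just-updated cell at the origin.
w = Counter()
w[(0, 0)] += 1                      # the center's own old value (it wasn't
                                    # updated on the in-between step)
for dx, dy in NBRS:                 # each neighbor was updated last step,
    w[(dx, dy)] += 1                # so it is its own old value ...
    for ex, ey in NBRS:
        w[(dx + ex, dy + ey)] += 1  # ... plus its own four neighbors

assert w[(0, 0)] == 5               # 1 from itself + 4 echoes via neighbors
assert w[(1, 1)] == 2               # diagonal cells are reached two ways
assert w[(2, 0)] == w[(0, 2)] == 1
# mod 2, only the plus shape with arms of length 2 survives
plus = {(x, y) for (x, y), k in w.items() if k % 2 == 1}
assert plus == {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
                (2, 0), (-2, 0), (0, 2), (0, -2)}
```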
But this process of tracing contributors backwards is exactly the same as watching a single cell's contribution spread to its neighbors forwards in time. So the pattern of contributors (from t steps ago) to a cell's current value (if it just got updated) is exactly the pattern generated by starting with just that one cell on and evolving for t steps (the first step updating the cell's neighbors).
This means that to compute a cell t time steps in the future, we take the image of a single "1" after t time steps (which can be computed quite quickly thanks to the simple fractal behavior) and count up, mod 2, how many of the resulting "1" sites land on cells that are on in the current configuration.
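This prediction scheme can be sanity-checked by brute force. The conventions in this sketch are my assumptions: states are sets of on cells, and a full step updates the even-parity cells of x + y and then the odd-parity cells. We evolve a configuration t steps and compare each target cell against the overlap parity of the single-cell pattern with the starting configuration:

```python
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def half_step(on, parity):
    """Flip each cell with (x + y) % 2 == parity that has an odd number
    of on neighbors."""
    new = set(on)
    candidates = {(x + dx, y + dy) for (x, y) in on for (dx, dy) in NBRS}
    for (x, y) in candidates:
        if (x + y) % 2 != parity:
            continue
        if sum((x + dx, y + dy) in on for (dx, dy) in NBRS) % 2 == 1:
            new ^= {(x, y)}
    return new

def evolve(on, t):
    """t full steps: even-parity cells first, then odd-parity cells."""
    for _ in range(t):
        on = half_step(half_step(on, 0), 1)
    return on

config = {(0, 0), (1, 0), (3, 2), (4, 4), (2, 5)}
for c in [(0, 0), (1, 2), (3, 3)]:
    for t in range(1, 6):
        direct = c in evolve(config, t)             # simulate forwards
        pattern = evolve({c}, t)                    # image of a single "1"
        predicted = len(pattern & config) % 2 == 1  # overlap parity
        assert predicted == direct
```

The two-half-step operator turns out to be a symmetric matrix over GF(2), which is why "who contributes to c" and "whom c contributes to" give the same pattern for every cell.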
. * . * . * . * .
* A * B * A * B *
. * . * . * . * .
* B * A * B * A *
. * . * . * . * .
* A * B * A * B *
. * . * . * . * .
* B * A * B * A *
. * . * . * . * .

Suppose that in the diagram, all the "." cells are off (white), and all the "*" cells are the same color as the neighboring A cell. If it's now time to update the "*" cells, then they'll only change color if the neighboring A and B cells are different. This means that in any case, whether or not they flip, they'll wind up the same color as the neighboring B cell.
On the next step, the A, B, and "." cells get updated. Since every "*" is the same color as the neighboring B, the four "*" neighbors of a "." will always have an even number of each color, so the "." cells won't change. Similarly, the B cells won't change. And finally, an A cell will change iff an odd number of its four B neighbors are set.
So now we're back to the original situation, but with the roles of A and B reversed. So after these two steps, the A's were updated by flipping them if an odd number of neighboring B's were on, and the B's stayed the same. And now, the "*" cells are all the same color as the nearest B, so after the next two steps, the B's will get updated.
See what's happening? The rule is simulating itself! If we consider only the A and B cells, and do two steps at a time, then it looks like the rule itself is being executed on the AB subgrid!
But how often are the weird conditions on the "." and "*" cells met? Well, they are met in the case of just a single cell being black!
So if we start at generation 0 with a single black cell that's just been updated, we can take generation n and blow it up by a factor of 2 to get the A and B cells of generations 2n and 2n+1. The only difference between those two generations is whether the "*" cells match the A cells or the B cells: if the center cell is a B cell, the "*" cells match the B cells on steps that are 1 or 2 mod 4.
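The blow-up claim can be checked by direct simulation. The conventions below are my assumptions: the single on cell sits at (0,0), generations count half-steps, and half-step k updates the cells with x + y of parity k mod 2, so the update touching the origin's own color happened at step 0 ("just been updated"). Generation 2n, restricted to the subgrid of even coordinates, should reproduce generation n at half scale:

```python
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def half_step(on, parity):
    """Flip each cell with (x + y) % 2 == parity that has an odd number
    of on neighbors."""
    new = set(on)
    candidates = {(x + dx, y + dy) for (x, y) in on for (dx, dy) in NBRS}
    for (x, y) in candidates:
        if (x + y) % 2 != parity:
            continue
        if sum((x + dx, y + dy) in on for (dx, dy) in NBRS) % 2 == 1:
            new ^= {(x, y)}
    return new

def generation(n):
    """Pattern n half-steps after a single on cell at (0,0); half-step k
    updates parity k % 2, so the first update hits the origin's neighbors."""
    s = {(0, 0)}
    for k in range(1, n + 1):
        s = half_step(s, k % 2)
    return s

for n in range(1, 7):
    big = generation(2 * n)
    sub = {(x // 2, y // 2) for (x, y) in big
           if x % 2 == 0 and y % 2 == 0}
    assert sub == generation(n)  # the AB subgrid replays the rule at half scale
```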
So, starting from a single cell on at (0,0) at time steps -1 and 0, if we want to find the value of a cell at (x,y) at time step t, we can operate on the low-order bits of (x,y,t) to quickly find that value.
So, given any configuration, to predict a certain cell t time steps in the future, we can have a bunch of parallel processors (one per cell) each figure out whether its cell will affect the result (this takes O(log t) time), and then sum mod 2 all the cells that do affect the result (this again takes at most O(log t) time, depending on the architecture). So we can figure out the final answer in O(log t) time.