For the DivergencePlot, we show cluster sizes as a function of the ENC, the Expected Number of Connections.

To line up two graphs (say, squares vs. circles), we would ideally minimize the least-squares difference in height over the region where they overlap. But I don't see how to do that efficiently. So I will use the following hack: consider only the graph for the largest cluster, and when comparing two such graphs, consider only the parts that overlap. Now join them into one single graph. If the graphs were not lined up, the new graph will be very jaggy, bouncing between the values of one graph and the values of the other. If the graphs were lined up, the new graph will be much smoother. So the hack is to look at the sum of the squares of the differences between consecutive points in the new graph; where this has a minimum, the graphs can be considered lined up.
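A minimal sketch of this jaggedness measure, assuming each graph is a list of (x, y) points; the function and variable names here are mine, not from the actual code:

```python
import numpy as np

def jaggedness(graph_a, graph_b):
    """Merge the overlapping parts of two (x, y) graphs and return the
    sum of squared differences between consecutive y values.
    A small value means the graphs line up well."""
    a = np.asarray(graph_a, dtype=float)
    b = np.asarray(graph_b, dtype=float)
    # Keep only the x-range where the two graphs overlap.
    lo = max(a[:, 0].min(), b[:, 0].min())
    hi = min(a[:, 0].max(), b[:, 0].max())
    a = a[(a[:, 0] >= lo) & (a[:, 0] <= hi)]
    b = b[(b[:, 0] >= lo) & (b[:, 0] <= hi)]
    # Join into a single graph, sorted by x.
    joined = np.vstack([a, b])
    joined = joined[np.argsort(joined[:, 0])]
    # Sum of squared differences between consecutive heights.
    return float(np.sum(np.diff(joined[:, 1]) ** 2))
```

To find the scaling factor, one would evaluate this after scaling one graph's x-axis by each candidate factor and look for the minimum.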

Each experiment produced two graphs.

The first graph plots the size of the two largest clusters against each other. The x-axis is the size of the largest cluster and the y-axis is the size of the second largest cluster.

The second graph plots the sizes of the two largest clusters as a function of the Expected Number of Connections that a given point has.

That illustrates the process we will use in trying to match up two graphs. The fitness function as described in the "Code" section above is shown in black in the upper graph over a wide range of possible scaling factors. The dip in the center corresponds to the local minimum we are searching for. The dips on the sides correspond to the overlap area becoming very small, which also causes our error estimate to be reduced.

To find the local minimum, we approximate the curve by a quadratic. In the picture above, the quadratic shown in red is totally meaningless, because we included too much data far away from the local minimum. If we were to limit the quadratic-fit data to the data near the minimum (which we can pick by eye), then the red quadratic fit would be good, and its center, shown as a blue vertical line, would be a good estimate of the right scaling factor. In this case, the red fit is bogus, and the vertical blue line winds up to the right of the minimum, yielding a too-large scaling factor, and we see that in the second graph the blue divergence graph has indeed had its x-axis scaled by a too-large factor.
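The vertex estimate can be sketched as follows, assuming the fitness has already been sampled at a set of candidate scaling factors and restricted to a window near the minimum (names are illustrative):

```python
import numpy as np

def quadratic_minimum(scales, fitness):
    """Fit a parabola to (scale, fitness) samples near the minimum and
    return the scale at the parabola's vertex."""
    a, b, c = np.polyfit(scales, fitness, 2)
    # Vertex of a*x^2 + b*x + c is at x = -b / (2a); only meaningful
    # when a > 0, i.e. the fit actually opens upward.
    if a <= 0:
        raise ValueError("fit is not convex; adjust the window")
    return -b / (2 * a)
```

As the text notes, the result is only as good as the window: feeding in data far from the minimum makes the fit, and hence the vertex, meaningless.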

In the pictures below, just the bottom section of the noisy local minimum is shown. Each raw data set is partitioned into 3 independent smaller data sets, and all 3×3 possible comparisons between them are made. The nine comparison graphs are shown together in nine colors, with nine quadratic fits and nine blue vertical lines marking the minima of the nine parabolas. The second graph shows how the data sets line up when each of the nine best-fit estimates is used, verifying that the graphs actually do line up when scaled by the "optimal" factors.
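The 3×3 scheme can be sketched as below, where `estimate_scale` stands in for the per-pair alignment estimate; it is a hypothetical callable, not a function from the original code:

```python
import numpy as np

def nine_estimates(raw_a, raw_b, estimate_scale):
    """Split each raw data set into 3 independent subsets and run the
    pairwise estimate on all 3x3 combinations, giving 9 estimates
    whose spread indicates how noisy the result is."""
    parts_a = np.array_split(np.asarray(raw_a), 3)
    parts_b = np.array_split(np.asarray(raw_b), 3)
    return [estimate_scale(pa, pb) for pa in parts_a for pb in parts_b]
```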

I don't understand why the colored lines are so far below the black lines in the last couple of cases. In the last case the colored lines look to be about 1.5% too far to the right! Maybe there is a bias because of the huge factor by which the colored lines have been expanded. In the upper right, each point in the colored line contributes a huge cost, because the two graphs do not line up regardless of the factor. That encourages the factor to be slightly larger, so that fewer of the colored line's points fall into the overlap region. Maybe this could be fixed by normalizing by the number of points in the colored line.

This error in the estimation means that these shapes are estimated to percolate more easily than the experiment indicates.

So in the graph below, the last couple of black dots are very slightly lower than they should be! The last one is too low by about its own height.

There is another problem: due to the largeness of the shape and the finiteness of the experiment, we might expect the experiment to indicate that percolation has not happened yet when in fact much better connectivity could be provided by points beyond the bounds of the experiment area. It may mean that the *experimental* limit at the right of the graph below is *greater* than one, or maybe the graph even shoots back up! Well, at 0.99 it experimentally looks to be about 1.25±0.05, eyeballing the raw data for a99.

That shows, for each radius ratio from .1 to .9, 3×3 black dots (overlapping, indistinguishable) for the 3×3 estimates of each percolation threshold found above. The fact that the 3×3 dots just look like one big single dot in each case is encouraging that the results are not too noisy.

The red dot at the lower right represents the fact that an infinitely thin, infinitely far annulus of area one needs an ENC of 1 to percolate.

The red dot at the upper left represents the known percolation threshold for circles. The blue around it shows the sort of error the points all have.

The red dot at 0.155 is actually 18 dots from two analyses of an experiment on squares. It indicates that squares, which have a CNP of around 4.42, are about halfway between the .1 and .2 shapes of annuli. The value 0.155 was picked by eye.

The blue dot at 0.06 is 9 dots from an experiment on hexagons. Hexagons are almost the same as circles.

The orange segments are each actually 81 segments connecting each of the 9 dots forming a given black dot to each of the 9 dots in the next black dot.

The cyan curve is a quadratic approximation to the black dots. It misses the red dots at the ends, so a quadratic approximation is not a good predictive model here.

Two shapes can overlap but not be bonded. They will not be bonded if there is white space between them, like where the yellow shapes touch the red shapes on the bottom. The antennas are weak in the needed direction. Being strong in a different direction is irrelevant. Two shapes are bonded only if they both reach the point halfway between their centers.
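For annuli, the midpoint criterion reduces to a simple radial check. A sketch under the assumption that both annuli have the same outer radius R and inner radius r:

```python
from math import hypot

def bonded(c1, c2, r, R):
    """Two equal annuli are bonded iff each covers the midpoint of the
    segment joining their centers. The midpoint is at distance d/2
    from each center, so the condition is r <= d/2 <= R."""
    d = hypot(c2[0] - c1[0], c2[1] - c1[1])
    return r <= d / 2 <= R
```

Note that overlapping annuli with d/2 < r are not bonded: the midpoint falls inside both holes, matching the "white space between them" case described above.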

As 2 annuli get closer, there are 4 kinds of overlap.

First, they just overlap like discs would.

Second, they overlap like discs, but the diamond is nibbled on each side by the inner circles.

The third kind depends on the shape:

Fat annuli have a diamond overlap minus two separate inner discs.

Thin annuli have 2 disjoint regions of overlap.

Fourth, they overlap in an almost annular shape.

The transitions are:

1 <-> 2: d = R + r

2 <-> 3a: d = R - r

2 <-> 3b: d = r + r

3a <-> 4: d = r + r

3b <-> 4: d = R - r
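These transitions can be put into a classifier (R is the outer radius, r the inner, d the center distance). I read "fat" as R − r > 2r, i.e. r < R/3; that is my inference from the ordering of the transitions, not something stated explicitly:

```python
def overlap_kind(d, R, r):
    """Classify the overlap of two equal annuli (outer radius R, inner
    radius r) whose centers are distance d apart, following the listed
    transition distances. Fat annuli (R - r > 2r) pass through kind
    3a; thin ones through 3b."""
    assert 0 < r < R
    if d >= 2 * R:
        return "none"  # outer circles do not meet
    if d >= R + r:
        return "1"     # overlap like discs
    if d >= max(R - r, 2 * r):
        return "2"     # disc-like diamond, nibbled by the inner circles
    if d >= min(R - r, 2 * r):
        return "3a" if R - r > 2 * r else "3b"
    return "4"         # almost annular overlap
```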

Hmm, since squares percolate between .1 and .2 annuli, their index should be between the .1 and .2 index, or between .57 and .52. Their overlap index is 9/16, or .5625. That would match them to about a .132 radius ratio annulus, instead of the .155 indicated by the percolation threshold graph.

For hexagons, the overlap area is an irregular but equiangular hexagon, so it tiles the plane, and we can find its area by finding its period. Its period is a hexagonal lattice with edges ±(B+C), ±(A+B-D), ±(C+D-A), where A=(-x,s-y), B=(0,s-y), C=(s-x,0), D=(s-x,-y). Here (x,y) is the center of the second hexagon, in axes aligned with the radial spokes forming the triangle (of side s) of the original hexagon that the second hexagon's center lies in.
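A sketch of this computation, under two assumptions of mine: the spoke-aligned axes are 60° apart, so a coordinate cross product must be scaled by sin 60° to give a Cartesian area; and since (B+C) = (A+B-D) + (C+D-A), the latter two edges form a basis of the lattice, whose cell area equals the overlap area:

```python
import numpy as np

SIN60 = np.sqrt(3) / 2  # axes along two adjacent spokes are 60 deg apart

def hex_overlap_area(x, y, s):
    """Overlap area of two aligned hexagons of side s, the second one's
    center at (x, y) in oblique spoke-aligned coordinates, following
    the period-lattice construction described above."""
    A = np.array([-x, s - y])
    B = np.array([0.0, s - y])
    C = np.array([s - x, 0.0])
    D = np.array([s - x, -y])
    # (B+C) = (A+B-D) + (C+D-A), so these two span the period lattice.
    e1 = A + B - D
    e2 = C + D - A
    cross = e1[0] * e2[1] - e1[1] * e2[0]
    return abs(cross) * SIN60
```

As a sanity check, at (x, y) = (0, 0) the hexagons coincide and the formula returns the full hexagon area, 3√3/2 · s².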

{1:label, 2:color, 3:average overlap, 4:max radius, 5:shape drawing code, 6:percolation threshold list}
