A sample array is progressive when the distribution of its samples stays roughly the same no matter where you start sampling and how many samples you take. Say you have a progressive array of 64 samples: you can expect the distribution of samples 2-9 to be roughly the same as that of samples 5-48.
A progressive sample array has many advantages, but the reason I programmed this in the first place is the following:
Progressive sample arrays are very useful for dithering, i.e. distributing sample locations spatially. When doing this, you can either shift the sample positions themselves or vary the magnitude of the offsets. The first option introduces unwanted blur, and shifting every sample position individually is expensive. The second option keeps the offset directions the same, which creates a visible pattern, especially at lower sample counts.
This is where progressive arrays come into play: instead of shifting the positions or scaling the offsets, you simply shift the starting index into the array. This allows for effective dithering with little performance impact and no visible artifacts. It does, however, require the array to be progressive: if it weren't, the shift in starting position would change the sample distribution, creating significant artifacts (and defeating the purpose of the dither).
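The per-pixel shift described above can be sketched as follows. This is an illustrative assumption, not the tool's actual code: the function name, the hash constants, and the wrap-around indexing are all hypothetical, chosen only to show how each pixel reads the same shared array from a different starting index.

```python
def samples_for_pixel(array, x, y, count):
    """Return `count` samples for pixel (x, y) by shifting the
    starting index into a shared progressive sample array.

    The multipliers 73 and 151 are arbitrary illustrative hash
    constants; any per-pixel scrambling of the start index works.
    """
    start = (x * 73 + y * 151) % len(array)
    # Wrap around the end of the array so every pixel still
    # consumes `count` consecutive samples.
    return [array[(start + i) % len(array)] for i in range(count)]
```

Because the array is progressive, each pixel's `count` consecutive samples have roughly the same distribution regardless of where `start` lands, so neighboring pixels are decorrelated without any blur or rescaling.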
So, with this tool you can convert any 2D sample array into a progressive one and enjoy the benefits! The tool does not change the contents of the array; it simply reorders the elements to maximize the array's progressiveness.
The algorithm I created is based on evaluating how progressive a given array is.
The evaluation function checks every contiguous slice the array could be cut into, calculating each slice's variance as well as its average.
It then calculates the variance of those averages and the variance of those variances across all slices. This tells the algorithm how similar the slices are in distribution: the more similar the slices, the more progressive the array.
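A minimal sketch of that evaluation, assuming 1D sample values for readability (the real tool works on 2D arrays, and `progressiveness` and `min_len` are names I made up for illustration):

```python
import statistics

def progressiveness(samples, min_len=2):
    """Score how progressive a sample array is.

    For every contiguous slice of length >= min_len we record its
    mean and variance, then measure how much those statistics vary
    across slices. The spread is negated, so a higher score means
    the slices look more alike, i.e. the array is more progressive.
    """
    means, variances = [], []
    n = len(samples)
    for start in range(n):
        for end in range(start + min_len, n + 1):
            window = samples[start:end]
            means.append(statistics.fmean(window))
            variances.append(statistics.pvariance(window))
    # Small spread of per-slice statistics = similar slices.
    return -(statistics.pvariance(means) + statistics.pvariance(variances))
```

For example, an alternating array like `[0, 1, 0, 1, ...]` scores higher than the sorted `[0, ..., 0, 1, ..., 1]`, since slices of the sorted array differ wildly in their averages.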
Using this score, the algorithm runs a Monte-Carlo simulation. Each iteration (step) swaps two random array elements and re-evaluates the progressiveness.
If the progressiveness has increased, we keep the new order; if not, we revert the swap and try again in the next step.
Why doesn't the algorithm just check every possible ordering?
The number of permutations of a sequence of length n is n! (n factorial). For an array of length 16 there are already 20,922,789,888,000 permutations, and no computer can evaluate all of them in a reasonable amount of time.
The complexity of the progressiveness evaluation is O(n³): there are O(n²) possible slices, and computing the statistics of each slice costs O(n), so the cost increases cubically.
An array of length 64 will take roughly 64 times longer to evaluate than an array of length 16, despite only being 4 times the size.
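Both scaling claims are easy to check with a couple of lines:

```python
import math

# Brute force would need n! orderings; for n = 16 that is already huge.
permutations_16 = math.factorial(16)
print(permutations_16)  # 20922789888000

# The evaluation is O(n^3): going from length 16 to length 64
# (4x the size) multiplies the cost by roughly 4^3 = 64.
cost_ratio = (64 / 16) ** 3
print(cost_ratio)  # 64.0
```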
The code will be shown here.