Title: Dynamic bucket size
Post by: zephyrtronium on October 28, 2011, 12:08:56 PM

I had an idea to reduce the memory used by the histogram accumulated when rendering most IFS fractals. I've only really used Apophysis, so I don't know much about how other implementations handle buckets, but instead of allocating every bucket at the same size across the entire histogram, one can divide the buffer into a series of blocks that each contain, say, 16-bit buckets, and then grow an entire block to 32-bit (or larger) buckets once any bucket in it overflows. This should carry hopefully-not-too-much speed overhead while reducing memory usage by a very significant amount, particularly for spacious systems. The block division also lends itself to per-block locks for concurrent access to the histogram. With luck, the output of the system will be dense enough that a block grown because of one overflowing bucket would soon have been grown anyway by one of a number of other near-full buckets in the same block. Some other ideas that go along with this:
This system would likely make noticeable sacrifices in speed, especially if the color-split idea is used and the separate blocks are stored in different pages, so an advanced user should have the option of choosing it over a single bucket size across the histogram. The program might also perform a test render to estimate whether this model is worthwhile (or, in the case of the color-split idea, simply analyze the gradient). Just a thought I had; dunno whether it's already been dreamed up.
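To make the idea concrete, here is a minimal sketch in C of one such block: it starts with 16-bit buckets and promotes the whole block to 32-bit buckets the first time any bucket saturates. Everything here (the `Block` struct, `BLOCK_BUCKETS`, the function names) is hypothetical illustration, not code from any actual renderer.

```c
#include <stdint.h>
#include <stdlib.h>

#define BLOCK_BUCKETS 4096  /* buckets per block (assumed size) */

typedef struct {
    int wide;       /* 0: 16-bit buckets, 1: 32-bit buckets */
    void *buckets;  /* points to uint16_t[] or uint32_t[] */
} Block;

static void block_init(Block *b) {
    b->wide = 0;
    b->buckets = calloc(BLOCK_BUCKETS, sizeof(uint16_t));
}

/* Grow the whole block from 16-bit to 32-bit buckets,
   copying the existing counts across. */
static void block_promote(Block *b) {
    uint16_t *old = b->buckets;
    uint32_t *grown = malloc(BLOCK_BUCKETS * sizeof(uint32_t));
    for (size_t i = 0; i < BLOCK_BUCKETS; i++)
        grown[i] = old[i];
    free(old);
    b->buckets = grown;
    b->wide = 1;
}

/* Increment bucket i, promoting the block on overflow.
   A per-block lock would wrap this for concurrent access. */
static void block_hit(Block *b, size_t i) {
    if (!b->wide) {
        uint16_t *narrow = b->buckets;
        if (narrow[i] < UINT16_MAX) {
            narrow[i]++;
            return;
        }
        block_promote(b);  /* saturated: grow, then fall through */
    }
    ((uint32_t *)b->buckets)[i]++;
}
```

Until the first overflow each block costs half the memory of a 32-bit histogram, and the promotion copy happens at most once per block, which is where the hoped-for density argument comes in: one promotion serves all the near-full buckets in that block.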