The voxel stack is awesome; unfortunately it requires a specially written MC (marching cubes) algorithm that can detect the true border in every cube, it handles a lot of memory, it has to discard pure black/white cubes, and I don't think it would produce an optimal mesh anyway...
If I blur all the voxel stack images, the black/white boundary becomes grey, and then I could send all the grey pixels as Vector3s into a point cloud array.
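A minimal sketch of that idea, assuming each layer of the voxel stack arrives as a 2D 0/1 NumPy array (the function name and the box-blur radius are my own choices, not anything from Mandelbulb3D):

```python
import numpy as np

def slice_border_points(slice_bw, z, blur=1):
    # slice_bw: 2D 0/1 array (one voxel-stack layer), z: layer index.
    # Box-blur the binary image; any pixel that ends up strictly
    # between 0 and 1 is "grey", i.e. it sits on the B/W border.
    padded = np.pad(slice_bw.astype(float), blur, mode="edge")
    k = 2 * blur + 1
    # simple box blur by summing shifted windows (no SciPy needed)
    blurred = sum(
        padded[dy:dy + slice_bw.shape[0], dx:dx + slice_bw.shape[1]]
        for dy in range(k) for dx in range(k)
    ) / (k * k)
    ys, xs = np.where((blurred > 0.0) & (blurred < 1.0))
    # one (x, y, z) point per grey pixel
    return np.column_stack([xs, ys, np.full(xs.shape, z)])
```

Running this over every layer and concatenating the results gives the full point cloud; the pure-black and pure-white interiors contribute nothing, which is exactly the memory win.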
MeshLab handles point clouds pretty well, and there are probably import formats for Poisson disk cloud samples, which MeshLab can turn into a mesh nicely...
The advantage is that a point cloud takes roughly a thousandth of the memory of the PNG images as a file format, and the mesh samples MeshLab uses (Poisson mesh sampling, for example) are just arrays of vertices that could perhaps be loaded straight into MeshLab with very good results.
Perhaps what that means is exporting vertex sample maps and/or point clouds from Mandelbulb3D, very low on memory. All you would have to do is change the voxel stack algorithm a tiny bit so that it charts a vertex every time a pixel changes from black to white. If vertex-array-style point clouds were exported from the program, it would perhaps be possible to infer their 3D normals, and there is probably a vertex+normal sample format that can be imported into MeshLab.
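The "chart a vertex at every black-to-white flip" step could look like this, assuming the whole stack is a 3D 0/1 NumPy array; for brevity this sketch only scans along x, and a real version would repeat it along y and z (function name mine):

```python
import numpy as np

def transition_vertices(volume):
    # volume: 3D 0/1 array indexed (z, y, x). Emit a vertex wherever
    # a voxel flips between black and white along x, plus an outward
    # normal inferred from the flip direction.
    flips = np.diff(volume.astype(np.int8), axis=2)  # -1, 0, or +1
    zs, ys, xs = np.nonzero(flips)
    verts, normals = [], []
    for z, y, x in zip(zs, ys, xs):
        # place the vertex on the face between voxel x and x+1
        verts.append((x + 0.5, y, z))
        # +1 flip means entering the solid, so the outward normal
        # points in -x; a -1 flip (leaving the solid) points in +x
        normals.append((-float(flips[z, y, x]), 0.0, 0.0))
    return np.array(verts), np.array(normals)
```

This is exactly the "tiny format with huge data" situation: only the border voxels produce output, and the normal comes for free from the direction of the flip.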
I don't know if it's a brilliant idea, but it's one to keep tabs on. MeshLab is crashing on my AMD desktop PC, so I have to run it on the i7 laptop before I can say more about it.
I figure that sample/vertex files can be saved as STL/OBJ without the triangle array, and perhaps MeshLab can import that; as for dedicated point cloud formats, I don't know.
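For OBJ at least this works: a file containing only `v` (and optionally `vn`) lines with no `f` faces is still valid OBJ, and MeshLab loads it as a point cloud. A sketch of such a writer (function name is mine):

```python
def save_obj_points(path, verts, normals=None):
    # Write an OBJ containing only vertices (and optional normals),
    # no faces -- MeshLab imports this as a point cloud.
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        if normals is not None:
            for nx, ny, nz in normals:
                f.write(f"vn {nx} {ny} {nz}\n")
```

STL, by contrast, has no vertex-only record type (it is a list of triangles), so OBJ is the more promising of the two for this trick.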
I can write a program that samples the voxel stack along the border and generates a nice mesh from it; it's probably super easy. Converting to vertex arrays with normals is trivial: it's just the border pixels, and they can be saved to a tiny format that carries a lot of data. Then it would be possible to apply linear simplification at every level, perhaps dividing the number of vertices by 100 while keeping the same actual spatial information. Either way, it would be relatively easy to code, it could mean direct OBJ export from M3D, and an octree search of the vertices for every layer would make it faster than the current speed too. I'll see if I can figure out a workaround.
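The "divide the vertex count by 100" step doesn't need anything fancy to prototype: a uniform grid decimation (keep one representative point per cell) already gives a large reduction while preserving the spatial information, and it's a crude stand-in for the octree idea. A sketch, with the cell size as a hypothetical tuning knob:

```python
import numpy as np

def grid_decimate(points, cell=4.0):
    # points: (N, 3) float array. Snap every point to a coarse grid
    # and keep the first point that lands in each cell -- a simple
    # stand-in for octree-based simplification.
    keys = np.floor(points / cell).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

Larger `cell` values trade detail for fewer vertices; an octree would do the same thing adaptively, keeping the per-layer search fast as well.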
Currently searching for tomography-to-point-cloud programs...
found a squirrel.