Title: zoomable Gigapixel Mandelbulb? Post by: cbuchner1 on November 27, 2009, 12:38:52 AM A distributed computing effort could result in a Gigapixel image depicting a Mandelbulb. Presented with an interface comparable to Google Maps (or similar), it would allow for panning and deep zooms. Any takers? Christian Title: Re: zoomable Gigapixel Mandelbulb? Post by: fractalwizz on November 27, 2009, 07:42:31 PM wait, u need people to help render this and create the interface?
In that case, i can help render. Got 8-core processor ready Title: Re: zoomable Gigapixel Mandelbulb? Post by: Snakehand on December 18, 2009, 07:27:08 PM <snip> I have 2+ teraflops of GPU compute power, I can serve them realtime once I get my ray tracer sorted out ;) Just make the web front end, and a squid cache for the images. I am writing this a little in jest, DE isn't all that easy on GPU. But seriously, I am surprised that people here are still using CPUs. It shouldn't be too hard to have programs such as Ultra Fractal generate the C-like kernel code and compile it with OpenCL - which seems to be a sure bet as an open cross-platform standard for GPU computing. Personally I am using Brook+, mostly because the ATI OpenCL drivers for Linux are very much in beta. Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 18, 2009, 09:01:33 PM <snip> How exactly would I use my ATI X600 ? Title: Re: zoomable Gigapixel Mandelbulb? Post by: Snakehand on December 19, 2009, 12:42:10 AM <snip>
How exactly would I use my ATI X600 ? Short answer: You can't use that particular card. Long answer: For over a decade I worked in the games industry, making leading-edge 3D games, but even there the whole GPGPU thing managed to sneak up on me unnoticed. Sure, there was a constant stream of buzzwords in the air: DX10, unified shaders, full floating-point precision pipeline etc. And this was all very fine if you wanted pixel shaders, bump mapping and to keep up with the competition in the graphical looks department. At some point it dawned on Nvidia that the high-end gaming graphics card had many of the desired characteristics of a high-performance computing system, and along came the ridiculously priced Tesla cards, aimed at the scientific community etc. With CUDA it also became possible to divert some of the computing power away from graphics and towards numerical work, such as physics. Today, with DX11 and Microsoft's take on CUDA / OpenCL (DirectCompute), almost any regular graphics card from Nvidia is CUDA / PhysX capable. Personally I only took notice when ATI announced the 5800 series cards with 1440 / 1600 individual stream cores on chip. I reasoned that whatever the capability of the individual core, the combined power of such massive parallelism would be mind-boggling.
Then I read the specifications and instruction set for the R700 cores http://developer.amd.com/gpu_assets/R700-Family_Instruction_Set_Architecture.pdf and discovered that unlike Nvidia's, ATI's cores had a complete integer instruction set as well as the regular diet of single-cycle trigonometric functions. I was sold, and simply had to get my hands on the first batch of 5850 cards, which I thought were the most reasonably priced. And pricing is an important point to note: http://en.wikipedia.org/wiki/FLOPS These ATI cards simply are the cheapest FLOPS money can buy at the moment. The fastest CPU at the moment (i7 965) retails for ~1000 USD, and delivers 70 GFLOPS (double precision), but only when you can get your compiler to produce efficient SSE code. But even if you are on a budget, the i7 can be handsomely beaten performance-wise. ATI 4600 series cards can be found for USD 60 on Amazon; they offer 320 stream processors giving roughly 220 GFLOPS (real, not the doubled figure obtained by only considering the multiply-and-add instruction) - and you don't need a new motherboard & DDR3 RAM to trounce the i7, which starts at USD 250 (CPU only). I think we are at a watershed right now when it comes to computing performance. The next big leap on the desktop won't come from Intel's research into 48-core CPUs, but more likely from such moves as AMD's plans to put 480 stream processors right on the silicon die together with the CPU. But more importantly, a gazillion flops won't do you any good if you can't use them. And I really do admire your fractal art, and should you decide to go the GPU way, I'll gladly give you the bits and pieces of code that I have written for the ATI GPUs. I am after all (like most people on the forum) a fellow explorer, and not out to make a quick buck on the fractal fad of the month ;) Now if I can only get your DE working properly...
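For readers wondering what "DE" (distance estimation) ray marching actually involves, here is a minimal CPU-side sketch in Python of the commonly used power-8 triplex Mandelbulb distance estimator plus a sphere-tracing loop. This is a generic illustration, not Snakehand's Brook+ code or anyone's production renderer; the bailout of 4, the 12-iteration cap, the step/epsilon limits and the test ray below are all arbitrary assumptions.

```python
import math

def mandelbulb_de(px, py, pz, power=8, max_iter=12, bailout=4.0):
    """Distance estimate for the power-8 'triplex' Mandelbulb, using the
    standard scalar running derivative: DE = 0.5 * log(r) * r / dr."""
    x, y, z = px, py, pz
    dr = 1.0   # running derivative estimate
    r = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout or r == 0.0:
            break
        # Spherical coordinates; apply the power map z -> z^8 + c.
        # Clamp z/r to [-1, 1] to dodge float rounding outside acos's domain.
        theta = math.acos(max(-1.0, min(1.0, z / r))) * power
        phi = math.atan2(y, x) * power
        dr = power * r ** (power - 1) * dr + 1.0
        rp = r ** power
        x = rp * math.sin(theta) * math.cos(phi) + px
        y = rp * math.sin(theta) * math.sin(phi) + py
        z = rp * math.cos(theta) + pz
    if r <= 1.0:
        return 0.0   # inside (or at the origin): no useful positive bound
    return 0.5 * math.log(r) * r / dr

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=256, eps=1e-3, t_far=10.0):
    """March a ray (unit direction assumed), stepping by the DE each time.
    Returns the hit distance, or None if the ray escapes."""
    t = 0.0
    for _ in range(max_steps):
        d = mandelbulb_de(ox + t * dx, oy + t * dy, oz + t * dz)
        if d < eps:
            return t
        t += d
        if t > t_far:
            return None
    return None
```

For example, a ray fired from (3, 0, 0) towards the origin lands on the bulb's surface at roughly t ≈ 2. The inner loop is pure straight-line float arithmetic, which is exactly why it maps so naturally onto GPU stream cores.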
Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 19, 2009, 01:11:52 AM <snip> Now if I can only get your DE working properly... :D I knew I couldn't do much with this card ! Is it worth the trouble of installing and coding for the X1900XT that I've been given, or is that just too out of date too ? (I need a new PSU to use it). As to my delta DE method, ask any questions you like, but you may find it helpful to look at my implementation for UF: http://www.fractalgallery.co.uk/MMFwip3D.zip It's easiest to view the file in UF, but since the UFMs are only text files in disguise you could use any text editor you like. The code you want is in the init: section - search for "init:" first. Then find the iteration loop, which is within the ray-step loop, and then check the code that uses the smooth iteration value from 2 step positions to get the DE value (if in delta DE mode you'll notice there are two distinctly different passes through the ray-step loop). There is code in there for both the analytical DE and the delta DE methods; there is also an extension to the delta DE method that checks for "missed" solid - basically this is triggered if at one main step position we are travelling "towards" solid and on the next we are travelling "away" from solid. When this happens a binary search is initiated to find the smallest DE value between the step positions, in case that passes the "solid" threshold test; if not, then we just resume stepping from the point where the "missed solid" test was triggered. This missed solid test may sound horrendous but (at least on the CPU/FPU) it's actually a very cheap way of reducing render errors due to over-stepping - roughly equivalent to the alternative of reducing *all* step sizes by 50% or so, which is pretty costly. Title: Re: zoomable Gigapixel Mandelbulb? Post by: Snakehand on December 19, 2009, 01:42:42 AM <snip> Now if I can only get your DE working properly...
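The "missed solid" check David Makin describes above can be sketched schematically. The Python below is a 1-D toy model, not his UF code: `field` stands in for whatever distance-like value is derived from the smooth iteration count along the ray, and the bracket is narrowed with a ternary-style variant of the binary search he mentions. Step size, threshold and refinement count are illustrative assumptions.

```python
def march_missed_solid(field, t_max, step, threshold, refine=40):
    """March along ray parameter t with fixed steps. If the field value was
    decreasing on one step ("towards" solid) and increasing on the next
    ("away"), a local minimum lies between the last two step positions;
    narrow that bracket to see whether the minimum dips below the solid
    threshold. If it doesn't, resume stepping from where we were."""
    ts = [0.0]
    ds = [field(0.0)]
    t = 0.0
    while t < t_max:
        t += step
        d = field(t)
        if d < threshold:
            return t                       # direct hit on a step position
        if len(ds) >= 2 and ds[-1] < ds[-2] and d > ds[-1]:
            # towards-then-away: search the bracket [t - 2*step, t]
            a, b = ts[-2], t
            for _ in range(refine):        # narrow in on the minimum
                m1 = a + (b - a) / 3.0
                m2 = b - (b - a) / 3.0
                if field(m1) < field(m2):
                    b = m2
                else:
                    a = m1
            tm = 0.5 * (a + b)
            if field(tm) < threshold:
                return tm                  # the fixed steps straddled solid
        ts.append(t)
        ds.append(d)
    return None                            # nothing solid before t_max
```

The point Makin makes holds in the sketch too: the extra search only runs when a towards-then-away pattern is detected, so it is far cheaper than halving every step size globally.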
:D I knew I couldn't do much with this card ! Is it worth the trouble of installing and coding for the X1900XT that I've been given, or is that just too out of date too ? (I need a new PSU to use it). Here is a list of ATI cards that you can consider: http://developer.amd.com/gpu/ATIStreamSDK/pages/ATIStreamSystemRequirements.aspx Thanks for more pointers on DE. Part of the problem of writing efficient GPU code is that programs run in "wavefronts" over multiple units. Branching and divergent execution paths then become a problem, since the wavefront shares a common instruction bus. All the units are fed the same instructions in lock-step fashion. On a CPU, an if-then-else statement that executes either A or B will take whatever time A or B takes to execute. On the GPU, however, the time will be A + B, since both paths must be stepped through. The individual processors suspend execution instead of "jumping over" the path not taken. When A and B are big loops this becomes even more of a problem, and restructuring may be needed. Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 19, 2009, 02:02:27 AM <snip>
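The lock-step cost model described above can be made concrete with a toy simulation. This is purely illustrative Python, not real GPU code: it counts issued instruction slots to show why a divergent if-then-else across a wavefront costs A + B rather than max(A, B).

```python
def wavefront_if_else(xs, cond, path_a, path_b):
    """Toy model of lock-step execution: one 'wavefront' of lanes (xs) is
    fed a single instruction stream. Both branch bodies are issued to every
    lane; lanes on the untaken path simply have their writes masked out.
    Returns (results, instruction_slots_issued)."""
    mask = [cond(x) for x in xs]
    out = list(xs)
    issued = 0
    # Path A is issued for the whole wavefront...
    for i, x in enumerate(xs):
        issued += 1
        if mask[i]:
            out[i] = path_a(x)
    # ...and then path B is issued for the whole wavefront as well.
    for i, x in enumerate(xs):
        issued += 1
        if not mask[i]:
            out[i] = path_b(x)
    return out, issued
```

However the predicate splits the lanes, `issued` is always 2 * len(xs): every lane pays for both paths, which is why big divergent loops get restructured (hoisted out, made branchless, or sorted so a wavefront takes the same path).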
I figured that may be a problem - I was wondering if the newer systems had a workaround though - obviously not :) Title: Re: zoomable Gigapixel Mandelbulb? Post by: cbuchner1 on December 19, 2009, 02:11:17 AM <snip> I think Pixel Bender should at least be able to work with this card. I think it can split up larger computation kernels into smaller shader kernels that it feeds to the GPU. Have you tried whether you get accelerated rendering with Pixel Bender? Christian Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 19, 2009, 02:15:37 AM <snip> I haven't the foggiest idea, I've never used Pixel Bender or anything like it - what software do I need to use the Pixel Bender Mandelbulb renderer ? Title: Re: zoomable Gigapixel Mandelbulb? Post by: cKleinhuis on December 19, 2009, 02:33:53 AM just download Pixel Bender
http://labs.adobe.com/technologies/pixelbender/ and load the subblue filter: http://www.subblue.com/blog/2009/12/13/mandelbulb and play around, you can directly enter new formulas :) i am also playing around with that script, who wants to do a nice mandelbulb gpu gui :D especially the camera movement should be more "interactive" ;D Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 19, 2009, 02:42:31 AM <snip> Thanks, I think I'll leave it for now - the X600 is not listed as a supported card; the X1900 is listed, so I may try Pixel Bender when I install that (if I ever get around to upgrading my PSU). I checked the supported cards for the ATI Stream SDK thingy - even the X1900XT is not listed :( Title: Re: zoomable Gigapixel Mandelbulb? Post by: Buddhi on December 19, 2009, 02:00:08 PM Pixel Bender looks very interesting, but for me it is a VERY VERY EXPENSIVE toy :(. Of course the Pixel Bender plug-in is free, but Photoshop CS4 costs around 1000 USD :scared:. The cost of a GFX card is negligible compared with the cost of this software.
I will stay with completely free Ubuntu Linux and Eclipse C++ ;D Title: Re: zoomable Gigapixel Mandelbulb? Post by: David Makin on December 19, 2009, 02:15:39 PM <snip> I got the impression you could use Pixel Bender without having Photoshop, is that not the case ? I also cannot afford *any* Adobe commercial software :( Title: Re: zoomable Gigapixel Mandelbulb? Post by: JosLeys on December 19, 2009, 05:10:37 PM I installed it on my Toshiba laptop, but Pixel Bender does not think it is smart enough.
When I open the Mandelbulb 'filter' it says: "This kernel is too complicated for your graphics card". How about that for a nice message? Title: Re: zoomable Gigapixel Mandelbulb? Post by: KRAFTWERK on June 10, 2010, 02:13:19 PM DONE! O0 http://www.fractalforums.com/mandelbulb-renderings/400-square-meters-of-mandelbulb-3-3-gigapixels/ Now, anyone for a TERApixel bulb? :) And BTW, to answer an old question in this thread: yes, you can run Pixel Bender scripts in the free Adobe application "Pixel Bender Toolkit". Download here: http://www.adobe.com/devnet/pixelbender/ WOW, I see now that the 2.0 version is there!!
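Coming back to the original request, a Google-Maps-style front end for a gigapixel (or terapixel) render is essentially a tile pyramid: level z holds the image downscaled by 2^(max-z), cut into fixed-size tiles. The sketch below shows the arithmetic such a viewer needs; the 65536 × 65536 image size and 256-px tiles are illustrative assumptions, not the dimensions of KRAFTWERK's render.

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of zoom levels needed so the deepest level holds the image
    at full resolution while level 0 fits in a single tile."""
    return max(math.ceil(math.log2(max(width, height) / tile)), 0) + 1

def tiles_at_level(width, height, level, max_level, tile=256):
    """Tile-grid size (cols, rows) at a given level; level == max_level
    is full resolution, each level up halves both dimensions."""
    scale = 2 ** (max_level - level)
    return (math.ceil(width / scale / tile),
            math.ceil(height / scale / tile))

def total_tiles(width, height, tile=256):
    """How many tiles the whole pyramid contains (what a render farm
    would have to produce, and a squid cache would have to hold)."""
    levels = pyramid_levels(width, height, tile)
    return sum(w * h
               for lvl in range(levels)
               for w, h in [tiles_at_level(width, height, lvl,
                                           levels - 1, tile)])
```

For a square 65536-px image this gives 9 levels and 87381 tiles in total; the geometric sum means the pyramid only costs about a third more storage than the full-resolution level alone.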