kram1032
« Reply #75 on: July 25, 2010, 11:52:31 AM »
If you use smooth iterations, you'd get a continuous spectrum... (Or am I wrong about that?) Also, if you use fairly high iteration counts, you'd get fairly smooth spectra as well... Too bad I have an ATI that came before all the OpenCL stuff... For CUDA, I'd need an NVIDIA, I guess?
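(For reference - this is the standard renormalization formula, not code from any program in this thread: the "smooth" iteration count blends the integer escape time with how far the orbit overshot the escape radius, which is what gives a continuous spectrum instead of discrete bands.)

    /* Standard "smooth" (continuous) iteration count; zr, zi hold the
     * orbit value at the moment of escape. Illustrative only. */
    #include <math.h>
    double smooth_iter(int n, double zr, double zi)
    {
        return (double)n + 1.0 - log(log(sqrt(zr*zr + zi*zi))) / log(2.0);
    }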
hobold
Fractal Bachius
Posts: 573
« Reply #76 on: July 25, 2010, 07:12:40 PM »
CUDA is limited to Nvidia hardware. The various CUDA versions track the evolution of the hardware. So the newest features require the latest GPUs.
OpenCL is newer, and specifically created as a cross-platform interface. As far as I am aware, AMD provides implementations for both their x86 CPUs and their (i.e. ATI's) newer GPUs. I believe there is an experimental implementation for IBM's Cell processor. And Nvidia supports it as well, even though it is theoretically a threat to CUDA.
Right now, CUDA is more feature-rich and more mature than OpenCL. But the industry at large doesn't much like lock-in to a single vendor. So my personal expectation is that OpenCL will quickly reach a point where it is reliable and available. But that is only one opinion.
Old GPUs are unlikely to retroactively gain support for CUDA and/or OpenCL. In most cases, the hardware simply doesn't have the capability.
kram1032
« Reply #77 on: July 25, 2010, 09:56:37 PM »
Yeah, I know. ...which currently stops me from checking out your app. Neither CUDA (which would have worked for a graphics card of the same age, if only it had been an NVIDIA) nor OpenCL for me.
« Last Edit: July 26, 2010, 06:48:27 PM by kram1032 »
cbuchner1
« Reply #78 on: July 26, 2010, 06:23:23 PM »
"Neither CUDA (which would have worked for a graphics card of the same age, if only it had been an NVIDIA) nor OpenCL for me"

Well, sorry, but you're missing out on the new color feature. Source and binary attached. And the good news is that it isn't noticeably slower than the previous grayscale version (given that you use the same maximum iteration count).

With these command line parameters you can now pass in a maximum iteration count per R, G, B channel. For example, to get good old grayscale you'd use these arguments (note that they ARE case sensitive):

    --maxR=1000 --maxG=1000 --maxB=1000

The defaults for maxR, maxG, maxB are 1000, 200, 40, so there's always a factor of 5 between color channels.

I have to admit that it is more difficult to get aesthetically pleasing deep zooms with colors, because my code for HDR to LDR mapping treats the RGB channels separately but with the same parameters. Hence one color channel may be overexposed while another appears too faint. You'll see some of these problems in the second screenshot. So I need to do some research on tone mapping algorithms.

Remember to grab the DLL archives from previous posts if you don't already have them, and put the contents into the Release folder where the EXE file is.

If you want to get into GPU-accelerated buddhabrot exploration sub-$100, I'd recommend the NVIDIA GT 240 card (96 shaders). Sub-$200 I can recommend the GTX 460 with 768MB RAM - but that's a power guzzler... ATI may be faster at the same price, but doesn't have CUDA compatibility.

I am officially getting bored with the "traditional" coloring method, so I am venturing into physics-based models (wavelength to RGB color mapping etc.). Check back tomorrow for updates...

Christian
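(Aside: the per-channel limits work like the classic nebulabrot scheme - a sample point contributes its orbit to each channel whose iteration limit it escapes within. A minimal CPU-side sketch of that idea; the function and buffer names here are illustrative, not taken from the program:)

    /* Sketch: one Buddhabrot sample with per-channel iteration limits. */
    static void accumulate_sample(double cr, double ci,
                                  int maxR, int maxG, int maxB,
                                  float *accR, float *accG, float *accB,
                                  int w, int h)
    {
        int maxIter = maxR;
        if (maxG > maxIter) maxIter = maxG;
        if (maxB > maxIter) maxIter = maxB;

        /* pass 1: find the escape time, or bail out if c never escapes */
        double zr = 0.0, zi = 0.0;
        int escape = -1;
        for (int i = 0; i < maxIter; ++i) {
            double t = zr*zr - zi*zi + cr;
            zi = 2.0*zr*zi + ci;
            zr = t;
            if (zr*zr + zi*zi > 4.0) { escape = i; break; }
        }
        if (escape < 0) return;   /* non-escaping orbits contribute nothing */

        /* pass 2: replay the orbit, splatting every visited point into each
         * channel whose maximum iteration count covers this orbit length */
        zr = zi = 0.0;
        for (int i = 0; i < escape; ++i) {
            double t = zr*zr - zi*zi + cr;
            zi = 2.0*zr*zi + ci;
            zr = t;
            int x = (int)((zr + 2.0) * 0.25 * w);   /* map [-2,2] to pixels */
            int y = (int)((zi + 2.0) * 0.25 * h);
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            if (escape < maxR) accR[y*w + x] += 1.0f;
            if (escape < maxG) accG[y*w + x] += 1.0f;
            if (escape < maxB) accB[y*w + x] += 1.0f;
        }
    }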
« Last Edit: July 26, 2010, 10:55:31 PM by cbuchner1 »
kram1032
« Reply #79 on: July 26, 2010, 06:51:19 PM »
That's a really nice render, yet again.
cbuchner1
« Reply #80 on: July 26, 2010, 11:16:06 PM »
"That's a really nice render, yet again."

I think I can top this. Bring on the containment booms: we've got a fractal leak! This is what you get when you map orbit length to a wavelength and then map that to an RGB color that the orbit will contribute. Simply fractastic. Also check out the larger images I posted to the image gallery. The new coloring method is pretty intense.

I am attaching the new binary here. There are .BAT files in the Release folder now to launch the program in various coloring modes. Edit these files to suit your liking (initial window size, color parameters etc.). Remember to grab the DLLs from previous postings in this thread if you don't have them yet.

Except for a screenshot and a load/save coordinates feature, I think my program is now almost ready to be used for exploration.
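(A rough sketch of the orbit-length-to-wavelength mapping described above. The post doesn't give the actual formula, so the linear scale and offset here are guesses:)

    /* Map an orbit length linearly onto the visible band, 380..780 nm.
     * Illustrative constants; the program's real mapping may differ. */
    double orbit_to_wavelength(int orbitLen, int maxIter)
    {
        return 380.0 + 400.0 * (double)orbitLen / (double)maxIter;
    }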
« Last Edit: July 27, 2010, 01:16:17 AM by cbuchner1 »
cbuchner1
« Reply #81 on: July 27, 2010, 03:28:04 PM »
"directly map either the iteration count or the periodicity to the frequency of the light spectrum and then convert that to RGB, possibly based on receptor measures of the human eye."

Iteration count mapped to a spectral wavelength would mean I would only get the "pure" colors found in a rainbow. Still sounds nice. Maybe it will give the look of a soap bubble - indeed, some renders have that "oil sheen" or "soap bubble" effect. Thank you for the inspiration.

The code I used to map wavelength to RGB is essentially this one, but ported to C/CUDA: http://www.physics.sfasu.edu/astro/color/spectra.html
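(For reference, a plain-C version of the piecewise-linear scheme on that page - Dan Bruton's algorithm - might look like the sketch below. This is not necessarily identical to the actual CUDA port:)

    /* Wavelength (nm, roughly 380..780) to linear RGB in [0,1],
     * following the SFASU page linked above. */
    #include <math.h>
    void wavelength_to_rgb(double w, double *r, double *g, double *b)
    {
        double R = 0.0, G = 0.0, B = 0.0;

        if      (w >= 380.0 && w < 440.0) { R = -(w - 440.0) / 60.0; B = 1.0; }
        else if (w >= 440.0 && w < 490.0) { G =  (w - 440.0) / 50.0; B = 1.0; }
        else if (w >= 490.0 && w < 510.0) { G = 1.0; B = -(w - 510.0) / 20.0; }
        else if (w >= 510.0 && w < 580.0) { R =  (w - 510.0) / 70.0; G = 1.0; }
        else if (w >= 580.0 && w < 645.0) { R = 1.0; G = -(w - 645.0) / 65.0; }
        else if (w >= 645.0 && w <= 780.0) { R = 1.0; }

        /* intensity falls off near the ends of the visible range */
        double f = 0.0;
        if      (w >= 380.0 && w < 420.0)  f = 0.3 + 0.7 * (w - 380.0) / 40.0;
        else if (w >= 420.0 && w < 700.0)  f = 1.0;
        else if (w >= 700.0 && w <= 780.0) f = 0.3 + 0.7 * (780.0 - w) / 80.0;

        const double gamma = 0.8;
        *r = R > 0.0 ? pow(R * f, gamma) : 0.0;
        *g = G > 0.0 ? pow(G * f, gamma) : 0.0;
        *b = B > 0.0 ? pow(B * f, gamma) : 0.0;
    }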
« Last Edit: July 27, 2010, 03:29:48 PM by cbuchner1 »
ker2x
Fractal Molossus
Posts: 795
« Reply #82 on: July 27, 2010, 06:00:22 PM »
I'm planning to rewrite some parts of my app to use Image2D. Nice tutorial here: http://www.cmsoft.com.br/index.php?option=com_content&view=category&layout=blog&id=115&Itemid=172

Edit: I planned to... but StarCraft 2 is out \o/ brb
« Last Edit: July 27, 2010, 07:57:37 PM by ker2x »
cbuchner1
« Reply #83 on: July 27, 2010, 09:31:34 PM »
"I'm planning to rewrite some parts of my app to use Image2D."

Ah, the equivalent of texture access in CUDA. I use this only in the post-filtering technique for noise reduction, but the algorithm is currently too aggressive - I prefer the look of the original (noisier) images.

I could also use textures to map iterations to color. Basically, a freely definable color texture would be more versatile than a fixed wavelength-to-RGB mapping formula. And it could make use of a second dimension too.

I also gamed a lot during the last week, mainly "Alan Wake" on Xbox 360 and "Dead Space" on PC (with NVIDIA 3D Vision goggles).
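(A sketch of the "freely definable color texture" idea in CUDA, using the texture-object API; all names here are illustrative, and the palette texture is assumed to be a 1D float4 cudaArray created with normalized coordinates and linear filtering:)

    /* Look up each pixel's color in a 1D RGBA palette texture, indexed
     * by the normalized iteration count. Hardware interpolation gives
     * smooth gradients between palette entries for free. */
    #include <cuda_runtime.h>
    __global__ void shade(const int *iters, uchar4 *out, int n, int maxIter,
                          cudaTextureObject_t palette)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float u = (float)iters[i] / (float)maxIter;   /* 0..1 palette position */
        float4 c = tex1D<float4>(palette, u);
        out[i] = make_uchar4((unsigned char)(255.0f * c.x),
                             (unsigned char)(255.0f * c.y),
                             (unsigned char)(255.0f * c.z), 255);
    }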
kram1032
« Reply #84 on: July 27, 2010, 10:33:32 PM »
Yay, that looks great - so I had a good idea after all. Does this colouring technique generally bring out different aspects of the fractal, or do the overall details look just like the usual nebulabrot colouring methods? (E.g., what does the full object look like in comparison to the standard colouring method?)
« Last Edit: July 27, 2010, 10:43:09 PM by kram1032 »
cbuchner1
« Reply #85 on: July 28, 2010, 12:44:31 AM »
"Does this colouring technique generally bring out different aspects of the fractal, or do the overall details look just like the usual nebulabrot colouring methods?"

Generally the new method renders everything up to the configured maximum iteration count; however, some short or long orbits may fall outside the visible spectrum, depending on how the mapping from iterations to wavelength is configured.

The current wavelength-to-RGB code limits the wavelength to 380 to 780nm; larger or smaller values are clamped to these limits. The color intensity near the edges gets reduced somewhat, but because of the clamping it never entirely fades to "invisible" light emission. This may be one of the reasons why there is so much red in the deeper zooms. I will try making the function unbounded, fading to zero at the edges, and see if this improves the situation (see the sketch after this list).

Here are more ideas I am toying with:

A) Rendering the whole thing in three dimensions, because I own one of these modern 3D monitors and shutter glasses. Two views are generated, either using orbit length as a z value and adding a distance-dependent parallax displacement, or alternatively rendering a Buddhagram and rotating one axis slightly to create parallaxed views.

B) Not just creating emissive light spectra, but also absorption spectra similar to the ones observed in space. Some orbits or iteration counts would then remove some light again. I need to think more about this, and about how it could be integrated into the render process (alpha blending or similar methods?). Individual orbits - possibly very long ones - would then swallow light at certain wavelengths.
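(One possible shape for the fade-to-zero idea: replace the hard clamp to [380,780] nm with an intensity ramp that actually reaches zero, so out-of-range orbits stop piling up as pure red or violet. This is my guess at the proposal, not the program's code; the 420/700 nm breakpoints follow the SFASU scheme:)

    /* Visibility factor 0..1 for a wavelength in nm: zero at and beyond
     * the edges of the visible band instead of being clamped there. */
    double visibility(double w)
    {
        if (w <= 380.0 || w >= 780.0) return 0.0;   /* truly invisible now */
        if (w < 420.0) return (w - 380.0) / 40.0;   /* ramp up from zero   */
        if (w > 700.0) return (780.0 - w) / 80.0;   /* ramp down to zero   */
        return 1.0;
    }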
« Last Edit: July 28, 2010, 01:38:45 AM by cbuchner1 »
kram1032
« Reply #86 on: July 28, 2010, 02:31:20 AM »
All those ideas sound awesome. I'd love to see a buddhagram version of this. You could also try a buddhabulb... or a, err... bulbagram? However, all I meant was: what does the full buddhabrot spectral version look like? (As you only showed a zoom of it ^^)
ker2x
Fractal Molossus
Posts: 795
« Reply #88 on: July 28, 2010, 09:01:57 PM »
I found a weird optimization. It is, in fact, documented in the NVIDIA OpenCL Best Practices Guide:

"Register dependencies arise when an instruction uses a result stored in a register written by an instruction before it. The latency on current CUDA-enabled GPUs is approximately 24 cycles, so threads must wait 24 cycles before using an arithmetic result."

So this code:

    while( (iter < maxIter) && ((zr*zr + zi*zi) < escapeOrbit) )
    {
        temp = zr * zi;
        zr = zr*zr - zi*zi + cr;
        zi = temp + temp + ci;
        // etc ....
    }

is faster than:

    while( (iter < maxIter) && ((zr2 + zi2) < escapeOrbit) )
    {
        temp = zr * zi;
        zr2 = zr * zr;
        zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        // etc ....
    }
ker2x
Fractal Molossus
Posts: 795
« Reply #89 on: August 07, 2010, 02:10:59 AM »
I found an insane bug in my code: I was generating random complex points only in the range of the visible screen, instead of points in the range of -2 to 2. Oddly enough, the result was not bad at all. I patched it, but now it takes much more time to generate a good-looking buddhabrot.
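(The fix as described, sketched in C: draw the random starting point c from the full [-2,2] x [-2,2] square rather than from the visible window only. rand01() is a stand-in for whatever uniform [0,1) RNG the app actually uses:)

    /* Sample c uniformly over the whole region the Mandelbrot set lives in,
     * independent of the current zoom window. */
    double cr = -2.0 + 4.0 * rand01();   /* real part, uniform in [-2, 2] */
    double ci = -2.0 + 4.0 * rand01();   /* imaginary part, uniform in [-2, 2] */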