|Description: Mandelbox with scale=-1.5 and Julia vector=(0.43,0.193,-0.385).
Height: 720 Width: 1280
Posted by: trafassel March 16, 2010, 08:01:51 AM
Rating: 50 by 2 members.
|trafassel||March 23, 2010, 10:28:32 AM|
It would be nice if you linked to the Gestaltlupe package from your webpage. But be aware that the current public Mandelbox formula in this package contains some errors. Indeed, all three of my current Mandelbox pictures were generated with the wrong formula.
if m < r:       m = m / r^2
else if m < 1:  m = 1 / m^2

if m < r:       m = m / sqrt(r)
else if m < 1:  m = 1 / sqrt(m)
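For context, the conditional scaling above is the Mandelbox "sphere fold". A minimal sketch of the commonly published variant in Python (here m is the squared length of the iterated point; the parameter names min_r2 and fixed_r2 are my own, not Gestaltlupe's, and this is not necessarily the exact variant discussed above):

```python
def sphere_fold(z, min_r2=0.25, fixed_r2=1.0):
    """Scale the point z depending on its squared distance from the origin."""
    m = sum(c * c for c in z)  # squared length |z|^2
    if m < min_r2:
        factor = fixed_r2 / min_r2   # inside the inner sphere: fixed linear scaling
    elif m < fixed_r2:
        factor = fixed_r2 / m        # between the spheres: sphere inversion
    else:
        factor = 1.0                 # outside: leave the point unchanged
    return tuple(c * factor for c in z)
```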
I will fix it in the next few days.
|Tglad||March 22, 2010, 01:56:58 AM|
Fantastic (and your latest pictures too). I don't think 1 minute is at all slow for generating the geometry.
Do you mind if I link to this package from the web page I set up (http://sites.google.com/site/mandelbox)?
|trafassel||March 18, 2010, 09:52:18 AM|
This picture was generated with Gestaltlupe (source code is hosted on github.com). Rendering is a two-stage process.
In the first stage the geometry is generated. The geometry data for each pixel of the resulting bitmap contains:
the coordinates of the corresponding point on the object surface,
the surface normal,
and some additional info (e.g. surface color).
This stage takes a long time (for this picture, approximately 1 minute on my i7 laptop).
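Such a per-pixel geometry record could be sketched like this in Python (the field names are hypothetical; Gestaltlupe is written in C# and its actual record differs):

```python
from dataclasses import dataclass

@dataclass
class PixelGeometry:
    # hypothetical layout of the per-pixel data produced by the first stage
    x: float; y: float; z: float      # surface point hit through this pixel
    nx: float; ny: float; nz: float   # surface normal at that point
    color: int                        # additional surface info (e.g. color index)
    depth: float                      # distance from the camera
```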
In the second stage the renderer generates the bitmap. The renderer knows the distance from the camera for each pixel; this information can be used to produce a depth-of-field effect. If you want to look into the implementation, download the source from http://github.com/trafassel/Gestaltlupe and open the file sharpenderer.cs .
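One common way per-pixel camera distance drives depth of field is to grow the blur disk with the distance from the focal plane. A minimal sketch (the helper name and parameters are my own illustration, not Gestaltlupe's actual code):

```python
def blur_radius(depth, focal_depth, strength=2.0, max_radius=8.0):
    """Blur-disk radius in pixels for a sample at the given camera distance.

    The radius grows linearly with the distance from the focal plane
    and is clamped so far-away points don't blur without bound.
    """
    return min(max_radius, strength * abs(depth - focal_depth))
```

Pixels exactly on the focal plane get radius 0 (perfectly sharp); everything else is smeared over a disk whose size depends on its depth.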
The first stage uses multiple threads, so the computation runs on multiple processor cores.
The second stage runs in one thread and uses only one core. But because the second stage is much faster than the first, I don't expect much of a performance boost from parallelizing the render stage (for this picture, the second stage needs 27 sec. on my i7).
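The multithreaded first stage can be sketched like this in Python (compute_row is a stand-in for the real per-pixel geometry work such as ray marching; Gestaltlupe is C#, so this is only illustrative of the idea that rows are independent tasks):

```python
from concurrent.futures import ThreadPoolExecutor

def compute_row(y, width):
    # stand-in for the expensive per-pixel geometry computation;
    # here it just returns a pixel index for each column
    return [y * width + x for x in range(width)]

def first_stage(width, height, workers=4):
    # every image row is independent of the others, so rows can be
    # handed out to a pool of worker threads and computed in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda y: compute_row(y, width), range(height)))
```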
|Tglad||March 17, 2010, 04:50:23 AM|
Awesome! May I ask what renderer you use that has depth of field? Or is it a post effect?
|bib||March 16, 2010, 12:13:19 PM|
Neat effect, looks like a real object
Last modified by: bib March 16, 2010, 12:13:36 PM