Topic: Exporting Z-buffer and transparency
Patryk Kizny
Global Moderator · Fractal Fertilizer · Posts: 372
« on: May 31, 2015, 07:15:56 PM »

Hey all,

Apologies if this was discussed somewhere earlier and I missed it.
Is it possible to export transparency and a Z-buffer, and if so, how?

Help much appreciated.
Thanks!
3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #1 on: May 31, 2015, 07:38:29 PM »

On the C++ side this can be used:
Code:
void QOpenGLFramebufferObject::blitFramebuffer(
    QOpenGLFramebufferObject *target, const QRect &targetRect,
    QOpenGLFramebufferObject *source, const QRect &sourceRect,
    GLbitfield buffers = GL_COLOR_BUFFER_BIT,
    GLenum filter = GL_NEAREST)
GL_COLOR_BUFFER_BIT is the default, but GL_DEPTH_BUFFER_BIT is also valid.
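For illustration, an untested sketch of how that blit could be combined with glReadPixels() to pull the depth values back to the CPU (assuming a desktop OpenGL context; readDepth() is a hypothetical helper, not existing Fragmentarium code):
Code:
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>
#include <QRect>
#include <vector>

// Blit the depth attachment of a source FBO into a target FBO that has its
// own depth attachment, then read it back as 32-bit floats.
std::vector<float> readDepth(QOpenGLFramebufferObject *source,
                             QOpenGLFunctions *gl, int w, int h)
{
    QOpenGLFramebufferObject target(w, h, QOpenGLFramebufferObject::Depth);
    QOpenGLFramebufferObject::blitFramebuffer(
        &target, QRect(0, 0, w, h), source, QRect(0, 0, w, h),
        GL_DEPTH_BUFFER_BIT, GL_NEAREST);   // copy depth, not color
    std::vector<float> depth(static_cast<size_t>(w) * h);
    target.bind();                          // read from the blit target
    gl->glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    target.release();
    return depth;
}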
Patryk Kizny
Global Moderator · Fractal Fertilizer · Posts: 372
« Reply #2 on: May 31, 2015, 07:54:36 PM »

Many thanks.
Would you be so kind as to point me to an example of how this can be added?
I have basic scripting skills but am quite new to C++ and Fragmentarium.

Would I need to render the same animation twice, or can an extra output be added?
Perhaps you would be so kind as to add it to a wishlist somewhere?
3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #3 on: May 31, 2015, 09:08:05 PM »

google "qt5 opengl tutorial" for code or "glsl tutorial" for shader scripts and Fragmentarium comes with a LOT of shader code, tutorials and examples, a good idea would be to track down some shader or effect that you like or find interesting and disect it until you understand it, then try writing your own, there is a lot of reference material hidden away in threads here on FF wink
Syntopia
Fractal Molossus · Posts: 681
« Reply #4 on: May 31, 2015, 11:34:24 PM »

Hi Patryk,

The alpha and depth values are not written by the shaders at all, so you cannot retrieve them on the C++ side.

But you could modify, for instance, 'DE-Raytracer.frag' to optionally output depth (and perhaps alpha) by adding code such as:
Code:
#group Raytracer
uniform bool RenderDepth; checkbox[true]

vec3 trace(vec3 from, vec3 dir, inout vec3 hit, inout vec3 hitNormal) {
    ...
    ...
    ...
    if (RenderDepth) hitColor = vec3(totalDist);  // <-- output depth
    return hitColor;
}

This will only give you 256 z-buffer levels, though.
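A common workaround for that limit - my illustration, not part of this post - is to pack the depth across the three 8-bit colour channels, giving 24 bits of precision. The arithmetic is shown here in C++; the packing side would run in the shader before quantization:
Code:
// Hypothetical 24-bit depth packing: spread a normalized depth value across
// three 8-bit channels and recover it afterwards. The encode step would live
// in the shader; this C++ version just demonstrates the arithmetic.
#include <cstdint>

void packDepth24(float depth01, uint8_t rgb[3])      // depth01 in [0, 1]
{
    uint32_t q = (uint32_t)(depth01 * 16777215.0f);  // quantize to 2^24 - 1
    rgb[0] = (q >> 16) & 0xFF;                       // high byte
    rgb[1] = (q >> 8) & 0xFF;                        // middle byte
    rgb[2] = q & 0xFF;                               // low byte
}

float unpackDepth24(const uint8_t rgb[3])
{
    uint32_t q = ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | rgb[2];
    return q / 16777215.0f;
}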
3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #5 on: June 01, 2015, 12:18:37 AM »

I am using this bit to populate the depth buffer, as the last line in trace() before returning hitColor, for the spline path occlusion; this does make it available on the C++ side. Look in Examples/Include/DE-Raytracer.frag:
Code:
// near = 0.00001, far = 1000.0: hyperbolic mapping of ray distance to depth
gl_FragDepth = ((1000.0 / (1000.0 - 0.00001)) +
                (1000.0 * 0.00001 / (0.00001 - 1000.0)) /
                clamp(totalDist / length(dir), 0.00001, 1000.0));
I think the depth buffer can be set to 24 or 32 bits, depending on your hardware.

I haven't tried saving it or using it for anything other than spline paths. I was thinking of a point cloud sent to a geometry shader? But it would be full of holes, since anything not seen would not exist in the depth buffer - unless the point cloud is constructed from several passes viewing from different angles, or maybe slices.
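For reference - my addition, assuming the same near/far constants as the snippet above - the mapping is invertible on the C++ side, so a read-back depth value can be turned back into a ray distance; depthToDistance() is a hypothetical helper:
Code:
// Hypothetical helper: invert the gl_FragDepth mapping above to recover
// the ray distance from a depth value read back on the C++ side.
float depthToDistance(float fragDepth)
{
    const float zNear = 0.00001f, zFar = 1000.0f;
    const float a = zFar / (zFar - zNear);           // constant term
    const float b = zFar * zNear / (zNear - zFar);   // hyperbolic term
    // fragDepth = a + b / dist  =>  dist = b / (fragDepth - a)
    return b / (fragDepth - a);
}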
Syntopia
Fractal Molossus · Posts: 681
« Reply #6 on: June 01, 2015, 04:53:50 PM »

Ah - I should probably say that I meant the 'classic' Fragmentarium. I forgot you implemented some OpenGL integration.

Still, I think it would be demanding - even being able to save an image with 24-bit or 32-bit channels is difficult. (I managed to get HDR import into Fragmentarium, but I never got HDR export to work.)
Patryk Kizny
Global Moderator · Fractal Fertilizer · Posts: 372
« Reply #7 on: June 06, 2015, 11:52:54 AM »

Thank you guys. And huge thanks, 3dickulus, for the plans on integrating Z-depth you mentioned in the main thread.
Let me elaborate on what we would need in an ideal world. Again, this all comes from the desire to integrate fractals into proper VFX pipelines.

Z-buffer
- The Z-buffer is important for us because it allows us to play with the images in post-production with quite a lot of flexibility. Not having Z-depth is working with tied hands.
- The same goes for transparency, but if a good Z-depth is saved along with the images, it's easy to subtract the 'background' based on Z-depth in post. So Z-depth is crucial.

Now we hit another point.

Color depth, for both the main images and Z-depth
- I have not investigated this in Fragmentarium yet, but from what you said I reckon there is currently only 8bpp output.
- The standard for VFX and post-production pipelines requires at least 16 bits per channel; 8bpp limits image manipulation significantly.
- For demanding applications we go for 32bpp.

File formats
- Generally we can work with anything - be it .png, .tiff or other formats. What's crucial is sufficient color depth.
- Often .dpx files are used. I can only assume these are not supported by the system 'by default', so adding them may be a bit more time-consuming. It is not a priority.

Saving Z-depth along with the main image
Looking at saving Z-depth along with the images, I can suggest the following options to make it convenient:
→ (A) Worst-case scenario, but still better than no Z-depth: rendering both Z-depth and the main image into one frame, one above the other. I could work with it, but splitting the two in post would add processing time.
→ (B) Simple and easy: outputting the main image and Z-depth as two separate files named identically, with a "Z-buffer" suffix for the depth pass, followed by the frame number.
→ (C) Pro solution: I would suggest looking into the OpenEXR format (http://www.openexr.com/) developed by Industrial Light & Magic. It supports various color depths and compression algorithms, but most importantly here, it can store multiple layers in the same file and is often used in post-production pipelines. Today we are only talking about saving the Z-buffer alongside the image, and we could live with separate file outputs, but as a future-proof solution OpenEXR would be a great choice, allowing many different channels/layers to be saved efficiently into the same file (a minimal sketch of writing such a file follows below).
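For illustration, a minimal, untested sketch of option (C) using the OpenEXR C++ library; writeExr() and the interleaved-RGB layout are assumptions for the example, not anything Fragmentarium provides:
Code:
// Hypothetical sketch: write interleaved float RGB plus a float Z channel
// into a single EXR file using the OpenEXR C++ library.
#include <ImfOutputFile.h>
#include <ImfChannelList.h>
#include <ImfFrameBuffer.h>

void writeExr(const char *name, int w, int h,
              const float *rgb,   // interleaved RGBRGB... scanlines
              const float *z)     // one float per pixel
{
    using namespace Imf;
    Header header(w, h);
    header.channels().insert("R", Channel(FLOAT));
    header.channels().insert("G", Channel(FLOAT));
    header.channels().insert("B", Channel(FLOAT));
    header.channels().insert("Z", Channel(FLOAT));

    FrameBuffer fb;
    const size_t xs = 3 * sizeof(float), ys = xs * w;  // strides for RGB
    fb.insert("R", Slice(FLOAT, (char *)(rgb + 0), xs, ys));
    fb.insert("G", Slice(FLOAT, (char *)(rgb + 1), xs, ys));
    fb.insert("B", Slice(FLOAT, (char *)(rgb + 2), xs, ys));
    fb.insert("Z", Slice(FLOAT, (char *)z, sizeof(float), sizeof(float) * w));

    OutputFile file(name, header);
    file.setFrameBuffer(fb);
    file.writePixels(h);           // write all scanlines
}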

At what bit depth are you handling the buffer? Are we limited at the buffer level, or is it just a bottleneck when saving to file?

Just my suggestions for future development.
Hope this is helpful.

visual.bermarte
Fractal Fertilizer · Posts: 355
« Reply #8 on: June 06, 2015, 06:20:12 PM »

Hi Patryk. First, let me say that I like this software, but with GPU floating-point precision you will not be able to zoom into objects as far as with CPU-based software.
Since you probably need double-precision support, you will either have to find a suitable GPU, or you may be better off using Mandelbulber directly - software that can create all the layers you're specifically asking for.
But you are free to use another software, obviously.  smiley
Syntopia
Fractal Molossus · Posts: 681
« Reply #9 on: June 06, 2015, 07:02:48 PM »

Quote from: Patryk Kizny
At what bit depth are you handling the buffer? Are we limited at the buffer level, or is it just a bottleneck when saving to file?

Internally, the accumulation buffer in Fragmentarium can be 8, 16, or 32 bits (only 32-bit is float, and it is also the default; 8 and 16 are integer). But these cannot be exported: in Qt, the QImage class only supports 8 bits per channel, which means we can only save PNG in 8-bit (at least using Qt). Some of the code, such as putting the tiles together when doing tiled rendering, also uses the QImage class and is thus 8bpp only. QGLFramebufferObject::toImage() is 8bpp too. There are workarounds for all this, and I guess OpenEXR could be supported, but it is quite some work.
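As one example of such a workaround - an untested sketch assuming a float-format FBO is bound - the float buffer can be read back directly with glReadPixels(), bypassing QImage entirely; grabFloatPixels() is a hypothetical helper:
Code:
// Hypothetical workaround: read the float accumulation FBO back directly
// with glReadPixels(), bypassing QImage's 8-bit-per-channel limit.
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>
#include <vector>

std::vector<float> grabFloatPixels(QOpenGLFramebufferObject *fbo,
                                   QOpenGLFunctions *gl)
{
    std::vector<float> rgba(4 * static_cast<size_t>(fbo->width())
                              * fbo->height());
    fbo->bind();                          // make the FBO the read target
    gl->glReadPixels(0, 0, fbo->width(), fbo->height(),
                     GL_RGBA, GL_FLOAT, rgba.data());
    fbo->release();
    return rgba;                          // full float range, no 8-bit clamp
}
A buffer read back like this could then feed a higher-bit-depth writer such as the EXR sketch earlier in the thread.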
3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #10 on: June 06, 2015, 07:28:25 PM »

Thank you, Syntopia - I was just crafting a similar reply... The ILM EXR format does look interesting, but it would require a rewrite of all of the image handling.

edit: Could one develop a shader in Fragmentarium or Shadertoy and then use something like http://www.fractalforums.com/index.php?topic=21755.msg84293#msg84293 (with some special coding) to generate image/buffer files at the required bit depth? I have seen GLSL-to-Pascal and GLSL-to-C converters, so the higher CPU precision could be exploited this way.
« Last Edit: June 06, 2015, 08:04:45 PM by 3dickulus, Reason: Q »
3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #11 on: June 06, 2015, 08:42:37 PM »

Just looking at EXR stuff...
You could render images with 3 or 4 different exposure settings and combine them for a final image, but this has the obvious effect of tripling or quadrupling the render time. Also, one has control of camera features like brightness, contrast, exposure, etc., so you could just set it up right in the first place instead of rendering extra data. (A feature of EXR is that it's like having many exposures embedded in the image file - I know it's more complicated than that, though.) That extra data gets excluded in the end anyway, by the very nature of the display medium/device, i.e. the colour resolution of your monitor, film printer or eyes. (In post, I imagine, you look at a scene, adjust the exposure to bring details out or in, and say "that's the one, print".)
In Fragmentarium you can morph all camera settings over any range of frames in a number of ways, so if an exposure or other camera-parameter transition is required to bring detail out or in, it can be done beforehand in one render instead of in post after 3 or 4 - but of course we end up with RGBA8 on our monitors.

Difficulty of programming aside, I ask myself "how useful is this for shader development?" or, more simply, "on my desktop?"
Syntopia
Fractal Molossus · Posts: 681
« Reply #12 on: June 06, 2015, 10:34:40 PM »

Quote from: 3dickulus
edit: Could one develop a shader in Fragmentarium or Shadertoy and then use something like http://www.fractalforums.com/index.php?topic=21755.msg84293#msg84293 (with some special coding) to generate image/buffer files at the required bit depth? I have seen GLSL-to-Pascal and GLSL-to-C converters, so the higher CPU precision could be exploited this way.

Isn't this the discussion you were thinking of? http://www.fractalforums.com/programming/compilerun-glsl-shader-as-cplusplus/

Quote from: 3dickulus
Just looking at EXR stuff...
You could render images with 3 or 4 different exposure settings and combine them for a final image, but this has the obvious effect of tripling or quadrupling the render time. Also, one has control of camera features like brightness, contrast, exposure, etc., so you could just set it up right in the first place instead of rendering extra data.

There is no need to store multiple exposures when using floats, as these HDR formats do. The images should be grabbed from Fragmentarium's 32-bit float buffers, which have an extremely high dynamic range.
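To illustrate that point - my example, not part of the original post - with a float buffer an exposure choice can be applied after rendering with a simple scale and tone-map, instead of baking several exposures into the file; exposeAndQuantize() is hypothetical:
Code:
// Hypothetical illustration: apply exposure to an HDR float sample in post,
// then tone-map and quantize to 8 bits for display.
#include <algorithm>
#include <cmath>
#include <cstdint>

uint8_t exposeAndQuantize(float hdrValue, float stops)
{
    float scaled = hdrValue * std::exp2(stops);      // exposure in f-stops
    float toneMapped = 1.0f - std::exp(-scaled);     // simple tone curve
    float clamped = std::min(std::max(toneMapped, 0.0f), 1.0f);
    return (uint8_t)(clamped * 255.0f + 0.5f);       // round to 8-bit
}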

3dickulus
Global Moderator · Fractal Senior · Posts: 1558
« Reply #13 on: June 06, 2015, 10:56:00 PM »

Yes, that one, and https://code.google.com/p/delphi-shader/
And yes, that is kind of my point: internally it is about as good as we can get on consumer hardware. Developing a Fragmentarium-like app with the intention of going to film is a different animal compared to exploring fractals on my desktop.
Syntopia
Fractal Molossus · Posts: 681
« Reply #14 on: June 07, 2015, 12:09:44 AM »

Just tried the delphi-shader - which is nice, but also very slow.

I tried changing a Fragmentarium shader to use doubles (just insert #version 400 at the top and change vec3 -> dvec3, float -> double), which made it run 36.6x slower. (Rather close to the expected theoretical factor of 32x for the Nvidia Maxwell GPU I use.) Still, it was faster than the delphi-shader (based on the roughly similar Mandelbulb example)! And Maxwell is really bad at double precision - some AMD GPUs, by contrast, run double precision at 1/4 of single-precision speed: http://www.geeks3d.com/20140305/amd-radeon-and-nvidia-geforce-fp32-fp64-gflops-table-computing/ (The pro cards, FirePro and Tesla, run double precision at half speed.)

So the question is whether running on the CPU is worth it.