Topic: How do I implement supersampling?
Gore Motel
« on: April 26, 2014, 10:23:44 AM »

Hello! I've made a little web-based fractal flame generator, following Scott Draves' paper and the algorithm explanation from fractorium, found here. Both mention that supersampling should be used. I also read the following on the Wikipedia page for flame fractals:

Quote
To increase the quality of the image, one can use supersampling to decrease the noise. This involves creating a histogram larger than the image so each pixel has multiple data points to pull from.

For example, creating a histogram with 300×300 cells in order to draw a 100×100 px image. Each pixel would use a 3×3 group of histogram buckets to calculate its value.

I unfortunately do not understand how to implement supersampling. Could someone give me some hints?

Let's say that I want to draw a small image, 100x100 like in the example above. I want to do supersampling 3x, so my histogram will be something like this: Histogram[300][300].

I get a random point in the bi-unit square, feed it into the functions, check if it's valid, and compute its color. Now I'm supposed to put it in the histogram. If the coordinates of my point are, for example, pointX and pointY, and I'm doing 3x supersampling, I'm guessing that it'll go in the histogram at [pointX * 3][pointY * 3].

But now I'm just putting the point at a higher index in the histogram. How do I go about creating the 3x3 groups of histogram buckets?
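
For reference, here is a sketch of my accumulation step as I currently understand it (the names and the mapping from the bi-unit square onto the grid are my own guesses, not from the paper):

Code: [Select]
const SIZE = 100;          // final image is 100 x 100 pixels
const SS = 3;              // 3x supersampling
const GRID = SIZE * SS;    // histogram is 300 x 300 cells

// one bucket per cell: accumulated R, G, B plus a hit counter
const histogram = new Float64Array(GRID * GRID * 4);

function accumulate(pointX: number, pointY: number,
                    r: number, g: number, b: number): void {
  // map the bi-unit square [-1, 1] linearly onto [0, GRID);
  // each point lands in exactly ONE cell of the large grid
  const col = Math.floor((pointX + 1) / 2 * GRID);
  const row = Math.floor((pointY + 1) / 2 * GRID);
  if (col < 0 || col >= GRID || row < 0 || row >= GRID) return;

  const i = (row * GRID + col) * 4;
  histogram[i]     += r;
  histogram[i + 1] += g;
  histogram[i + 2] += b;
  histogram[i + 3] += 1;  // hit count, for log-density scaling later
}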
hobold
« Reply #1 on: April 26, 2014, 11:41:11 AM »

Here's another way to think of supersampling:

Let's keep all parameters as they are in your example: we want to end up with a 100 by 100 image, but supersample every image pixel with 3 by 3 samples.

So as an intermediate result, we multiply image size by supersampling factor: 100 * 3 = 300, and internally compute a 300 by 300 image with the method of computation that we already have.

In order to arrive at our targeted 100 by 100 image, we have to properly(!) shrink the 300 by 300 image that we produced. In other words, we have turned the original supersampling into an equivalent downsampling problem.

The general theory behind sampling and filtering signals is rich and interesting, but for starters we can skip it all and just notice that both pixel grids (the 100 x 100 and the 300 x 300), when drawn at the same absolute size, are very closely related:

every pixel of the 100x100 grid exactly covers a small tile of 3 by 3 pixels of the 300x300 grid.

So in order to make use of all the information in the larger 300 by 300 image, we can simply compute the average colour of the 3x3 small pixels that are covered by a single large pixel, and write that average value into the corresponding spot of the target 100 by 100 image.
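
In code, that averaging step might look like this; a minimal sketch, assuming the images are flat, row-major RGB arrays (all the names here are just for illustration):

Code: [Select]
// Box-filter downsample: average each ss x ss tile of the large
// image into one pixel of the small image.
function downsample(large: Float64Array, size: number, ss: number): Float64Array {
  const big = size * ss;
  const small = new Float64Array(size * size * 3);
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      let r = 0, g = 0, b = 0;
      // sum the ss x ss tile of large pixels covered by this pixel
      for (let dy = 0; dy < ss; dy++) {
        for (let dx = 0; dx < ss; dx++) {
          const i = ((y * ss + dy) * big + (x * ss + dx)) * 3;
          r += large[i]; g += large[i + 1]; b += large[i + 2];
        }
      }
      const o = (y * size + x) * 3;
      const n = ss * ss;
      small[o] = r / n; small[o + 1] = g / n; small[o + 2] = b / n;
    }
  }
  return small;
}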
Gore Motel
« Reply #2 on: April 26, 2014, 07:05:08 PM »

Yes, that does make perfect sense. So the following is a 3x3 bucket in the large grid:

[image: a 3x3 group of histogram cells, with a single red cell in the top-left corner]

I've got a point and its color; let's say it's red. There it is, in the top-left corner of the bucket. But the other cells in the bucket are empty, black. How are the other cells in the bucket getting filled? At the moment I'm just making a really dark pixel, because I'm averaging one brightly colored pixel with eight black pixels.
hobold
« Reply #3 on: April 26, 2014, 07:30:42 PM »

That's how it ought to be; the average of a little shining red and a lot of black is dark red.

The problem that you are running into is that theoretically, you want to display infinitesimally small points, or infinitesimally thin lines. With higher supersampling factors, those structures will vanish more and more, as their surface area keeps shrinking when the intermediate pixels get smaller.

You will have to artificially increase brightness after downsampling to compensate for the shrinking "substance". I am unsure if there is even a "correct" way to do this, because infinitesimally small things don't really exist when we look at anything in the real world.

So I guess the brightness factor that you end up multiplying with will to some degree depend on the supersampling factor, but there's probably some arbitrary fudge factor in there as well, as a matter of taste. Any resulting colours above the maximum intensity will have to be clamped.
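
As a sketch (the right power of the supersampling factor is itself debatable; thin lines lose roughly a factor of ss in coverage, isolated points roughly ss squared, so treat both knobs as a matter of taste):

Code: [Select]
// Scale brightness up after downsampling, then clamp to the maximum.
// Both the power of ss and the fudge factor are tuning knobs.
function brighten(pixels: Float64Array, ss: number, fudge: number): void {
  const factor = ss * fudge;  // or ss * ss, depending on taste
  for (let i = 0; i < pixels.length; i++) {
    pixels[i] = Math.min(1.0, pixels[i] * factor);  // clamp at max intensity
  }
}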


(Another buzzword that could be relevant for rendering such gossamer structures is "gamma correction". If the main strands/fibers look alright but somehow taper off wrongly, then you'll probably have to worry about this.)
youhn
« Reply #4 on: April 26, 2014, 09:17:32 PM »

See http://www.ipol.im/pub/art/2011/g_iics/ for intro, explanation and example implementation.

I think we should use some kind of weighted average, which favors light. Real-world blurring, which our eyes do, seems to expand the most-lit shapes and points. I'm not aware of any example that uses this idea.
lycium
« Reply #5 on: April 27, 2014, 06:26:09 AM »

When did gamma correction become a buzzword instead of a technical requirement?

If you want to show a linear colour ramp, let's say 0 to 255, to the user, how will you make sure it's displayed correctly without accounting for the display's nonlinear response?

This matters a lot in the context of anti-aliasing because, let's say you average out some pixels in your 3x3 grid, and it comes out to 33% visibility. You can't just use this 33% directly, because it's going to get squashed down by the display gamma. So you gamma correct it before display, and then your AA looks as it should.
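
Concretely, a minimal sketch assuming a plain power-law display gamma of 2.2:

Code: [Select]
// Encode a linear intensity in [0, 1] for a gamma-2.2 display.
// Without this, 33% linear coverage displays as roughly
// 0.33^2.2 ~ 9% of full brightness -- far too dark.
function gammaEncode(linear: number, gamma: number = 2.2): number {
  return Math.pow(linear, 1 / gamma);
}

// gammaEncode(0.33) ~ 0.60; the display's response then squashes
// that back down so the viewer perceives ~33% brightness.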

lycium
« Reply #6 on: April 27, 2014, 06:34:50 AM »

There is, BTW, an analogous process, reverse gamma correction, if you want to get a linear intensity value from a BMP, JPEG, etc. image (since they are typically encoded with sRGB 2.2 gamma).
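
A sketch, using the simple 2.2 power approximation rather than the exact piecewise sRGB curve:

Code: [Select]
// Reverse gamma correction: approximate linear intensity from an
// 8-bit sRGB value. The exact sRGB curve is piecewise, with a short
// linear segment near black; 2.2 is the usual approximation.
function srgbToLinear(byte: number): number {
  return Math.pow(byte / 255, 2.2);
}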

Syntopia
« Reply #7 on: April 27, 2014, 12:35:40 PM »

Quote from: youhn
See http://www.ipol.im/pub/art/2011/g_iics/ for intro, explanation and example implementation.

That seems to be a paper about upsizing images, not downsizing them, right?

Quote from: youhn
I think we should use some kind of weighted average, which favors light. Real-world blurring, which our eyes do, seems to expand the most-lit shapes and points. I'm not aware of any example that uses this idea.

I think that is due to blooming (http://en.wikipedia.org/wiki/Bloom_%28shader_effect%29), and not directly related to anti-aliasing (i.e. you don't have to supersample in order to implement it). It is a quite common effect in 3D games and demos. I also did some experiments with this in Fragmentarium. It makes the most sense if you have a 3D lighting scheme where colors with values >1 (HDR) are produced, for instance specular highlights.

Quote from: lycium
This matters a lot in the context of anti-aliasing because, let's say you average out some pixels in your 3x3 grid, and it comes out to 33% visibility. You can't just use this 33% directly, because it's going to get squashed down by the display gamma. So you gamma correct it before display, and then your AA looks as it should.

True, resizing should take gamma correction into account (I have written a bit about that here: http://blog.hvidtfeldts.net/index.php/2012/08/gamma-correction/). But even though it is simple to implement, it is still largely ignored; for instance, the major browsers still do not do it.
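
For reference, a gamma-aware average is only a few lines; this sketch uses the plain 2.2 approximation rather than the exact piecewise sRGB curve:

Code: [Select]
// Average sRGB-encoded values the gamma-aware way: decode to linear,
// average in linear space, re-encode. Averaging the raw bytes
// (what most software does) systematically darkens the result.
function averageSrgb(bytes: number[]): number {
  const toLinear = (v: number) => Math.pow(v / 255, 2.2);
  const toSrgb = (v: number) => Math.round(255 * Math.pow(v, 1 / 2.2));
  const mean = bytes.reduce((sum, v) => sum + toLinear(v), 0) / bytes.length;
  return toSrgb(mean);
}

// averageSrgb([0, 255]) = 186, not the naive 128 -- the value a
// gamma-aware resizer should produce for a 50/50 black/white mix.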
lycium
« Reply #8 on: April 27, 2014, 01:19:03 PM »

What would really be some sci-fi stuff is if we could have displays which simply take linear input with high precision (not necessarily dynamic range) and properly adapt to each panel internally as part of the display's processing, possibly taking into account ambient lighting / whitepoint etc.

Well, the first thing would be getting 10 bits per primary as standard, and yes, getting all our software to be gamma-space aware would be a very reasonable compromise for now.

hobold
« Reply #9 on: April 27, 2014, 02:03:06 PM »

Quote from: lycium
When did gamma correction become a buzzword instead of a technical requirement?

Not sure if there is a general definition of the term "buzzword", but here is how I am using the word whenever I don't know anything about the knowledge background of the persons I am communicating with: a buzzword is something that will add complexity to the true answer. But I don't want to overload a simple, basic answer with too much detail. I don't want to discourage a person who has already taken the courageous step of making their lack of knowledge public by asking a question.

I value your expertise, lycium; your presence is a boon to these forums. We just don't practice the same style of debate.
Gore Motel
« Reply #10 on: April 27, 2014, 04:15:18 PM »

Thank you, everyone, for taking the time to read the topic and to reply.

Quote from: youhn
See http://www.ipol.im/pub/art/2011/g_iics/ for intro, explanation and example implementation.

I think we should use some kind of weighted average, which favors light. Real-world blurring, which our eyes do, seems to expand the most-lit shapes and points. I'm not aware of any example that uses this idea.

Thanks, I will try to read it, but that page is making me cry.

Quote from: hobold
I don't know anything about the knowledge background of the persons I am communicating with

I didn't know anything about fractals till about a week ago. And I always hated maths, so there you have it. This is just a fun little project I'm working on when I have some free time.

I coded some more and came up with what I think is supersampling, and I believe I am doing some sort of gamma correction. Here is a noisy image, without supersampling:

[image: render without supersampling]

Here is the same drawing (same resolution, number of iterations, and the same functions and variations) with 2x supersampling:

[image: render with 2x supersampling]

It has less noise, but I think I'm losing too much detail. Here is the drawing again, with 2x supersampling and a larger gamma value:

[image: render with 2x supersampling and a larger gamma value]

Is this supersampling?
youhn
« Reply #11 on: April 27, 2014, 07:07:14 PM »

Quote from: Syntopia
That seems to be a paper about upsizing images, not downsizing them, right?

Ahum. You're right. And worse:

"Noisy Images
A limitation of the method is the design assumption that noise in the input image is negligible. If noise is present, it is amplified by the deconvolution. The sensitivity to noise increases with the PSF standard deviation σh, which controls the deconvolution strength. Similarly, if σh is larger than the standard deviation of the true PSF that sampled the image, then the method produces significant oscillation artifacts because the deconvolution exaggerates the high frequencies."

So much for my good advice. I got it from the Wikipedia page on supersampling, though:

http://en.wikipedia.org/wiki/Supersampling
youhn
« Reply #12 on: April 27, 2014, 09:57:17 PM »

I've downscaled your original image from 800px to 400px a few times. The first version is without any form of supersampling/anti-aliasing/interpolation. The next three use different algorithms to interpolate the new pixel values from the old pixels. Everything was scaled down in GIMP, which is open-source software, so we can download the code and find out how it is implemented.


1. No special algorithm
2. Linear interpolation
3. Cubic interpolation
4. Sinc (Lanczos3) interpolation

[images: one downscaled result per method above]
The noisier the image, the bigger the original should be compared to the desired resolution. For raw images from Kalles Fraktaler I render at 7680px and scale down to 1920px. Other sources advise downsampling from images nine times bigger. The more the better...?

References:
http://www.gimp.org/source/
http://pippin.gimp.org/image_processing/chap_resampling.html
http://docs.gimp.org/en/gimp-layer-scale.html
Gore Motel
« Reply #13 on: April 28, 2014, 12:52:16 PM »

Out of those images, the one with linear interpolation looks the best to me. I'll go ahead and see how it can be implemented, thanks!

And 7680px? How long does that take to render?
Syntopia
« Reply #14 on: April 28, 2014, 10:15:02 PM »

Quote from: Gore Motel
Out of those images, the one with linear interpolation looks the best to me. I'll go ahead and see how it can be implemented, thanks!

There really is no need. All you have to do is to render at higher resolution and then average the values. For instance, if you want a 100x100 image at 3x supersampling, just render an image at 300x300, divide it into 3x3 blocks and average those. That is the simplest way, and should produce nice results (and yes, you can improve on this by taking gamma-correction into account, jittering sampling patterns, weighting samples and so on, but that is not where you should start).

Interpolation is not necessary for downsizing and can produce unexpected results. For instance, Paint.NET produces a completely black image if I resize the attached image from 27x27 to 9x9 using bilinear interpolation.




[attachment: gray pattern2.png, 27x27]
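
A toy model of what can go wrong (I am not reproducing the exact attachment pattern here; assume a hypothetical image with one white pixel per 3x3 tile, which fails the same way when the point samples happen to land only on black pixels):

Code: [Select]
// Hypothetical 27x27 pattern: one white pixel per 3x3 tile.
const N = 27;
const src: number[] = new Array(N * N).fill(0);
for (let y = 0; y < N; y += 3)
  for (let x = 0; x < N; x += 3) src[y * N + x] = 1;

// Point-sampling at the 9x9 destination centers reads source pixel
// (3i + 1, 3j + 1), which is black everywhere in this pattern.
const point = (i: number, j: number) => src[(3 * j + 1) * N + (3 * i + 1)];

// A box filter averages the whole 3x3 tile instead: uniform 1/9 gray.
function box(i: number, j: number): number {
  let sum = 0;
  for (let dy = 0; dy < 3; dy++)
    for (let dx = 0; dx < 3; dx++)
      sum += src[(3 * j + dy) * N + (3 * i + dx)];
  return sum / 9;
}

console.log(point(0, 0), box(0, 0)); // 0 versus 0.111...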