Author Topic: SuperSonic  (Read 4172 times)
Description: Mandelbrot
hapf
Fractal Lover
**
Posts: 219


« Reply #15 on: April 30, 2016, 11:58:13 AM »

Quote
Applying a low-pass filter prior to downsampling is sort of an axiom of image processing.
Hm. A low-pass filter is required before sampling if the signal to be sampled has frequencies above half the sampling rate, so that these are removed. Fractals cannot be filtered before they are sampled, since they don't exist before the software samples them.  grin  Once sampled, the sampling theorem tells us the samples very likely contain aliasing, since fractals have infinite detail. To minimise this, supersampling is employed, which moves the aliasing to higher frequencies. The subsequent downsampling filter is a low-pass filter and does what the blurring is supposed to do, but in a better way, by preserving more real detail.  wink
Quote
If you imagine something like 256 samples (16x supersampling) combining into one pixel in the final image, you definitely could think of it in terms of information loss. This is simply the nature of the beast. Employing some pre-sampling filtering is about controlling how that information combines to produce the end result; thinking of it in terms of information loss is just not really the right way to think of it.
As far as escape time coloring goes, it simply requires a ton of sampling to get smooth, consistent coloring free of spurious garbage, more so than simple distance shading. I suppose one could post some example comparison shots if they were so inclined.
That is not my experience. I use continuous escape time colouring, and unless I choose bad colour maps with too quickly changing colours, there is no special aliasing problem that I would not also have with DE.
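For reference, continuous escape time colouring is usually built on the standard smooth iteration count mu = n + 1 - log2(log|z|), evaluated at escape with a large bailout. A minimal sketch of that textbook formula (my illustration; the grayscale ramp is made up and not necessarily hapf's actual colouring):
Code:
import math

def smooth_escape(cx, cy, max_iter=1000, bailout=1 << 16):
    """Continuous (smooth) escape count for the Mandelbrot point c = cx + i*cy."""
    zx = zy = 0.0
    for n in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > bailout:
            # fractional part turns the integer escape count into a continuous value
            return n + 1.0 - math.log2(math.log(math.hypot(zx, zy)))
    return float(max_iter)  # treated as inside the set

def shade(mu, period=32.0):
    """Illustrative grayscale ramp: map the continuous count to a smooth 0..1 value."""
    return 0.5 - 0.5 * math.cos(2.0 * math.pi * mu / period)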
quaz0r
Fractal Molossus
**
Posts: 652



« Reply #16 on: April 30, 2016, 12:35:57 PM »

somehow this

Quote from: hapf
I have not used blur so far since blur reduces detail anywhere, needed or not.

becomes this

Quote from: hapf
A low-pass filter is required before sampling if the signal to be sampled has frequencies above half the sampling rate, so that these are removed. Fractals cannot be filtered before they are sampled, since they don't exist before the software samples them. Once sampled, the sampling theorem tells us the samples very likely contain aliasing, since fractals have infinite detail. To minimise this, supersampling is employed, which moves the aliasing to higher frequencies. The subsequent downsampling filter is a low-pass filter and does what the blurring is supposed to do, but in a better way, by preserving more real detail.

 snore  It is rather difficult and fruitless to engage in conversation with someone who approaches it in such a cagey manner. I was about to respond to some of that until I thought better of it.

Quote from: hapf
I use continuous escape time colouring, and unless I choose bad colour maps with too quickly changing colours, there is no special aliasing problem that I would not also have with DE.

Right, escape time coloring simply introduces more opportunity for aliasing to appear, depending on what exactly you do with it. But again, as is becoming apparent from your posts, you knew that already, so I'm not sure what your game is with these cagey little comments, attempting to bait people into offering a response, I guess... talk about a fruitless endeavor.   roll eyes
hapf
Fractal Lover
**
Posts: 219


« Reply #17 on: April 30, 2016, 01:06:25 PM »

Quote from: quaz0r
somehow this
becomes this
snore It is rather difficult and fruitless to engage in conversation with someone who approaches it in such a cagey manner. I was about to respond to some of that until I thought better of it.
I'm sorry but you wrote
Quote
indeed, proper downsampling involves a blur operation first.
I interpreted that as applying some blur filter and then a downsampling filter. Since a filter designed for downsampling already has the low-pass part integrated in it, I did not see the point of using a separate blur filter beforehand, and I don't use one. If that is not what you meant, then forget my remarks. I have no intention of being cagey.
billtavis
Safarist
******
Posts: 96


« Reply #18 on: May 01, 2016, 11:45:15 PM »

Inspired by this thread, I made some anti-aliasing tests: http://www.fractalforums.com/images-showcase-%28rate-my-fractal%29/anti-aliasing-comparisons-%28super-sampling%29/
hapf
Fractal Lover
**
Posts: 219


« Reply #19 on: May 03, 2016, 10:26:33 AM »

Thanks for the test examples. The result is as expected. When there is excessive aliasing and sufficient oversampling is not feasible or practical, applying additional pre-blurring before downsampling can improve results. But it comes at a price: additional loss of detail and sharpness in areas with less severe or no aliasing, which cannot profit from the pre-blurring. That's why I would prefer adaptive pre-blurring in such a case. In your test there are only the bent lines on the right, which have no aliasing, and the massive aliasing on the left. In a fractal the situation is usually more mixed, with varying amounts of aliasing all over the place.
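A rough sketch of what adaptive pre-blurring could look like, assuming one simple approach: blend the supersampled render with a blurred copy, weighted by a local contrast estimate, so areas with little aliasing keep their sharpness. The names, window size, and contrast measure are my own illustration, not hapf's method; a truly spatially varying Gaussian (discussed later in the thread) would be costlier.
Code:
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_preblur(img, sigma=0.8, window=5, contrast_scale=0.1):
    """img: 2D float render. Blend towards a blurred copy where local contrast is high."""
    blurred = gaussian_filter(img, sigma)
    # local contrast estimate: standard deviation in a small window around each pixel
    mean = uniform_filter(img, window)
    var = np.clip(uniform_filter(img * img, window) - mean * mean, 0.0, None)
    weight = np.clip(np.sqrt(var) / contrast_scale, 0.0, 1.0)  # 0 = keep sharp, 1 = full blur
    return (1.0 - weight) * img + weight * blurred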
stardust4ever
Fractal Bachius
*
Posts: 513



« Reply #20 on: May 03, 2016, 01:22:20 PM »

Well, this has been interesting. First off, @hapf, those grayscale renders of deep Julia Mandelbrots are breathtaking. Been practicing for the contest, no doubt?

Secondly, I have been toying with Bilinear, Bicubic, and Lanczos scaling in GIMP in order to judge the efficacy of the filters. When using integer ratios, there isn't much perceived difference between the three, in my opinion. Sometimes I think the fractal grain or noise floor is carried over slightly more with Lanczos. What I do notice is that the file size is about 10% less for Bilinear and about 10% more with Lanczos, compared to Bicubic as a baseline. Why PNG compression is more efficient with the simpler scalers, I do not know. The bilinear filter supplied with GIMP is not low quality by any means when sticking to integer ratios, such that each pixel in the output samples its value from an exact square grid of input pixels.

I have not yet contemplated the notion of sub-pixel Gaussian blurring prior to the downscale and what effect, if any, it will have. Some areas within the Mandelbrot set, as well as the abs fractals, are extremely noisy.

In Mandel Machine, I have been using 23040x23040 for square renders and 30720x17280 for 16:9 renders, about 530 megapixels, just a hair under the hard limit of 0.5 binary gigapixels. For 4:3, I have been using 25600x19200 (492 megapixels). Generally the 30720x17280 scales 8x8 down to "4K" 3840x2160, or 4x4 down to "8K" 7680x4320. When I use Kalles Fraktaler, the memory usage is less efficient, so it occupies too much of my 16 GB of desktop RAM, sometimes causing write caching as RAM usage momentarily increases when calculating reference pixels. As a result, I am forced to drop down to "16K" or "24K" resolutions for KF. Not a big deal really, as that still gives me 6x6 sampling down to "4K".
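For anyone checking those numbers, here is the arithmetic, assuming the "0.5 binary gigapixels" limit means 2^29 pixels (my reading of it):
Code:
limit = 2 ** 29  # 536,870,912 pixels
for w, h in [(23040, 23040), (30720, 17280), (25600, 19200)]:
    px = w * h
    print(f"{w}x{h}: {px / 1e6:.1f} MP, {px / limit:.1%} of the limit")
# 23040x23040 and 30720x17280 both give 530.8 MP (98.9%); 25600x19200 gives 491.5 MP (91.6%)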

Quote
All the junk people post on deviantart, for instance: they always add a note in the description instructing you to click on the image to view it full size so you can see all the details. When I see the initial scaled-down image I usually think, hey, that looks pretty good. Then when I click to view it at full res, my eyes spontaneously combust as my senses are overloaded with all that horrible, awful aliasing and distinct LACK of detail. As the blood pours from my eyes I quickly try to escape back to the scaled-down image as fast as humanly possible...   hurt
Not mine, at least not the more recent ones using perturbation, which didn't take weeks to complete in FX!  afro

Case in point: my "Electronic Tapestries" render was taken from an extremely noisy and distorted area of the Quasi Burning Ship 3rd, stretched to an extreme 55,000:1 pixel aspect ratio. Feel free to download my extremely clean "4K" render, a direct 8x8 subsample from the 30720x17280 pixel source. This area was extremely noisy, yet the fractal looks undeniably smooth with super AA applied. Originally the target size was 7680x4320, but a PNG of that size exceeded the upload limit, so I used 3840x2160 instead. No "Gaussian blur" needed!

Deviantart page (1600x900 preview):
http://stardust4ever.deviantart.com/art/Electronic-Tapestries-603028636

Direct download of "4k" super AA render:
http://orig00.deviantart.net/5f20/f/2016/105/8/a/electronic_tapestries_by_stardust4ever-d9z0zvg.png
« Last Edit: May 03, 2016, 01:44:09 PM by stardust4ever »
billtavis
Safarist
******
Posts: 96


« Reply #21 on: May 03, 2016, 04:14:49 PM »

Quote
Additional loss of detail and sharpness in areas with less severe or no aliasing, which cannot profit from the pre-blurring. That's why I would prefer adaptive pre-blurring in such a case.
If there is a loss of sharpness, then decrease the blur amount. I would personally never go above 0.5x the upres amount, and yeah, I agree that my tests are on the soft side. My default is actually only 0.2, so that would mean a blur of 0.8 on an image that you are scaling down by 4. A blur that small is barely visible even before scaling down, but it makes a world of difference on the aliasing. Adaptive blurring is an interesting idea; however, it comes at a cost. A Gaussian blur that is uniform over the entire image is separable into x and y passes, and many other tricks can be used for massive speed increases. Most photo editing software makes use of this. A non-uniform blur, however, is not separable, nor can the same tricks be used. AFAIK, you are stuck literally convolving each pixel with a full 2D Gaussian kernel.
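To make the workflow concrete, here is a minimal sketch of the uniform separable pre-blur followed by an integer-ratio box downsample, using the 0.2 x upres default mentioned above. The helper names and array handling are illustrative, not billtavis's actual pipeline:
Code:
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalized 1D Gaussian kernel with roughly 3-sigma support."""
    radius = max(1, int(3.0 * sigma + 0.5))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_separable(img, sigma):
    """Gaussian-blur a 2D float image with two 1D passes (rows, then columns)."""
    k = gaussian_kernel_1d(sigma)
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode='edge'), k, mode='valid')
    rows = np.apply_along_axis(conv, 1, img)   # horizontal pass
    return np.apply_along_axis(conv, 0, rows)  # vertical pass

def box_downsample(img, factor):
    """Average each factor x factor block into one output pixel (integer ratios only)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# e.g. a 4x supersampled render with the 0.2 * 4 = 0.8 pixel pre-blur mentioned above:
# final = box_downsample(blur_separable(supersampled, sigma=0.8), factor=4)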
Quote
Case in point: my "Electronic Tapestries" render was taken from an extremely noisy and distorted area of the Quasi Burning Ship 3rd, stretched to an extreme 55,000:1 pixel aspect ratio. Feel free to download my extremely clean "4K" render, a direct 8x8 subsample from the 30720x17280 pixel source. This area was extremely noisy, yet the fractal looks undeniably smooth with super AA applied. Originally the target size was 7680x4320, but a PNG of that size exceeded the upload limit, so I used 3840x2160 instead. No "Gaussian blur" needed!
I think it looks great! It would be interesting to see if you could get a similar result with a smaller initial render plus a slight pre-blur.
hapf
Fractal Lover
**
Posts: 219


« Reply #22 on: May 03, 2016, 05:04:32 PM »

Quote from: stardust4ever
Well, this has been interesting. First off, @hapf, those grayscale renders of deep Julia Mandelbrots are breathtaking. Been practicing for the contest, no doubt?
Thanks. No practicing, though. Most fractals are a byproduct of program debugging, implementing new features, or testing new speed-ups, etc.
Quote
Secondly, I have been toying with Bilinear, Bicubic, and Lanczos scaling in GIMP in order to judge the efficacy of the filters. When using integer ratios, there isn't much perceived difference between the three, in my opinion. Sometimes I think the fractal grain or noise floor is carried over slightly more with Lanczos. What I do notice is that the file size is about 10% less for Bilinear and about 10% more with Lanczos, compared to Bicubic as a baseline. Why PNG compression is more efficient with the simpler scalers, I do not know.
Probably simply a result of the entropy of the different images (e.g. Lanczos preserves more high-frequency detail).
Quote
In Mandel Machine, I have been using 23040x23040 for square renders and 30720x17280 for 16:9 renders, about 530 megapixels, just a hair under the hard limit of 0.5 binary gigapixels. For 4:3, I have been using 25600x19200 (492 megapixels). Generally the 30720x17280 scales 8x8 down to "4K" 3840x2160, or 4x4 down to "8K" 7680x4320. When I use Kalles Fraktaler, the memory usage is less efficient, so it occupies too much of my 16 GB of desktop RAM, sometimes causing write caching as RAM usage momentarily increases when calculating reference pixels. As a result, I am forced to drop down to "16K" or "24K" resolutions for KF. Not a big deal really, as that still gives me 6x6 sampling down to "4K".
I'm currently limited to 32K because of RAM needs. I have many support data structures for deblobbing etc. that also need quite a bit of memory. But going beyond 32K would often be difficult anyway, as the number of references needed goes up and deblobbing takes longer and longer.
hapf
Fractal Lover
**
Posts: 219


« Reply #23 on: May 03, 2016, 05:09:13 PM »

Quote from: billtavis
A non-uniform blur, however, is not separable, nor can the same tricks be used. AFAIK, you are stuck literally convolving each pixel with a full 2D Gaussian kernel.
That would not bother me, given that computing the fractal itself in my case often takes several hours or more. For animations with short render times per frame it's more of a concern, though.
lycium
Fractal Supremo
*****
Posts: 1158



« Reply #24 on: May 04, 2016, 02:21:35 AM »

A somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)?

billtavis
Safarist
******
Posts: 96


« Reply #25 on: May 04, 2016, 03:25:03 AM »

Quote
is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)?
I'm self-taught at graphics programming; however, I've worked professionally in 3D animation for years. So I do know what I'm talking about, although I welcome any corrections you might have.
To note: in 3D animation (which is why my focus is always on efficiently reaching acceptable results smiley ), the way it's done is that a sample is taken for a given pixel, and if that sample contains information above a certain contrast threshold, more samples are taken, up to a user-defined limit. More samples is like the image being rendered larger and scaled down. All sorts of filters can be used, but Gaussian is a good all-around filter. I use Mitchell if there are pesky crawling diagonals. The way I understand it, using the Gaussian filter with super-sampling to decide the value of a single pixel is analogous to giving an entire aliased image a slight blur and scaling it down, although in the 3D render the two steps are one and the same, and the samples are not on a perfect grid, which improves quality as well.
This is how they do it in Blender: https://www.blender.org/manual/render/blender_render/antialiasing.html
With fractal rendering, theoretically adaptive super-sampling could be done the same way and would produce nice results... although this would take quite a bit of programming knowledge to get something going, whereas the technique I advocated in this thread can be implemented by anyone who can download free software.
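A minimal sketch of the contrast-threshold adaptive sampling described above (not Blender's actual implementation): render_sample(x, y) is a hypothetical callback returning a scalar shade for a sub-pixel position, and the base count, limit, and threshold are illustrative values.
Code:
import random

def adaptive_pixel(px, py, render_sample, base=4, limit=64, threshold=0.05):
    """Average jittered samples for pixel (px, py), adding more while contrast stays high."""
    samples = [render_sample(px + random.random(), py + random.random()) for _ in range(base)]
    while len(samples) < limit and (max(samples) - min(samples)) > threshold:
        samples.append(render_sample(px + random.random(), py + random.random()))
    return sum(samples) / len(samples)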
quaz0r
Fractal Molossus
**
Posts: 652



« Reply #26 on: May 04, 2016, 03:49:41 AM »

Quote from: lycium
A somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)?

You have all the answers but you aren't going to enlighten us? This seems to be a common theme on this site lately...
xenodreambuie
Conqueror
*******
Posts: 124



« Reply #27 on: May 04, 2016, 04:17:09 AM »

Here are a few observations.
1. Unless I'm mistaken, pre-blur is useful for separating the envelope from the sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single filter that does what it should, e.g. Lanczos or Mitchell, for good spectral and visual results.

2. For escape-time or similar rendering, it's more efficient to do built-in supersampling for each pixel, and it's not complicated. Then you don't have the memory limits or the extra work of post-processing.

3. Built-in supersampling is also more flexible, allowing irregular sample positions or adaptive methods. I've found adaptive sampling to work well most of the time, but with high-density colour patterns it doesn't do as well as full supersampling.
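To illustrate observation 1, here is a rough sketch of resampling with a single Lanczos filter, so the low-pass step is built into the downsample itself rather than applied as a separate blur (my illustration, not xenodreambuie's code; it expects a 2D float array and an integer factor):
Code:
import numpy as np

def lanczos_downsample_axis(arr, factor, a=3):
    """Downsample axis 0 by `factor` with a Lanczos-a filter stretched by the factor."""
    n_out = arr.shape[0] // factor
    out = np.empty((n_out,) + arr.shape[1:])
    for j in range(n_out):
        c = (j + 0.5) * factor - 0.5                 # output pixel centre in input coordinates
        i = np.arange(int(np.ceil(c - a * factor)), int(np.floor(c + a * factor)) + 1)
        x = (i - c) / factor
        w = np.sinc(x) * np.sinc(x / a)              # Lanczos-windowed sinc weights
        w /= w.sum()
        idx = np.clip(i, 0, arr.shape[0] - 1)        # clamp taps at the image border
        out[j] = np.tensordot(w, arr[idx], axes=(0, 0))
    return out

def lanczos_downsample(img, factor, a=3):
    """Separable 2D Lanczos downsample by an integer factor."""
    tmp = lanczos_downsample_axis(img, factor, a)
    return np.swapaxes(lanczos_downsample_axis(np.swapaxes(tmp, 0, 1), factor, a), 0, 1)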

Regards, Garth
http://xenodream.com
lycium
Fractal Supremo
*****
Posts: 1158



« Reply #28 on: May 04, 2016, 04:35:18 AM »

Quote from: xenodreambuie
Here are a few observations.
1. Unless I'm mistaken, pre-blur is useful for separating the envelope from the sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single filter that does what it should, e.g. Lanczos or Mitchell, for good spectral and visual results.

2. For escape-time or similar rendering, it's more efficient to do built-in supersampling for each pixel, and it's not complicated. Then you don't have the memory limits or the extra work of post-processing.

3. Built-in supersampling is also more flexible, allowing irregular sample positions or adaptive methods. I've found adaptive sampling to work well most of the time, but with high-density colour patterns it doesn't do as well as full supersampling.

Nailed it smiley

lycium
Fractal Supremo
*****
Posts: 1158



« Reply #29 on: May 04, 2016, 04:49:11 AM »

Quote from: quaz0r
You have all the answers but you aren't going to enlighten us? This seems to be a common theme on this site lately...

Any time someone wants to know about that, just ask and I've got a million (totally standard) references; I teach something like 5+ people about rendering a year, mostly one on one. How about something like this, totally free and sitting online since 2004 or so: http://www.pbrt.org/chapters/pbrt_chapter7.pdf

To put it differently, it's almost as if people should read some books before giving out advice on a topic as complex as anti-aliasing.
