Author Topic: SuperSonic  (Read 4199 times)
Description: Mandelbrot
stardust4ever (Fractal Bachius, Posts: 513)
« Reply #30 on: May 04, 2016, 05:01:57 AM »

Quote
I'm currently limited to 32K because of RAM needs. I have many support data structures for deblobbing etc. that also need quite some memory. But beyond 32K would often be difficult anyway, as the number of references needed goes up and deblobbing takes longer and longer.
I need your help with deblob settings. I have found a location that I would like to submit (rendered at "32k", with the final result scaled to 4k with 8x8 antialiasing). This image contains a huge number of infinite spirals, and I have found that Mandel Machine is cutting off before all the spirals are finished. The net result is that many of the smaller spirals have black dots in the center, while the larger ones do not and appear solid gray after antialiasing. There are thousands of such spirals within the render, and I want to eliminate all of the black dots. The average iteration depth in the image is around 3 million, but I set the iteration limit ("bailout") to 100 million so the centers of the spiral areas will be filled in. I want my submission image to be perfect, with no black dots. I don't care if it takes ten thousand references over several days to fill in the holes. I want no black dots visible anywhere in my render, and I am sure the 100 million limit will be sufficient in this regard.

If you want I can PM you a sample image but I'd rather not reveal it to the world yet.
Logged
hapf (Fractal Lover, Posts: 219)
« Reply #31 on: May 04, 2016, 08:36:21 AM »

Quote from: lycium
Somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)?
What misinformation? And yes, I was involved in image processing and computer graphics when I was at the university.
Logged
billtavis (Safarist, Posts: 96)
« Reply #32 on: May 04, 2016, 08:44:03 AM »

Quote
1. Unless I'm mistaken, pre-blur is useful for separating out the envelope and sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single one that does what it should, e.g. Lanczos or Mitchell for good spectral and visual results.
If your only means of anti-aliasing is downsizing the image in photo-editing software, the pre-blur gives the effect of spreading the sampling outside of the area of the resulting pixel. You can see the tests I did here:
http://www.fractalforums.com/images-showcase-%28rate-my-fractal%29/anti-aliasing-comparisons-%28super-sampling%29/
The pre-blur absolutely improved the anti-aliasing, because a blur with sigma 0.5 extends out farther than a distance of 0.5, so each output pixel is influenced by the surrounding pixels as well. Yes, non-uniform adaptive super-sampling is great, but not just anyone can do it. If someone needs to use photo-editing software to perform their anti-aliasing, I gave them the best way to do that. This link is an excellent guide to the subject: http://therefractedlight.blogspot.com/2010/12/problem-of-resizing-images.html
They state "According to the Nyquist theorem, our samples need to be more than double the frequency of the original signal to avoid artifacts, but when we make an image smaller, we greatly increase the frequency of our patterns. So what we need to do is to blur the image first — before downsizing — so that the Nyquist theorem still holds for our final image. In more technical terms, an image needs to be put through a low-pass filter before being down-sampled — the high-frequency components of the image have to be eliminated first by blurring."
Quote
To say it differently, it's almost like people should read some books, before giving out advice on a topic as complex as anti-aliasing.
Well, how about you use your advanced knowledge to actually help us? Like, how do we compute the ideal amount of pre-blur when performing anti-aliasing in this manner? As the blog post states, "How an image ought to be blurred prior to downsizing is a mathematically complex subject, and certainly the optimal blurring algorithms are not found in Photoshop. But we could experiment with Gaussian Blur, although choosing the Gaussian radius may be a bit problematic."
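For anyone who wants to experiment, here is a minimal Python/SciPy sketch of the pre-blur-then-downsize step being discussed. The sigma = 0.5 × shrink factor rule is only a guess to start from, not an established optimum, and the function name is made up for this example:

Code:
from scipy.ndimage import gaussian_filter

def preblur_downsample(img, factor, sigma_per_step=0.5):
    """img: 2D float array rendered at `factor` times the target resolution."""
    sigma = sigma_per_step * factor               # guess: scale the blur with the shrink factor
    blurred = gaussian_filter(img, sigma=sigma)   # explicit pre-blur (the low-pass step)
    h, w = blurred.shape
    h, w = h - h % factor, w - w % factor         # crop so the grid divides evenly
    blocks = blurred[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))               # plain box average down to the target size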
Logged
hapf (Fractal Lover, Posts: 219)
« Reply #33 on: May 04, 2016, 08:52:22 AM »

Quote
Here are a few observations.
1. Unless I'm mistaken, pre-blur is useful for separating out the envelope and sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single one that does what it should, e.g. Lanczos or Mitchell for good spectral and visual results.
Hence my remark that the blurring is built into the downsampling filter, when it does what it's supposed to do.
Quote
2. For escape-time or similar rendering, it's more efficient to do built-in supersampling for each pixel, and not complicated. Then you don't have the memory limits or the extra work of post-processing.
I have not looked into individual pixel adaptive supersampling yet. It would help with memory but not provide an image set of different resolutions unless one repeats it at different resolutions. And for deblobbing it creates issues, I would think.
Logged
hapf (Fractal Lover, Posts: 219)
« Reply #34 on: May 04, 2016, 09:16:42 AM »

Quote from: billtavis
They state "According to the Nyquist theorem, our samples need to be more than double the frequency of the original signal to avoid artifacts, but when we make an image smaller, we greatly increase the frequency of our patterns. So what we need to do is to blur the image first — before downsizing — so that the Nyquist theorem still holds for our final image. In more technical terms, an image needs to be put through a low-pass filter before being down-sampled — the high-frequency components of the image have to be eliminated first by blurring."
Yes. correct. A downsampling filter has blurring built into it. But the usual ones are designed for "normal" images, not fractals with excessive aliasing. So additional pre-blurring is an option when using a standard downsampling filter.
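As an illustration of that option (a sketch with assumed file names and an arbitrary radius, not a recommendation), one can compare Pillow's Lanczos resize with and without an extra Gaussian pre-blur:

Code:
from PIL import Image, ImageFilter

factor = 4                                             # assumed supersampling factor
src = Image.open("render_supersampled.png")            # hypothetical input file
target = (src.width // factor, src.height // factor)

plain = src.resize(target, Image.LANCZOS)              # rely on the filter's built-in blur only
preblurred = src.filter(ImageFilter.GaussianBlur(radius=0.5 * factor))
extra = preblurred.resize(target, Image.LANCZOS)       # extra pre-blur, then the same filter

plain.save("down_plain.png")
extra.save("down_preblur.png")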
Logged
stardust4ever (Fractal Bachius, Posts: 513)
« Reply #35 on: May 04, 2016, 09:34:17 AM »

Quote from: billtavis
If your only means of anti-aliasing is downsizing the image in photo-editing software, the pre-blur gives the effect of spreading the sampling outside of the area of the resulting pixel. [...] Well, how about you use your advanced knowledge to actually help us? Like, how do we compute the ideal amount of pre-blur when performing anti-aliasing in this manner?
You are comparing apples to oranges. When sampling audio from an analog source, a typical non-audiophile ADC holds the instantaneous value of the waveform and records it as a numerical value, turning the analog signal into a stair-step. Any frequency above half the sample rate produces artifacts expressed as off-key tones below half the sample rate. It is absolutely necessary to use a low-pass filter tuned to half the sample frequency to eliminate these artifacts. For instance, an ultrasonic tone of, say, 40 kHz fed directly into an ADC operating at 44.1 kHz will alias to a very annoying audible tone at 4.1 kHz. So it is absolutely necessary to put a low-pass filter, typically cutting off around 20 kHz, on the analog audio input, so that no artifacts are present in the 44.1 kHz recording.
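For what it's worth, the 40 kHz example folds out exactly as described; a quick Python check of the standard folding formula (a sketch, not from any particular library):

Code:
def alias_frequency(f, fs):
    """Apparent frequency of a pure tone f after sampling at rate fs (folding about fs/2)."""
    f_mod = f % fs                       # fold into [0, fs)
    return min(f_mod, fs - f_mod)        # reflect into [0, fs/2]

print(alias_frequency(40_000, 44_100))   # 4100 -> the audible 4.1 kHz alias tone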

The digital equivalent would be using nearest-neighbor to downsample an image, which essentially takes the upper-left-most input pixel for each output pixel and assigns it to the output. A better audio analogy to what we are doing in the digital image domain would be capturing audio masters at a very high sample rate, say 192 kHz/24-bit, applying any post-processing effects to the recording, and then scaling the resulting waveform down to 44.1 kHz/16-bit for mastering audio CDs or MP3 downloads for public consumption.

Gaussian blur is essentially a low pass 2D filter for digital images, but is IMO unnecessary for renders. Any noise or moire pattern that still exists after the source image is downsampled by a factor of 2, 3, 4, 6, 8 and so on would likely not benefit much from subpixel blurring, because such artifacts are bigger than the output pixels. Suppose each output pixel sources its color from a 4x4 grid of input pixels. Using a bilinear filter, each of the 16 sub-pixels has equal influence on the output pixel. If a contrasting shape occupies a portion of the output pixel, the output pixel's color is weighted according to the proportion of sub-pixels within the shaded area. Apply a Gaussian blur of, say, radius 2 beforehand, and the sub-pixels near the boundaries now have varying influence on adjacent output pixels. This only serves to soften the image and does nothing to preserve detail. If it is important that boundary sub-pixels influence the resulting output pixels, then more advanced scaling techniques like bicubic or Lanczos are used. This matters when scaling to non-integer ratios, but I have zoomed into images integer-scaled with bilinear, bicubic, and Lanczos and failed to notice an appreciable difference between the samples when viewing pixels at 400%. However, PNG compression in GIMP seems to be slightly more effective on the bilinear result.
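A toy check of the proportional-coverage claim above, assuming the downscale really is a plain 4x4 average (whether GIMP's bilinear reduces to exactly that at integer ratios is an assumption):

Code:
import numpy as np

block = np.zeros((4, 4))     # the 4x4 sub-pixels feeding one output pixel
block[0, :] = 1.0            # a white shape covers the top row (4 sub-pixels)
block[1, 0] = 1.0            # plus one more, 5 of 16 in total
print(block.mean())          # 0.3125 == 5/16, the coverage proportion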
Logged
quaz0r (Fractal Molossus, Posts: 652)
« Reply #36 on: May 04, 2016, 10:13:49 AM »

Quote from: billtavis
Well, how about you use your advanced knowledge to actually help us.

meh, i wouldn't expect too much on this front. i've observed this individual's interactions here before; he's more interested in trolling than making constructive contributions.
Logged
xenodreambuie (Conqueror, Posts: 124)
« Reply #37 on: May 04, 2016, 10:48:54 AM »

Quote from: hapf
I have not looked into individual pixel adaptive supersampling yet. It would help with memory but not provide an image set of different resolutions unless one repeats it at different resolutions. And for deblobbing it creates issues, I would think.

The adaptive part is optional, since I don't believe it can be made perfect, so you'd always need to choose when to use it. If you want different resolutions, you could render at the largest needed and downsize that for the smaller images, if it takes too long to render a much smaller one separately. I haven't looked into the details of implementing perturbation since I'm more interested in more general formulas, but if it needs much caching of details between pixels, or if you have to revisit pixels, that might complicate the implementation. I was assuming that it's feasible to do all the supersamples for a pixel and filter them before moving on to the next pixel.
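A minimal sketch of that per-pixel approach, using plain escape-time iteration (no perturbation) and a simple average as the filter; the names and parameters are made up for the example:

Code:
import numpy as np

def escape_time(cx, cy, max_iter=1000):
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def render(width, height, x0, y0, x1, y1, n=3):
    """Supersample each pixel on an n x n grid and filter (average) before moving on."""
    img = np.empty((height, width))
    px, py = (x1 - x0) / width, (y1 - y0) / height
    offsets = [(k + 0.5) / n for k in range(n)]          # regular n x n subsample grid
    for j in range(height):
        for i in range(width):
            samples = [escape_time(x0 + (i + ox) * px, y0 + (j + oy) * py)
                       for oy in offsets for ox in offsets]
            img[j, i] = sum(samples) / len(samples)      # filter here; samples are then discarded
    return img

# img = render(320, 240, -2.0, -1.2, 1.2, 1.2)           # example call (small, slow, pure Python)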
Logged

Regards, Garth
http://xenodream.com
lycium (Fractal Supremo, Posts: 1158)
« Reply #38 on: May 04, 2016, 03:16:56 PM »

Quote from: quaz0r
meh, i wouldn't expect too much on this front. i've observed this individual's interactions here before; he's more interested in trolling than making constructive contributions.
That's hilarious, mate... maybe have a look through my posts on this forum? I've been discussing antialiasing here since 2006 or something. I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad.

Here's another link you guys can ignore: http://www.realtimerendering.com/blog/principles-of-digital-image-synthesis-now-free-for-download/

Again, this stuff is standard. I guess I'm not allowed to point out when misinformation is being shared and cite standard references, unless I write a little tutorial together with my post? *sigh*
Logged

billtavis (Safarist, Posts: 96)
« Reply #39 on: May 04, 2016, 05:48:31 PM »

Quote
I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad.
I looked through your reference. While it does not discuss anti-aliasing via scaling down images, it clearly states, "Another approach to eliminating aliasing that sampling theory offers is to filter (i.e., blur) the original function so that no high frequencies remain that can’t be captured accurately at the sampling rate being used."
Yup. But there still remains the question of how to compute the ideal blur amount. Perhaps it's one of those things that must always be tweaked depending upon the image.
Quote
Gaussian blur is essentially a low pass 2D filter for digital images, but is IMO unnecessary for renders.
Well, the results speak for themselves, both in my example thread and in the blog post I linked to. You can go on without it if you choose.
Quote
Yes. correct. A downsampling filter has blurring built into it.
Blurring may be "built-in" somewhat, but the results are clearly improved by doing an additional pre-blur, even if it is very small.

Here is an excellent academic reference that actually applies to the topic at hand (scaling down an image). I also took the time to quote the relevant passage (emphasis mine):
https://web.cs.wpi.edu/~matt/courses/cs563/talks/antialiasing/methods.html
Quote
Supersampling is basically a three stage process.

  • A continuous image I(x,y) is sampled at n times the final resolution. The image is calculated at n times the frame resolution. This is a virtual image.
  • The virtual image is then lowpass filtered
  • The filtered image is then resampled at the final frame resolution.
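Read literally, those three stages fit in a few lines; a sketch with a placeholder pattern standing in for the continuous image and a guessed sigma:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

n = 4                                            # supersampling factor
frame_w, frame_h = 640, 360                      # final frame resolution (assumed)

# 1. "Sample" the continuous image at n times the final resolution.
#    A synthetic pattern stands in for a real fractal render here.
y, x = np.mgrid[0:frame_h * n, 0:frame_w * n]
virtual = np.sin(0.31 * x) * np.cos(0.17 * y)    # virtual image, n times the frame size

# 2. Low-pass filter the virtual image (sigma tied to n is a guess).
lowpassed = gaussian_filter(virtual, sigma=0.5 * n)

# 3. Resample at the final frame resolution (simple decimation by striding).
final = lowpassed[::n, ::n]
assert final.shape == (frame_h, frame_w)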
Logged
quaz0r (Fractal Molossus, Posts: 652)
« Reply #40 on: May 04, 2016, 05:57:39 PM »

lycium, yes, you did post some interesting and informative links. contrary to ignoring them, that is exactly what we want: information. thank you for that. no thanks for your caustic manner and horrible attitude, however. maybe in 2006 you added to conversations in mature, respectful, and helpful ways; you are right, i'm not familiar with what you may have posted then. what i've seen of you in my time here, however, has been 100% what you are displaying now: heavy on the trolling, light on anything else.
Logged
lycium (Fractal Supremo, Posts: 1158)
« Reply #41 on: May 04, 2016, 06:24:15 PM »

Remind me, who is the one making things personal in this thread? Was it really me? Are you really so "objective" that you can't see past my links and literally non-stop quest to teach absolutely everyone who'll listen about CG, just because you're somehow offended I dared to say there's misinformation in this thread, without a complete mini-tutorial? Just last week I taught 5-6 people how to program IFS renderers: http://www.meetup.com/spektrum/events/230378312/?gj=co2&rv=co2

Seriously, point that finger and four point back at you. I have the security of many people who actually know me and have benefited from my ridiculous desire to teach almost everything I know, and who besides this are able to educate themselves (instead of blaming others for their ignorance in the face of the amazing resources we have these days on the internet). If you would simply change your attitude and say instead "hey lycium, I've looked at this stuff you've linked and XYZ is unclear", you'd suddenly see a very different side of me.

That's the last of this personal nonsense from me. Hopefully someone gets something out of the Principles of Digital Image Synthesis book link in particular, being able to borrow that book from the university library was worth the tuition fee for me alone.
Logged

quaz0r (Fractal Molossus, Posts: 652)
« Reply #42 on: May 04, 2016, 08:49:35 PM »

now that lycium has finished enriching our lives with his contributions, i look forward to any productive continuation of this discussion.

Quote from: billtavis
I looked through your reference. While it does not discuss anti-aliasing via scaling down images, it clearly states, "Another approach to eliminating aliasing that sampling theory offers is to filter (i.e., blur) the original function so that no high frequencies remain that can’t be captured accurately at the sampling rate being used."
Yup. But there still remains the question of how to compute the ideal blur amount. Perhaps it's one of those things that must always be tweaked depending upon the image.

basically it seems like all of this is a rather complex subject without a simple, definitive answer.  i currently use imagemagick (for better or worse) as my image library, so i was having another look at these pages,

http://www.imagemagick.org/Usage/filter/
http://www.imagemagick.org/Usage/filter/nicolas/

which indeed refreshes my feeling of "this is a rather complex subject" as opposed to "there is a simple, definitive answer."

Quote from: billtavis
Quote
Gaussian blur is essentially a low pass 2D filter for digital images, but is IMO unnecessary for renders.

Well, the results speak for themselves, both in my example thread and in the blog post I linked to. You can go on without it if you choose.

Quote
Yes. correct. A downsampling filter has blurring built into it.

Blurring may be "built-in" somewhat, but the results are clearly improved by doing an additional pre-blur, even if it is very small.

it seems like the proper course of action here would be to adjust the settings of the built-in blur directly if need be. i don't recall actually seeing this functionality typically, though it looks like maybe imagemagick has it. and maybe i missed it, but i've not seen it mentioned whether resampling filters tend to adjust the blurring based on the original resolution and the target resolution, or whether they use static defaults that you indeed should adjust manually. even if they do adjust automatically, you are right, the case still remains that whatever they are or are not doing automatically does not always produce the desired result.
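for reference, the imagemagick pages linked above document an expert "filter:blur" define that scales the chosen filter's built-in blur, alongside the usual explicit pre-blur route. a rough sketch wrapped in python (file names assumed, and i'm not claiming these particular settings are optimal):

Code:
import subprocess

# rely on the resize filter's built-in low-pass only
subprocess.run(["convert", "render_big.png", "-filter", "Lanczos",
                "-resize", "25%", "out_plain.png"], check=True)

# same filter, with its built-in blur widened via the expert define
subprocess.run(["convert", "render_big.png", "-filter", "Lanczos",
                "-define", "filter:blur=1.2",
                "-resize", "25%", "out_blurrier.png"], check=True)

# explicit gaussian pre-blur (radius x sigma), then a normal resize
subprocess.run(["convert", "render_big.png", "-gaussian-blur", "0x2",
                "-resize", "25%", "out_preblur.png"], check=True)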

Quote from: billtavis
Here is an excellent academic reference that actually applies to the topic at hand (scaling down an image).

Quote
Supersampling is basically a three stage process.

A continuous image I(x,y) is sampled at n times the final resolution. The image is calculated at n times the frame resolution. This is a virtual image.
The virtual image is then lowpass filtered
The filtered image is then resampled at the final frame resolution.

indeed, some folks assert that no lowpass filter should be involved, then maybe turn around and state that a lowpass filter should be involved but is already built in, or perhaps make allusions to some as-yet-unspecified, universally-known and readily-available definitive answers to the topic at hand. but when you actually do go searching for this information and discussions on the matter, what you find tends to be:

a) lots of information and discussions like what we both have referenced here, indicating, correctly or incorrectly, not only involvement of a lowpass filter in the process of resampling, but explicitly referencing the manual application of a lowpass filter prior to application of the resampling filter.

b) a lack of any clear, definitive answers of the sort actually being sought

and while we have plenty of egos here that apparently possess all the answers, these answers tend to be unforthcoming and conflicting, both with each other and with information found elsewhere.
« Last Edit: May 04, 2016, 09:26:02 PM by quaz0r » Logged
Chillheimer (Global Moderator, Fractal Schemer, Posts: 972)
Just another fractal being floating by..
« Reply #43 on: May 05, 2016, 10:07:58 AM »

Quote from: lycium
To say it differently, it's almost like people should read some books, before giving out advice on a topic as complex as anti-aliasing.
people ask for help and advice here and share their experience.
you could just contribute and help in a friendly manner. there's no need to be snobbish.
you set the tone - expect the answer to have the same tone. this only leads to escalation.

Quote from: lycium
maybe have a look through my posts on this forum? I've been discussing antialiasing here since 2006 or something. I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad.

Here's another link you guys can ignore: http://www.realtimerendering.com/blog/principles-of-digital-image-synthesis-now-free-for-download/

Again, this stuff is standard. I guess I'm not allowed to point out when misinformation is being shared and cite standard references, unless I write a little tutorial together with my post? *sigh*
what's your problem?
everyone is supposed to know every link of every topic online?!
remember all posts of master-teacher lycium since 2006?!

just help, or stay out of the thread.

Logged

--- Fractals - add some Chaos to your life and put the world in order. ---
stardust4ever (Fractal Bachius, Posts: 513)
« Reply #44 on: May 06, 2016, 04:23:04 AM »

Get a room you people, geeze...

One thing that popped into my head the other night was that raytracing software often employs random sampling during antialiasing, so that each ray occupies a randomized position within its sub-pixel. For example, a 4x4 grid is used to subsample each output pixel. Suppose a scene has a floor with a chessboard pattern of black and white tiles. Without antialiasing, every pixel is either white or black, and these black and white pixels create very complex moire patterns as the tiled floor recedes to the vanishing point on the horizon. Suppose the target is 1920x1080 but you want to eliminate this noise and these moire patterns by simply rendering big and downscaling. So you render the scene at 7680x4320 and downsample the image 4x4. But even at 7680x4320, the perfectly aligned pixel grid and the chessboard floor in the scene create interference patterns, such that certain pixel clusters are more likely to line up with the black or the white tiles. Whenever such a pattern has a period larger than one pixel of the antialiased output, it will be easily visible, creating strange curves and other shadowy shapes that should not exist in the image.

Because rendered scenes often have repeating textures, raytracing software often employs random sampling of the AA sub-pixels. Again assuming a 4x4 grid of sub-pixels is used for antialiasing, instead of rendering a rigid grid in which each sub-pixel has even spacing, each pixel is divided into sixteen (or more) squares, and the actual ray computed is assigned a random location within each square. As a result, each pixel gets a semi-random sampling that cancels out any recurring moire patterns. A fine grain effect appears within noisy areas of the image instead, but that grain contains no recurring moire patterns. So instead of seeing shapes that should not exist within the image, you get a much more aesthetically pleasing grain effect.

Most raytracing suites employ random sampling for antialiasing, yet no fractal rendering software that I know of does this; all fractal programs I am aware of render on a perfectly regular grid. Random sampling would eliminate most moire patterns in areas of highly repetitive fractal detail and replace them with a soft, film-like grain.
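Here is roughly what that jittered sub-pixel sampling could look like for an escape-time fractal (a sketch; escape_time is the plain Mandelbrot iteration from the earlier sketch, and the 4x4 stratification is just an example):

Code:
import random

def escape_time(cx, cy, max_iter=1000):
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def jittered_pixel(i, j, x0, y0, px, py, n=4):
    """Average n*n samples, each at a random position inside its own sub-cell of the pixel."""
    total = 0.0
    for sy in range(n):
        for sx in range(n):
            ox = (sx + random.random()) / n      # jittered offset within the sub-cell
            oy = (sy + random.random()) / n
            total += escape_time(x0 + (i + ox) * px, y0 + (j + oy) * py)
    return total / (n * n)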

Low-pass filters are only useful in the analog domain, which has a near-infinite pool of samples, not in the digital domain after the conversion is made. When converting from analog to digital it makes sense to, say, offset the focus of a camera just enough that the blur spot matches the spacing of the CCD cells, or to put a low-pass filter at half the sample rate in front of an audio recorder, prior to sampling, to eliminate audible aliasing in the sound.

Fractal rendering and raytracing are purely digital domains, so there is no benefit to low-passing the data, because the sample pool is not arbitrarily large. Random sampling of sub-pixels, as done in most raytracing suites, would be a far more productive strategy for eliminating moire in any computer-generated images, including fractals.

EDIT: I have attached some sample renders made in Bryce 7.1. The first image is an 800x600 sample scene (mirror ball and chessboard floor) with no anti-aliasing applied. The second image is an oversampled render at 3200x2400 with no antialiasing, scaled down to 800x600 using bilinear scaling in GIMP. The third image used the built-in 4x4 antialiasing preset, which uses a random sampling algorithm. All were saved as JPEG at 85% quality (4:2:0 subsampling). I will let the images speak for themselves.


* chessboard no AA.jpg (80.96 KB, 800x600)
* chessboard oversample AA.jpg (67.06 KB, 800x600)
* chessboard random AA.jpg (66.05 KB, 800x600)
« Last Edit: May 06, 2016, 04:59:14 AM by stardust4ever » Logged