Title: SuperSonic Post by: hapf on April 08, 2016, 11:18:34 AM (http://nocache-nocookies.digitalgott.com/gallery/18/9092_08_04_16_11_14_24.jpeg)
Locations like these are tough because they need so many secondary references. Embedded Julia sets within embedded Julia sets. http://www.fractalforums.com/index.php?action=gallery;sa=view;id=18899
Title: Re: SuperSonic Post by: Dinkydau on April 15, 2016, 05:46:48 PM Nice. Black and white really suits it.
Title: Re: SuperSonic Post by: hapf on April 15, 2016, 06:09:20 PM It does. But it also lets you get a decent picture quickly, while multicolour often needs a lot of tweaking to look satisfactory. Good anti-aliasing gives such locations an almost surreal touch; without it they look very rough.
Title: Re: SuperSonic Post by: quaz0r on April 15, 2016, 07:42:30 PM for these do you supersample any or just use a sobel filter? i remember claude said he enjoys not doing any supersampling. i found the sobel filters in the image lib im using but i havent played around with it yet to figure out what parameters to pass it.
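For concreteness, a Sobel filter is a pair of small derivative kernels; a minimal sketch of computing a gradient-magnitude map with SciPy, which could serve as an edge or aliasing-risk detector. This is illustrative only (not claude's or quaz0r's actual pipeline); the stand-in image and threshold are hypothetical.
Code:
import numpy as np
from scipy import ndimage

def sobel_magnitude(img):
    """Gradient magnitude of a 2D float image via Sobel derivatives."""
    gx = ndimage.sobel(img, axis=1)  # derivative along x (columns)
    gy = ndimage.sobel(img, axis=0)  # derivative along y (rows)
    return np.hypot(gx, gy)

img = np.random.rand(256, 256)             # stand-in for a rendered layer
edges = sobel_magnitude(img)
needs_more_samples = edges > edges.mean()  # ad hoc threshold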
Title: Re: SuperSonic Post by: claude on April 15, 2016, 09:11:22 PM I've come round to the need for supersampling in dense locations to reduce Moiré artifacts...
Title: Re: SuperSonic Post by: hapf on April 16, 2016, 08:54:43 AM Quote from: quaz0r for these do you supersample any or just use a sobel filter? [...] I don't think that any (linear) filters can replace supersampling for fractals. Too much loss of fine detail if you want to get rid of the aliasing this way. I do 32K versions for massive oversampling for 4K and 8K. But even that is not enough for some locations that are too dense with structure. There, only colouring that tones down aliasing helps. As for leaving the aliasing in: it is an option with some locations and colouring choices that might look "better" that way than with anti-aliasing. A subjective aesthetic choice.
Title: Re: SuperSonic Post by: quaz0r on April 16, 2016, 02:00:35 PM hear hear :beer: i like to go overboard just to make sure its as good as it can be. i used to use 16x supersampling even on 4k images but then i decided 12x or sometimes even 8x is sufficient :D using that much helps the most when you use the escape time to pick colors or such.
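hapf's render-big-then-shrink approach, reduced to its simplest form: a plain box average over k-by-k blocks. A sketch assuming a 2D NumPy array of per-sample values, not his actual tool chain (the thread argues below about better kernels than a box filter).
Code:
import numpy as np

def box_downsample(img, k):
    """Average non-overlapping k x k blocks of a 2D float array."""
    h, w = img.shape
    assert h % k == 0 and w % k == 0, "render size must be a multiple of k"
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# e.g. a 32768-pixel-wide render shrunk 8x per axis gives 4096 pixels,
# with 8 * 8 = 64 samples averaged into every output pixel.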
Title: Re: SuperSonic Post by: Dinkydau on April 20, 2016, 07:30:18 PM The amount of supersampling is usually very limited by the software.
Title: Re: SuperSonic Post by: quaz0r on April 21, 2016, 02:31:57 AM bummer. :)
Title: Re: SuperSonic Post by: hapf on April 23, 2016, 11:31:15 AM Quote from: quaz0r hear hear :beer: i like to go overboard just to make sure its as good as it can be. [...] 32K is not necessarily overkill. For the following location it's about enough for 2K but not enough for 4K! 4K would apparently need 64K, and 8K would need 128K. :crazyeyes: (http://nocache-nocookies.digitalgott.com/gallery/18/9092_23_04_16_11_25_47.jpeg) http://www.fractalforums.com/index.php?action=gallery;sa=view;id=18961
Title: Re: SuperSonic Post by: quaz0r on April 23, 2016, 12:44:51 PM yeah, so 16x. i definitely approve :D
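The factors hapf quotes are consistent: reading the "K" sizes as powers of two (an assumption), each pairing is 16x linear oversampling, i.e. 256 samples per output pixel.
Code:
for render, target in [(32768, 2048), (65536, 4096), (131072, 8192)]:
    print(render // target, (render // target) ** 2)  # prints "16 256" each time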
Title: Re: SuperSonic Post by: billtavis on April 29, 2016, 06:52:18 PM just wanted to throw in my two cents that adding a slight Gaussian blur prior to downsizing helps a lot. A small radius blur at 4x bigger is going to be much faster to render than no blur at 8x bigger. It would be worth doing some side-by-side tests to see if there is a visual difference. I'm not sure exactly how to find the optimal blur radius other than trial and error — if it is too large the resulting image will be too soft, obviously.
By combining a small Gaussian blur with a better resizing filter, like Lanczos, I've never had a need to go above 6x supersampling, and often 4x is plenty good. Title: Re: SuperSonic Post by: quaz0r on April 29, 2016, 07:09:08 PM indeed, proper downsampling involves a blur operation first. i use a gaussian blur prior to downsampling also. and yeah, ive not been sure either what the exact blur settings should be. ive been meaning to ask around if anyone has any thoughts on that. what ive been doing is to set the blur equal to half the supersampling amount. it seems to turn out to my taste anyway. and yeah, a good sampling filter is also a must. ive taken a liking to and have been using Jinc. i find the simpler and more common ones to give subpar results, especially when you are taking the loads of extra time it takes to render that extreme amount of supersampling.
as far as how much supersampling to use, well, people seem to have different tolerance thresholds for what they think is perfect or even good enough. if you are rendering just based on DE, less supersampling is needed to get pretty good results. if you are coloring based on escape time, well personally i find anything <= 8x subpar. 12x seems like it is probably always good enough, but i often do 16x ...well, just because i can. :) honestly, ive observed that a lot of people who are into rendering are resistant, certainly at first, to the notion of supersampling, especially the notion of doing it in large amounts, simply because if they were to accept it, it would mean A) that theyve basically been doing it wrong thus far, and B) if they were to do it properly going forward, it is going to take a whole lot longer to accomplish less than they are used to. :angel1: all the junk people post on deviantart for instance, they always add a note in the description instructing you to click on the image to view it full size so you can see all the details. when i see the initial scaled down image i usually think, hey, that looks pretty good. then when i click to view it at full res, my eyes spontaneously combust as my senses are overloaded with all that horrible awful aliasing and distinct LACK of detail. as the blood pours from my eyes i quickly try to escape back to the scaled down image as fast as humanly possible.. :hurt:
Title: Re: SuperSonic Post by: hapf on April 30, 2016, 08:50:35 AM I have not used blur so far, since blur reduces detail everywhere, whether it is needed or not. Blur could be helpful if applied only where there is so much detail that oversampling alone would need massive amounts. So the blur would have to be adaptive. And the colouring can promote or reduce aliasing as well. I don't see why escape time would be more critical than DE for aliasing, though. I use a Catrom filter for downsampling and add a slight unsharp mask.
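The pre-blur-then-downsample pipeline under discussion, sketched with Pillow. The radius of half the scale factor follows quaz0r's rule of thumb above (billtavis argues below for much less, around 0.2x); the file names are hypothetical.
Code:
from PIL import Image, ImageFilter

def preblur_downsample(src, dst, factor, blur_per_factor=0.5):
    im = Image.open(src)
    # low-pass first, so detail finer than the target pixel grid is attenuated
    im = im.filter(ImageFilter.GaussianBlur(radius=blur_per_factor * factor))
    # then resample with a windowed-sinc kernel
    # (Image.Resampling.LANCZOS in newer Pillow versions)
    im = im.resize((im.width // factor, im.height // factor), Image.LANCZOS)
    im.save(dst)

preblur_downsample("render_32k.png", "final_4k.png", factor=8)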
Title: Re: SuperSonic Post by: quaz0r on April 30, 2016, 11:30:02 AM applying a lowpass filter prior to downsampling is sort of an axiom of image processing.
Quote from: wikipedia Gaussian blurring is commonly used when reducing the size of an image. When downsampling an image, it is common to apply a low-pass filter to the image prior to resampling. This is to ensure that spurious high-frequency information does not appear in the downsampled image (aliasing). Gaussian blurs have nice properties, such as having no sharp edges, and thus do not introduce ringing into the filtered image.
if you imagine something like 256 samples (16x supersampling) combining into one pixel in the final image, you definitely could think of it in terms of information loss. this is simply the nature of the beast. employing some pre-sampling filtering is about controlling how that information will combine to produce the end result. thinking of it in terms of information loss is just not really the right way to think of it. as far as escape time coloring, it simply requires a ton of sampling to get smooth consistent coloring free of spurious garbage, moreso than simple distance shading. i suppose one could post some example comparison shots if they were so inclined.
Title: Re: SuperSonic Post by: hapf on April 30, 2016, 11:58:13 AM Quote from: quaz0r applying a lowpass filter prior to downsampling is sort of an axiom of image processing. Hm. The low-pass filter is required before sampling if the signal to be sampled has frequencies above half the sampling rate, so that these are removed. Fractals cannot be filtered before they are sampled, since they don't exist before the software samples them. ;D Once sampled, the sampling theorem tells us the samples very likely contain aliasing, since fractals have infinite detail. To minimise this, supersampling is employed, which moves the aliasing to higher frequencies. The downsampling filter that follows is a low-pass filter and does what the blurring is supposed to do, but in a better way, by preserving more real detail. :dink: Quote from: quaz0r if you imagine something like 256 samples (16x supersampling) combining into one pixel in the final image, you definitely could think of it in terms of information loss. [...] as far as escape time coloring, it simply requires a ton of sampling to get smooth consistent coloring free of spurious garbage, moreso than simple distance shading. That is not my experience. I use continuous escape time colouring, and unless I choose bad colour maps with too quickly changing colours, there is no special aliasing problem that I would not have with DE as well.
Title: Re: SuperSonic Post by: quaz0r on April 30, 2016, 12:35:57 PM somehow this Quote from: hapf I have not used blur so far, since blur reduces detail everywhere, whether it is needed or not. becomes this Quote from: hapf The low-pass filter is required before sampling if the signal to be sampled has frequencies above half the sampling rate [...] The downsampling filter that follows is a low-pass filter and does what the blurring is supposed to do, but in a better way, by preserving more real detail. :snore: it is rather difficult and fruitless to engage in conversation with someone who approaches conversation in such a cagey manner. I was about to respond to some of that until i thought better of it. Quote from: hapf I use continuous escape time colouring, and unless I choose bad colour maps with too quickly changing colours, there is no special aliasing problem that I would not have with DE as well. right, escape time coloring simply introduces more opportunity for aliasing to present, depending on what exactly you do with it. but again, as is becoming apparent with your posts, you knew that already, so im not sure what your game is of making these cagey little comments, attempting to bait people into offering a response i guess... talk about a fruitless endeavor. 88)
Title: Re: SuperSonic Post by: hapf on April 30, 2016, 01:06:25 PM Quote from: quaz0r somehow this [...] becomes this [...] :snore: it is rather difficult and fruitless to engage in conversation with someone who approaches conversation in such a cagey manner. I'm sorry, but you wrote: Quote from: quaz0r indeed, proper downsampling involves a blur operation first. I interpreted that as applying some blur filter and then a downsampling filter. Since a filter designed for downsampling already has the low-pass part integrated into it, I did not see the point of using a separate blur filter beforehand, and I don't use one. If that is not what you meant, then forget my remarks. I have no intention of being cagey.
Title: Re: SuperSonic Post by: billtavis on May 01, 2016, 11:45:15 PM Inspired by this thread, I made some anti-aliasing tests: http://www.fractalforums.com/images-showcase-%28rate-my-fractal%29/anti-aliasing-comparisons-%28super-sampling%29/
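A compact restatement of hapf's sampling-theorem argument above (a sketch of the reasoning, claiming nothing beyond his post):
\[
f_{\max} < \tfrac{1}{2} f_s \;\;\text{(Nyquist: no aliasing)}
\qquad\Longrightarrow\qquad
f_{\max} < \tfrac{k}{2} f_s \;\;\text{after } k\times\text{ linear supersampling.}
\]
A fractal has structure at every scale, so \(f_{\max}\) is effectively unbounded and some aliasing always survives sampling; supersampling only pushes it toward higher frequencies, and the downsampling filter then serves as the low-pass for the final pixel grid.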
Title: Re: SuperSonic Post by: hapf on May 03, 2016, 10:26:33 AM Thanks for the test examples. The result is as expected. When there is excessive aliasing and sufficient oversampling is not feasible or practical, applying additional pre-blurring before downsampling can improve results. It comes at a price: additional loss of detail and sharpness in areas with less severe or no aliasing, which cannot profit from the pre-blurring. That's why I would prefer adaptive pre-blurring in such a case. In your case there are only the bent lines to the right that have no aliasing, and the massive aliasing on the left. In a fractal the situation is usually more mixed, with varying amounts of aliasing all over the place.
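One cheap way to approximate the adaptive pre-blurring hapf asks for, while sidestepping the non-separable variable-sigma convolution billtavis warns about below: blur once uniformly, then blend blurred and original images per pixel using a local-contrast mask. A hedged sketch with SciPy; nobody in the thread actually ships this, and the gain and sigma values are arbitrary.
Code:
import numpy as np
from scipy import ndimage

def adaptive_preblur(img, sigma=1.0, gain=4.0):
    """Blend toward the blurred image only where local contrast is high."""
    blurred = ndimage.gaussian_filter(img, sigma)
    contrast = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    w = np.clip(gain * contrast / (contrast.max() + 1e-12), 0.0, 1.0)
    return w * blurred + (1.0 - w) * img  # blur only where aliasing risk is high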
Title: Re: SuperSonic Post by: stardust4ever on May 03, 2016, 01:22:20 PM Well this has been interesting. First off, @Hapf, those grayscale renders of deep Julia Mandelbrots are breathtaking. Been practicing for the contest, no doubt?
Secondly, I have been toying with Bilinear, Bicubic, and Lanczos scaling in GIMP in order to judge the efficacy of filters. When using integer ratios, there isn't much perceived difference between the three, in my opinion. Sometimes I think the fractal grain or noise floor is carried over slightly more with Lanczos. What I do notice is that the file size is about ~10% less for Bilinear and ~10% more with Lanczos, compared to Bicubic as a baseline. Why PNG compression is more efficient with the simpler scalers, I do not know. The bilinear filter supplied with GIMP is not low quality by any means when sticking to integer ratios, such that each output pixel samples its value from an exact square grid of input pixels. I have not yet contemplated the notion of subpixel Gaussian blurring prior to the downscale and what effect, if any, it will have. Some areas within the Mandelbrot set as well as the abs fractals are extremely noisy. In Mandel Machine, I have been using 23040x23040 for square renders and 30720x17280 for 16:9 renders, i.e. 530 megapixels, just a hair under the hard limit of 0.5 binary gigapixels. For 4:3, I have been using 25600x19200 (492 megapixels). Generally the 30720x17280 scales 8x8 down to "4K" 3840x2160, or 4x4 down to "8K" 7680x4320. When I use Kalles Fraktaler, the memory usage is less efficient, so it occupies too much of my 16 GB of desktop RAM, sometimes causing write caching as RAM usage momentarily increases when calculating reference pixels. As a result, I am forced to drop down to "16K" or "24K" resolutions for KF. Not a big deal really, as that still gives me 6x6 sampling down to "4K". Quote from: quaz0r all the junk people post on deviantart for instance, they always add a note in the description instructing you to click on the image to view it full size so you can see all the details. [...] as the blood pours from my eyes i quickly try to escape back to the scaled down image as fast as humanly possible.. :hurt: Not mine, at least not the more recent ones using perturbation, that didn't take weeks to complete in FX! O0 Case in point, my "Electronic Tapestries" render was taken from an extremely noisy and distorted area of the Quasi Burning Ship 3rd, stretched to an extreme 55,000:1 pixel aspect ratio. Feel free to download my extremely clean "4K" render, a direct 8x8 subsample from the 30720x17280 pixel source. This area was extremely noisy, yet the fractal looks undeniably smooth with super AA applied. Originally the target size was 7680x4320, but that size PNG exceeded the upload limit, so I used a 3840x2160 size instead. No "Gaussian blur" needed! Deviantart page (1600x900 preview): http://stardust4ever.deviantart.com/art/Electronic-Tapestries-603028636 Direct download of "4k" super AA render: http://orig00.deviantart.net/5f20/f/2016/105/8/a/electronic_tapestries_by_stardust4ever-d9z0zvg.png
Title: Re: SuperSonic Post by: billtavis on May 03, 2016, 04:14:49 PM Quote from: hapf It comes at a price: additional loss of detail and sharpness in areas with less severe or no aliasing, which cannot profit from the pre-blurring. That's why I would prefer adaptive pre-blurring in such a case. If there is a loss of sharpness, then decrease the blur amount. I would personally never go above 0.5x the upres amount, and yeah, I agree that my tests are on the soft side.
My default is actually only 0.2, so that would mean a blur of 0.8 on an image that you are scaling down by 4. A blur that small is barely visible even before scaling down, but it makes a world of difference on the aliasing. Adaptive blurring is an interesting idea... however it comes at a cost. A Gaussian blur that is uniform over the entire image is separable into x and y, and many other tricks can be used for massive speed increases. Most photo editing software makes use of this. A non-uniform blur, however, is not separable, nor can the same tricks be used. AFAIK, you are stuck literally convolving each pixel with a full 2D Gaussian kernel. Quote from: stardust4ever Case in point, my "Electronic Tapestries" render was taken from an extremely noisy and distorted area of the Quasi Burning Ship 3rd [...] No "Gaussian blur" needed! I think it looks great! It would be interesting to see if you could get a similar result with a smaller initial render plus a slight pre-blur.
Title: Re: SuperSonic Post by: hapf on May 03, 2016, 05:04:32 PM Quote from: stardust4ever Well this has been interesting. First off, @Hapf, those grayscale renders of deep Julia Mandelbrots are breathtaking. Been practicing for the contest, no doubt? Thanks. No practicing, though. Most fractals are a byproduct of program debugging, implementing new features, testing new speed-ups etc. Quote from: stardust4ever Secondly, I have been toying with Bilinear, Bicubic, and Lanczos scaling in GIMP [...] Why PNG compression is more efficient with the simpler scalers, I do not know. Probably simply a result of the entropy of the different images (e.g. Lanczos preserves more high-frequency detail). Quote from: stardust4ever In Mandel Machine, I have been using 23040x23040 for square renders and 30720x17280 for 16:9 renders [...] I'm currently limited to 32K because of RAM needs. I have many support data structures for deblobbing etc. that also need quite some memory. But beyond 32K would often be difficult anyway, as the number of references needed goes up and deblobbing takes longer and longer.
Title: Re: SuperSonic Post by: hapf on May 03, 2016, 05:09:13 PM Quote from: billtavis A non-uniform blur, however, is not separable, nor can the same tricks be used.
AFAIK, you are stuck literally convolving each pixel with a full 2D Gaussian kernel. That would not bother me, given that computing the fractal itself in my case often takes several hours or more. For animations with short render times per frame it's more of a concern, though.
Title: Re: SuperSonic Post by: lycium on May 04, 2016, 02:21:35 AM Somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)?
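The separability billtavis appeals to, made concrete: a uniform 2D Gaussian factors into two 1D passes, so an n x n kernel costs about 2n taps per pixel instead of n squared. This is exactly what breaks when sigma varies per pixel. A sketch under that assumption, not any poster's actual code.
Code:
import numpy as np
from scipy import ndimage

def gaussian_1d(sigma):
    """Normalized 1D Gaussian kernel with ~3-sigma support."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def separable_gaussian(img, sigma):
    k = gaussian_1d(sigma)
    tmp = ndimage.convolve1d(img, k, axis=0)   # vertical pass
    return ndimage.convolve1d(tmp, k, axis=1)  # horizontal pass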
Title: Re: SuperSonic Post by: billtavis on May 04, 2016, 03:25:03 AM Quote from: lycium is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)? I'm self-taught at graphics programming; however, I've worked professionally in 3D animation for years. So I do know what I'm talking about, although I welcome any corrections you might have. To note: in 3D animation (which is why my focus is always on efficiently reaching acceptable results :) ) the way it's done is that a sample is taken for a given pixel, and if that sample contains information above a certain contrast threshold, more samples are taken, up to a user-defined limit. More samples is like it's being rendered larger and scaled down. All sorts of filters can be used, but Gaussian is a good all-around filter. I use Mitchell if there are pesky crawling diagonals. The way I understand it, using the Gaussian filter with super-sampling to decide the value of a single pixel is analogous to giving an entire aliased image a slight blur and scaling it down. Although, in the 3D render, the two steps are one and the same, and the samples are not on a perfect grid, so that improves quality as well. This is how they do it in Blender: https://www.blender.org/manual/render/blender_render/antialiasing.html With fractal rendering, theoretically adaptive super-sampling could be done the same way and would produce nice results... although this would take quite a bit of programming knowledge to get something going, whereas the technique I advocated in this thread can be implemented by anyone who can download free software.
Title: Re: SuperSonic Post by: quaz0r on May 04, 2016, 03:49:41 AM Quote from: lycium Somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)? you have all the answers but you arent going to enlighten us? this seems to be a common theme on this site lately..
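The renderer-style adaptive scheme billtavis describes above, as a minimal sketch: start with a few samples per pixel and keep adding more (up to a cap) while they disagree by more than a contrast threshold. Everything here is hypothetical scaffolding; sample(x, y) stands in for any per-point evaluation, e.g. a smoothly colored escape-time function, and the defaults are arbitrary.
Code:
import random

def adaptive_pixel(sample, px, py, base=4, cap=64, threshold=0.05):
    # initial random samples inside pixel (px, py)
    vals = [sample(px + random.random(), py + random.random())
            for _ in range(base)]
    # refine only while the samples still disagree noticeably
    while len(vals) < cap and max(vals) - min(vals) > threshold:
        vals.append(sample(px + random.random(), py + random.random()))
    return sum(vals) / len(vals)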
Title: Re: SuperSonic Post by: xenodreambuie on May 04, 2016, 04:17:09 AM Here are a few observations.
1. Unless I'm mistaken, pre-blur is useful for separating out the envelope and sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single one that does what it should, e.g. Lanczos or Mitchell for good spectral and visual results.
2. For escape-time or similar rendering, it's more efficient to do built-in supersampling for each pixel, and not complicated. Then you don't have the memory limits or the extra work of post-processing.
3. Built-in supersampling is also more flexible, allowing irregular sample positions or adaptive methods. I've found adaptive sampling to work well most of the time, but with high-density color patterns it doesn't do as well as full supersampling.
Title: Re: SuperSonic Post by: lycium on May 04, 2016, 04:35:18 AM Quote from: xenodreambuie Here are a few observations. [...] Nailed it :)
Title: Re: SuperSonic Post by: lycium on May 04, 2016, 04:49:11 AM Quote from: quaz0r you have all the answers but you arent going to enlighten us? this seems to be a common theme on this site lately.. Anytime someone wants to know about that, just ask and I've got a million (totally standard) references; I teach something like 5+ people about rendering a year, one on one mostly. How about something like this, totally free and sitting online since 2004 or so: http://www.pbrt.org/chapters/pbrt_chapter7.pdf To say it differently, it's almost like people should read some books before giving out advice on a topic as complex as anti-aliasing.
Title: Re: SuperSonic Post by: stardust4ever on May 04, 2016, 05:01:57 AM Quote from: hapf I'm currently limited to 32K because of RAM needs. [...] I need your help with deblob settings. I have found a location that I would like to submit (rendered at "32K", but the final result will be scaled to 4K with 8x8 antialias). This image contains a huge number of infinite spirals, and I have found Mandel Machine is cutting off before all the spirals are finished. The net result is that many of the smaller spirals have black dots in the center while the larger ones do not, appearing solid gray with antialias. There are thousands of such spirals within the render, and I want to eliminate all of the black dots. The average iteration depth in the image is around 3 million, but I set the high bailout to 100 million so the centers of the spiral areas will be filled in. I want my submission image to be perfect with no black dots.
I don't care if it takes ten thousand references over several days to fill in the holes. I want no black dots visible anywhere in my render, and I am sure the 100 million bailout will be sufficient in this regard. If you want I can PM you a sample image, but I'd rather not reveal it to the world yet.
Title: Re: SuperSonic Post by: hapf on May 04, 2016, 08:36:21 AM Quote from: lycium Somewhat worrying amount of misinformation in this thread... is anyone here a graphics programmer, or someone who studied computer graphics (esp. signal processing)? What misinformation? And yes, I was involved in image processing and computer graphics when I was at university.
Title: Re: SuperSonic Post by: billtavis on May 04, 2016, 08:44:03 AM Quote from: xenodreambuie 1. Unless I'm mistaken, pre-blur is useful for separating out the envelope and sampling, so you can take one or a small number of samples after the blur. [...] If your only means of anti-aliasing is downsizing the image in photo editing software, the pre-blur gives the effect of spreading the sampling outside of the area of the resulting pixel. You can see the tests I did here: http://www.fractalforums.com/images-showcase-%28rate-my-fractal%29/anti-aliasing-comparisons-%28super-sampling%29/ The pre-blur absolutely improved the anti-aliasing, because a blur with sigma 0.5 will extend out farther than a distance of 0.5, and will therefore be influenced by surrounding pixels. Yes, non-uniform adaptive super-sampling is great, but not just anyone can do it. If someone needs to use photo-editing software to perform their anti-aliasing, I gave them the best way to do that. This link is an excellent guide to the subject: http://therefractedlight.blogspot.com/2010/12/problem-of-resizing-images.html They state: "According to the Nyquist theorem, our samples need to be more than double the frequency of the original signal to avoid artifacts, but when we make an image smaller, we greatly increase the frequency of our patterns. So what we need to do is to blur the image first — before downsizing — so that the Nyquist theorem still holds for our final image. In more technical terms, an image needs to be put through a low-pass filter before being down-sampled — the high-frequency components of the image have to be eliminated first by blurring." Quote from: lycium To say it differently, it's almost like people should read some books before giving out advice on a topic as complex as anti-aliasing. Well, how about you use your advanced knowledge to actually help us? Like, how do we compute the ideal amount of pre-blur when performing anti-aliasing in this manner? As the blog post states: "How an image ought to be blurred prior to downsizing is a mathematically complex subject, and certainly the optimal blurring algorithms are not found in Photoshop. But we could experiment with Gaussian Blur, although choosing the Gaussian radius may be a bit problematic."
Title: Re: SuperSonic Post by: hapf on May 04, 2016, 08:52:22 AM Quote from: xenodreambuie 1. Unless I'm mistaken, pre-blur is useful for separating out the envelope and sampling, so you can take one or a small number of samples after the blur. If you're using all available samples in the full-size image, it's just convolving one filter with another, so why not use a single one that does what it should, e.g. Lanczos or Mitchell for good spectral and visual results. Hence my remark that the blurring is built into the downsampling filter when it does what it's supposed to do. Quote from: xenodreambuie 2. For escape-time or similar rendering, it's more efficient to do built-in supersampling for each pixel, and not complicated. Then you don't have the memory limits or the extra work of post-processing. I have not looked into individual-pixel adaptive supersampling yet. It would help with memory, but it would not provide an image set of different resolutions unless one repeats it at different resolutions. And for deblobbing it creates issues, I would think.
Title: Re: SuperSonic Post by: hapf on May 04, 2016, 09:16:42 AM Quote from: billtavis They state: "According to the Nyquist theorem, our samples need to be more than double the frequency of the original signal to avoid artifacts [...] the high-frequency components of the image have to be eliminated first by blurring." Yes, correct. A downsampling filter has blurring built into it. But the usual ones are designed for "normal" images, not fractals with excessive aliasing. So additional pre-blurring is an option when using a standard downsampling filter.
Title: Re: SuperSonic Post by: stardust4ever on May 04, 2016, 09:34:17 AM Quote from: billtavis If your only means of anti-aliasing is downsizing the image in photo editing software, the pre-blur gives the effect of spreading the sampling outside of the area of the resulting pixel. [...] You are comparing apples to oranges. When sampling audio from an analog source, a typical non-audiophile ADC will hold the current instantaneous value of the waveform and record it as a numerical value. This transforms the analog signal into a stair step. Any frequency above half the sample rate will develop artifacts, expressed as off-key notes below half the sample rate. It is absolutely necessary to use a low-pass filter tuned to half the sample frequency to completely eliminate artifacts. For instance, an ultrasonic note, say 40 kHz, fed directly into an ADC operating at 44.1 kHz will produce a very annoying audible moire pattern at 4.1 kHz. So it is absolutely necessary to install a low-pass filter on the analog audio input, typically 20 kHz, so that no artifacts are present in the 44.1 kHz recording. The digital equivalent would be using nearest neighbor to downsample an image: this essentially takes the upper-left-most input pixel for each output pixel and assigns it to the output. A better audio analogy to what we are doing in the digital image domain would be to capture audio masters at a very high sample rate, say 192 kHz 24-bit, then apply any post-processing effects to the recording and scale the resultant waveform down to 44.1 kHz 16-bit for mastering audio CDs or mp3 downloads for public consumption. Gaussian blur is essentially a low-pass 2D filter for digital images, but is IMO unnecessary for renders. Any noise or moire pattern that still exists after the source image is downsampled by a factor of 2, 3, 4, 6, 8 or so would likely not benefit much from subpixel blurring, because said artifacts are bigger than the output pixels. Suppose each output pixel sources its color from a 4x4 grid of input pixels. Using a Bilinear filter, each one of the 16 sub-pixels gets equal influence on the output pixel. If a contrasting shape occupies a portion of the output pixel, the output pixel is weighted based on the proportion of sub-pixels within the shaded area. Apply a Gaussian blur of, say, radius 2 beforehand, and now the bordering sub-pixels near the boundaries have varying influence on adjacent output pixels. This only serves to soften the image, and again does nothing to preserve detail. If it is important that boundary sub-pixels influence the resulting output pixels, then advanced scaling techniques like Bicubic or Lanczos are used. This is important when scaling to non-integer ratios, but I have zoomed into images integer-scaled with Bilinear, Bicubic, and Lanczos and failed to notice an appreciable difference between samples when viewing pixels zoomed in at 400%. However, PNG compression in GIMP seems to have slightly higher compression efficacy when using Bilinear.
Title: Re: SuperSonic Post by: quaz0r on May 04, 2016, 10:13:49 AM Quote from: billtavis Well, how about you use your advanced knowledge to actually help us? meh, i wouldnt expect too much on this front. ive observed this individual's interactions here before; hes more interested in trolling than making constructive contributions.
Title: Re: SuperSonic Post by: xenodreambuie on May 04, 2016, 10:48:54 AM Quote from: hapf I have not looked into individual-pixel adaptive supersampling yet. It would help with memory, but it would not provide an image set of different resolutions unless one repeats it at different resolutions. And for deblobbing it creates issues, I would think. The adaptive part is optional, since I don't believe it can be made perfect, so you'd always need to choose when to use it.
If you want different resolutions, you could render at the largest needed and downsize that for the smaller images, if it takes too long to render a much smaller one separately. I haven't looked into the detail of implementing perturbation, since I'm more interested in more general formulas, but if it needs much caching of details between pixels, that might complicate implementation, or if you have to revisit pixels. I was assuming that it's feasible to do all the supersamples for a pixel and filter them before moving on to the next pixel.
Title: Re: SuperSonic Post by: lycium on May 04, 2016, 03:16:56 PM Quote from: quaz0r meh, i wouldnt expect too much on this front. ive observed this individual's interactions here before; hes more interested in trolling than making constructive contributions. That's hilarious mate... maybe have a look through my posts on this forum? I've been discussing antialiasing here since 2006 or something. I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad. Here's another link you guys can ignore: http://www.realtimerendering.com/blog/principles-of-digital-image-synthesis-now-free-for-download/ Again, this stuff is standard. I guess I'm not allowed to point out when misinformation is being shared and cite standard references, unless I write a little tutorial together with my post? *sigh*
Title: Re: SuperSonic Post by: billtavis on May 04, 2016, 05:48:31 PM Quote from: lycium I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad. I looked through your reference. While it does not discuss anti-aliasing via scaling down images, it clearly states, "Another approach to eliminating aliasing that sampling theory offers is to filter (i.e., blur) the original function so that no high frequencies remain that can't be captured accurately at the sampling rate being used." Yup. But still there remains the question of how to compute the ideal blur amount? Perhaps it's one of those things that must always be tweaked depending upon the image. Quote from: stardust4ever Gaussian blur is essentially a low-pass 2D filter for digital images, but is IMO unnecessary for renders. Well, the results speak for themselves, both in my example thread and in the blog post I linked to. You can go on without it if you choose. Quote from: hapf Yes, correct. A downsampling filter has blurring built into it. Blurring may be "built-in" somewhat, but the results are clearly improved by doing an additional pre-blur, even if it is very small. :educated: Here is an excellent academic reference that actually applies to the topic at hand (scaling down an image). I also took the time to quote the relevant passage (emphasis mine): https://web.cs.wpi.edu/~matt/courses/cs563/talks/antialiasing/methods.html Quote: Supersampling is basically a three stage process. 1. A continuous image I(x,y) is sampled at n times the final resolution; the image is calculated at n times the frame resolution. This is a virtual image. 2. The virtual image is then low-pass filtered. 3. The filtered image is then resampled at the final frame resolution.
Title: Re: SuperSonic Post by: quaz0r on May 04, 2016, 05:57:39 PM lycium, yes you did post some interesting and informative links. contrary to ignoring them, that is exactly what we want, information. thank you for that. no thanks for your caustic manner and horrible attitude however. maybe in 2006 you added to conversations in mature, respectful, and helpful ways. you are right im not familiar with what you may have posted then. what ive seen of you in my time here however has been 100% what you are displaying now: heavy on the trolling, light on anything else.
Title: Re: SuperSonic Post by: lycium on May 04, 2016, 06:24:15 PM Remind me, who is the one making things personal in this thread? Was it really me? Are you really so "objective" that you can't see past my links and literally non-stop quest to teach absolutely everyone who'll listen about CG, just because you're somehow offended I dared to say there's misinformation in this thread, without a complete mini-tutorial? Just last week I taught 5-6 people how to program IFS renderers: http://www.meetup.com/spektrum/events/230378312/?gj=co2&rv=co2
Seriously, point that finger and 4 point back at you. I have the security of many people who actually know me and have benefited from my ridiculous desire to teach almost everything I know, and besides this are able to educate themselves (instead of blaming others for their ignorance in the face of the amazing resources we have these days on the internet). If you would simply change your attitude and say instead "hey lycium I've looked at this stuff you've linked and XYZ is unclear", you'd suddenly see a very different side of me. That's the last of this personal nonsense from me. Hopefully someone gets something out of the Principles of Digital Image Synthesis book link in particular, being able to borrow that book from the university library was worth the tuition fee for me alone. Title: Re: SuperSonic Post by: quaz0r on May 04, 2016, 08:49:35 PM now that lycium has finished enriching our lives with his contributions, i look forward to any productive continuation of this discussion.
Quote from: billtavis I looked through your reference. While it does not discuss anti-aliasing via scaling down images, it clearly states, "Another approach to eliminating aliasing that sampling theory offers is to filter (i.e., blur) the original function so that no high frequencies remain that can't be captured accurately at the sampling rate being used." Yup. But still there remains the question of how to compute the ideal blur amount? Perhaps it's one of those things that must always be tweaked depending upon the image. basically it seems like all of this is a rather complex subject without a simple, definitive answer. i currently use imagemagick (for better or worse) as my image library, so i was having another look at these pages, http://www.imagemagick.org/Usage/filter/ http://www.imagemagick.org/Usage/filter/nicolas/ which indeed refreshes my feeling of "this is a rather complex subject" as opposed to "there is a simple, definitive answer." Quote from: billtavis Blurring may be "built-in" somewhat, but the results are clearly improved by doing an additional pre-blur, even if it is very small. it seems like the proper course of action here would be to adjust the settings of the built-in blur directly if need be. i dont recall actually seeing this functionality typically, though it looks like maybe imagemagick has it. and maybe i missed it, but ive not seen it mentioned whether resampling filters tend to adjust the blurring based on the original resolution and the target resolution, or whether they use static defaults that you indeed should adjust manually? even if they do adjust automatically, you are right, the case still remains that whatever they are or are not doing automatically does not always produce the desired result. Quote from: billtavis Here is an excellent academic reference that actually applies to the topic at hand (scaling down an image). Quote Supersampling is basically a three stage process. [...] indeed, while some folks assert that no lowpass filter should be involved, then maybe turn around and state that a lowpass filter should be involved but is already built in, or perhaps make allusions to some as-yet-unspecified, universally-known and readily-available definitive answers to the topic at hand, when you actually do go searching for this information and discussions on the matter, what you find tends to be: a) lots of information and discussions like what we both have referenced here, indicating, correctly or incorrectly, not only involvement of a lowpass filter in the process of resampling, but explicitly referencing the manual application of a lowpass filter prior to application of the resampling filter. b) a lack of any clear, definitive answers of the sort actually being sought. and while we have plenty of egos here that apparently possess all the answers, these answers tend to be unforthcoming and conflicting, both with each other and with information found elsewhere.
Title: Re: SuperSonic Post by: Chillheimer on May 05, 2016, 10:07:58 AM Quote from: lycium To say it differently, it's almost like people should read some books before giving out advice on a topic as complex as anti-aliasing. people ask for help and advice here and share their experience. you could just contribute and help in a friendly manner. there's no need to be snobbish. you set the tone - expect the answer to have the same tone. this only leads to escalation. Quote from: lycium maybe have a look through my posts on this forum? I've been discussing antialiasing here since 2006 or something. I linked to a very very very good free chapter of PBRT, which no one seems to have looked at. Too bad. what's your problem? Quote from: lycium Here's another link you guys can ignore: http://www.realtimerendering.com/blog/principles-of-digital-image-synthesis-now-free-for-download/ Again, this stuff is standard. I guess I'm not allowed to point out when misinformation is being shared and cite standard references, unless I write a little tutorial together with my post? *sigh* everyone is supposed to know every link of every topic online?! remember all posts of master-teacher lycium since 2006?! just help, or stay out of the thread.
Title: Re: SuperSonic Post by: stardust4ever on May 06, 2016, 04:23:04 AM Get a room you people, geeze...
One thing that popped into my head the other night was that raytracing software often employs random sampling during antialiasing, so that each ray occupies a randomized position in the sub-pixel. For example, a 4x4 grid is used to subsample each output pixel. Suppose a scene has a floor which is a chessboard tile pattern of black and white squares. Without antialiasing, every pixel is either white or black. These black and white pixels create very complex moire patterns as the tile floor escapes to the vanishing point on the horizon. Suppose the target is 1920x1080 but you want to eliminate these noise and moire patterns by simply rendering big and downscaling. So you render the scene at 7680x4320 and downsample the image using 4x4 downsampling. But even at 7680x4320 resolution, the perfectly aligned pixel grid and the chessboard floor in the scene create interference patterns, such that, zoomed in, certain pixel clusters are more likely to line up with the black or white tiles. In the event that said patterns have a larger period than one pixel of the antialiased output, these patterns will be easily visible, creating strange curves and other shadowy shapes that should not exist in the image. Because rendered scenes often have repeating textures, raytracing software often employs random sampling of AA subpixels. Again assuming a grid of 4x4 sub-pixels is used for antialiasing: instead of rendering a rigid grid such that each sub-pixel has even spacing, each pixel is divided into sixteen (or more) squares, and the actual ray computed is assigned a random location within each square. As a result, each pixel gets a semi-random sampling that cancels out any possible recurring moire patterns. A fine grain effect occurs within noisy areas of the image, but said grain contains no recurring moire patterns. Thus, instead of seeing shapes that should not exist within the image, a much more aesthetically pleasing grain effect exists instead. Most raytracing suites employ random sampling for antialiasing, yet no fractal rendering software that I know of does this. All fractal programs I am aware of render to a perfectly square grid. Random sampling would eliminate most moire patterns in areas of fractal detail that are highly repetitive, and replace them with soft, film-like grain. Low-pass filters are only useful in the analog domain, which has a near-infinite pool of samples, but not in the digital domain after said conversion is made. When converting from analog to digital, it makes sense to offset the focus in a camera just so that the focal locus equals the spacing between the CCD cells, or to put a low-pass filter at half the sample rate on audio prior to recording, to eliminate audible moire patterns in the sound. Fractal rendering and raytracing are a purely digital domain, so there is no benefit to low-passing the data, because the sample pool is not arbitrarily large. Random sampling of sub-pixels, as done in most raytracing suites, would be a far more productive strategy for eliminating moire in any computer-generated images, including fractals. EDIT: I have attached some sample renders made in Bryce 7.1. The first image is an 800x600 sample scene (mirror ball and chessboard floor) with no anti-aliasing applied. The second image is an oversampled render at 3200x2400 with no antialiasing, scaled down to 800x600 using Bilinear scaling in GIMP. The third image used the built-in 4x4 antialiasing preset with a random sampling algorithm.
All were saved as JPEG at 85% quality (4:2:0 chroma subsampling). I will let the images speak for themselves.
Title: Re: SuperSonic Post by: xenodreambuie on May 06, 2016, 05:26:31 AM Stardust4ever, that's one of the reasons I mentioned the possibility of irregular sampling as an advantage of built-in supersampling. It's most useful for highly correlated patterns such as texturing, not nearly as much for typical Julia or Mandelbrot fractal boundaries.
Jux does use irregular sampling, but not randomized. Reducing correlation was one reason, but not the main one. I figured that rather than weighting samples with a typical envelope, I could use equal weights and have more samples nearer the center. It also fit better with my adaptive sampling method.
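stardust4ever's randomized sub-pixel grid is what graphics texts call jittered (stratified) sampling: divide the pixel into k x k cells and place one random sample inside each cell, so positions stay evenly spread but never lock onto periodic structure in the scene. A NumPy sketch, illustrative only (not Jux's or any raytracer's actual code).
Code:
import numpy as np

def jittered_offsets(k, rng=None):
    """Return k*k (dx, dy) sample offsets in [0, 1)^2, one per cell."""
    if rng is None:
        rng = np.random.default_rng()
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    dx = (j + rng.random((k, k))) / k  # random position within each column cell
    dy = (i + rng.random((k, k))) / k  # random position within each row cell
    return np.stack([dx, dy], axis=-1).reshape(-1, 2)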
Title: Re: SuperSonic Post by: stardust4ever on May 06, 2016, 05:40:54 AM Some fractal areas do have higher moire artifacting than others. One example is a typical textbook zoom into the utter west with the classic coloring algorithm (bailout = 2, solid bands). You get rays like sunshine emanating from the minibrot, and at deeper levels the straight alternating lines can develop strong moire patterns when they get close together. But yes, most of the time, as with spirals, there isn't a huge correlation between iteration bands and pixel spacing. I was just illustrating the point that dividing each pixel into squares and randomizing the position of each ray within those squares can often dissolve any existing moire patterns within the render. In some cases this technique can raise the noise floor slightly while simultaneously eliminating moire patterns. Overall it is a better strategy for CGI, and obviously the chessboard floor is a worst-case scenario, as is, for instance, an utter-west minibrot found in the needle.
Title: Re: SuperSonic Post by: hapf on May 06, 2016, 09:23:00 AM Here is a location full of aliasing/moiré if someone wants to compare pre-blurring, random sampling etc.:
-1.99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999964167859181481345304304588435620123924712119404059472773663805166711618956194838719021305389817240974903756614325041160459611662529365729385477093320702251209652829250829074940430728561432521051533584903938105170934081265592965605107688074973344719120419087715612046156524154644929702743341067174021056910317166589961218248362834949985270221280861548508395310334265921266581204679762344022201818885889867610620291231229022045666418693877777391264452722884510347414158873279576785099945683580403782743643104542152419918540813636043912330163841997917973325798404572075914481612699848039954264440595014192571321738904445996203786169930590801361474273553935130353066785498224246333944685895946870176882796651097186228949653941076075485441461103924579943074972981437962770006625765561856033607214522789938762970286156848721078931644371182974084380216600975305547447659897856166223043566284290858157706280339989734686722402220241439675239592062243609300094148978658996262967206023433742596520505346569875598295276288495756263419388622771768891979873193484627822919147573239686269089637976355644888974656138132658727701587547968611026807208268286565747798745776245935950691344651196405469801258583280873101185756290562580827749669979154174025231381583264353302320086682984758314496737546924918059686608834116362559360550433978782522508996848941145361387302148531011E+00 9.278349033002458222752882264118666506875868709042677290921147635710226552986776267212972091137484955490233866201195796 93670957062878697794559936583876208545770666583842331178385162354516281658116399393227322870407473800927719286972960266 31129288281459169518903587001677171979857730055602950778111853690840011452209413262658241443313917883503401931943325096 06579933129150121248596429225101955721357608105375970821784642376085028539375830433056099153272633370038794025595330919 88852625768552875794275655544991337875679697934349837698927655790623531598714273352479335154288990048913432228893572260 29913710935272545104585421564984984366719039752779318889316924965554320688680241080128122333333996601566180864998916482 17111728256116087626153639684817172308252408318283638912210170996414795992126755704349638773675742338565033151018914177 79569643667354537702653924583791423600634628258017164647940234824678789454282588488123546775833149638419112121876080440 47362920131312450597968549249878552101364391390363892503699853716669716234989293403499762660626533904766166840534661200 45776300135831230295373948219997645688450354583264943406320293477066866090160710200612667953202705520730279334613987484 91563483289315306136219965102789068810546602937572616608094572384550268242352906577644145492335247298026435439040408810 75615145905459787533604259410301032631826489567215587226336519639696438437062823347278091245633199124506195068690131103 9449570418976896401626670235727779272578761089464945E-242 3.0E-1469 Title: Re: SuperSonic Post by: stardust4ever on May 06, 2016, 09:53:12 PM Here is a location full of aliasing/moiré if someone wants to compare pre-blurring, random sampling etc.: Utter west, 
aye? Would you mind expressing those coordinates without scientific notation, i.e. 0.000000000.....? I had to add 241 zeros after the decimal. Mandel Machine can't read inputs in scientific notation with the "E" on the end.
EDIT: Fixing the coordinates turned out to be simple for the real part: it ends in E+00, so stripping the scientific notation leaves its digits exactly as posted above; only the imaginary part needs the 241 leading zeros written out. Playing with the color cycling a bit, I'm getting star-shaped moire patterns throughout the image. Circles and rays are especially susceptible to alias patterns.
Title: Re: SuperSonic Post by: hapf on May 07, 2016, 09:41:07 AM Quote from: stardust4ever Playing with the color cycling a bit, I'm getting star-shaped moire patterns throughout the image. My God! It's full of stars! ;D It's also a region that is tough for the Newton algorithm.
Title: Re: SuperSonic Post by: stardust4ever on May 07, 2016, 10:15:46 AM Quote from: hapf My God! It's full of stars! ;D It's also a region that is tough for the Newton algorithm. Kinda like my reverse "Turbo Zoom" video: https://www.youtube.com/watch?v=mx_UXxtW3sg An otherwise boring textbook zoom into the utter west, such that the bailout increases by exactly 228 each period, which is also the number of colors in the FX color palette. When a bailout of exactly 2 is used, the bars formed by the color bands are solid black. I had to turn off pixel guessing when rendering this movie because the moire patterns created holes between the rays. I ran the movie backwards because the viewer would fall asleep watching the first half otherwise.
Title: Re: SuperSonic Post by: Kalles Fraktaler on May 07, 2016, 11:47:47 PM Quote from: stardust4ever Would you mind expressing those coordinates without scientific notation, ie 0.000000000..... KF can...
Title: Re: SuperSonic Post by: stardust4ever on May 08, 2016, 02:13:28 AM Quote from: Kalles Fraktaler KF can... Hehe, I manually added the zeros in Notepad; it was just a pain to group them in tens to make sure I counted correctly. I like to jump back and forth between Mandel Machine and Kalles Fraktaler. Both are extremely useful software. Mandel Machine is faster for extremely deep pure Mandelbrot rendering, but KF has more bells and whistles and can handle a whole slew of new formulas! Sadly it seems Botond is no longer maintaining MM. He hasn't logged in for months, it appears.
I sent him a PM about a location I found where MM was not solving all glitches, but I seem to have discovered a solution on my own: changing the Series Approximation references from 17 down to 5 fixed the issue I was having with "holes" in the image, but took about 4 times longer. It was worth the tradeoff.