Title: What type to use
Post by: kronikel on December 04, 2010, 06:53:41 AM

I finally took the time to learn c#, and after about 2 days I have a working version of my fractal drawer rewritten in c#.
I use the type "double" for my x and y variables, but I found that after zooming in for a little while the numbers need more digits than a double can hold. I converted everything to the type "decimal", but then rendering took about 50x longer... Is there a type that would work better than double? Is it normal for decimal to slow things down that much?

Title: Re: What type to use
Post by: hobold on December 04, 2010, 01:03:24 PM

Double precision floating point has top-notch hardware support these days; other types can beat it only in rare special cases. Going to higher precision means using data types implemented in software, so there will be a very noticeable performance overhead.
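For a sense of where the 50x comes from, here is a minimal c# sketch that times the same escape-time loop once with hardware doubles and once with the software-emulated decimal type. The sample point, iteration count, and all names are invented for illustration; the exact ratio will vary by machine.

```csharp
using System;
using System.Diagnostics;

class PrecisionTiming
{
    static int IterateDouble(double cx, double cy, int maxIter)
    {
        double zx = 0.0, zy = 0.0;
        int i = 0;
        while (i < maxIter && zx * zx + zy * zy <= 4.0)
        {
            double tmp = zx * zx - zy * zy + cx;   // z = z^2 + c, real part
            zy = 2.0 * zx * zy + cy;               // imaginary part
            zx = tmp;
            i++;
        }
        return i;
    }

    static int IterateDecimal(decimal cx, decimal cy, int maxIter)
    {
        decimal zx = 0m, zy = 0m;
        int i = 0;
        while (i < maxIter && zx * zx + zy * zy <= 4m)
        {
            decimal tmp = zx * zx - zy * zy + cx;
            zy = 2m * zx * zy + cy;
            zx = tmp;
            i++;
        }
        return i;
    }

    static void Main()
    {
        // c = (-0.2, 0.2) lies inside the set, so both loops run all iterations.
        var sw = Stopwatch.StartNew();
        IterateDouble(-0.2, 0.2, 1000000);
        Console.WriteLine("double:  " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        IterateDecimal(-0.2m, 0.2m, 1000000);
        Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + " ms");
    }
}
```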
Decimal might not be the best choice of type; I suspect there is some inefficiency when a binary machine emulates decimal math. I don't know what your programming language of choice supports directly, but you do have a few other options. Bignum (AKA arbitrary-precision integer) arithmetic can be used to do fixed-point calculations; that relies mostly on the hardware's integer multiply support, which is generally good these days. It is also possible to use a pair of double precision floating point values to represent one higher-precision number. This can reap the speed benefits of the underlying hardware, but still has a bit of software overhead.
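Here is a rough c# sketch of that pair-of-doubles idea (often called double-double arithmetic), using the classic Dekker/Knuth error-free transformations. All names are invented, and real implementations need more care (normalization, edge cases) than this shows.

```csharp
struct DoubleDouble
{
    public double Hi, Lo;   // value = Hi + Lo, with |Lo| much smaller than |Hi|

    // Knuth's two-sum: s + err == a + b exactly.
    static void TwoSum(double a, double b, out double s, out double err)
    {
        s = a + b;
        double bb = s - a;
        err = (a - (s - bb)) + (b - bb);
    }

    // Dekker's split of a double into two halves that multiply exactly.
    static void Split(double a, out double hi, out double lo)
    {
        double t = 134217729.0 * a;   // 2^27 + 1
        hi = t - (t - a);
        lo = a - hi;
    }

    // Dekker's two-product: p + err == a * b exactly (no FMA needed).
    static void TwoProduct(double a, double b, out double p, out double err)
    {
        p = a * b;
        Split(a, out double ahi, out double alo);
        Split(b, out double bhi, out double blo);
        err = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo;
    }

    public static DoubleDouble Add(DoubleDouble a, DoubleDouble b)
    {
        TwoSum(a.Hi, b.Hi, out double s, out double e);
        e += a.Lo + b.Lo;
        TwoSum(s, e, out double hi, out double lo);      // renormalize
        return new DoubleDouble { Hi = hi, Lo = lo };
    }

    public static DoubleDouble Mul(DoubleDouble a, DoubleDouble b)
    {
        TwoProduct(a.Hi, b.Hi, out double p, out double e);
        e += a.Hi * b.Lo + a.Lo * b.Hi;                  // cross terms
        TwoSum(p, e, out double hi, out double lo);      // renormalize
        return new DoubleDouble { Hi = hi, Lo = lo };
    }
}
```

Every operation is a handful of ordinary double additions and multiplications, which is why this stays much closer to hardware speed than a general bignum type.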
Title: Re: What type to use
Post by: panzerboy on December 04, 2010, 01:15:02 PM

For the Mandelbrot or Julia formulas what you need are large fixed-point numbers. As I type I'm calculating a Mandelbrot down at 198 zooms that uses 256-bit arithmetic, almost certainly fixed point rather than floating point.

This link says you can do 64-bit integers in c#:
http://msdn.microsoft.com/en-us/library/exx3b86w.aspx

Oh, I've found the page for the decimal type:
http://msdn.microsoft.com/en-us/library/364x0z75.aspx

Interesting, that page says decimal is a 128-bit type. I would have thought a 'decimal' would be a true BCD-encoded type. The x86 instruction set still has (I believe) instructions to do math on binary coded decimal values. That encoding stores one decimal digit in each 4-bit 'nibble' of a byte, two digits per byte: the values 0-9 mean the digits 0-9, and the higher codes A-F can stand for things like the negative sign and the decimal point. This stuff is all related to COBOL, a hold-over from the earliest days when Intel thought manufacturers would build minicomputers from their processors and run COBOL on them. The advantage of BCD for accounting is that you never have "bicimal" problems with numbers like 0.2, which cannot be represented exactly in binary (0.001100110011...).

The tricky thing with doing multiple-word arithmetic is the carry (or, for subtraction, the borrow). If I multiply two 32-bit numbers the result is a 64-bit number. If you need 96 bits of precision you're back to 32-bit multiplications, because the biggest register that can hold a full product is 64 bits. So think of two 96-bit values A and B: you split them into 32-bit chunks AH, AM, AL (high, middle, low) and BH, BM, BL. You're still doing 64-bit multiplications because you want both the high and low 32 bits of each result (each chunk sits in a 64-bit register with its high 32 bits set to 0). The scheme looks something like this:

AH * BH gives RH, RM
AH * BM gives SM, SL
AH * BL gives TL   (discard the lower 32 bits of the result, it's beyond our 96 bits of precision)
AM * BH gives UM, UL
AM * BM gives VL
AL * BH gives WL

XL = WL + VL
XM = carry from above
XL += UL
XM += carry
XL += TL
XM += carry
XL += SL
XM += carry
XM += UM
XH = carry
XM += SM
XH += carry
XM += RM
XH += carry
XH += RH

That's 6 multiplications and 13 additions and 2 carries that you now have to do, just because you added another 32 bits of precision. I'll leave it as an exercise for you to figure out how many more multiplications and additions/carries are needed for another 32 bits, to get to the decimal type's 128 bits.

If you're running 32-bit Windows I don't think the 64-bit integer multiplication is available, so you have to break the numbers down into 16-bit chunks instead. Clever programs use the SIMD instructions to do several multiplications at once; SIMD can do two 64-bit or four 32-bit multiplications at once. I don't know how they deal with the carries though.

For my own amusement I've worked out the steps for 128 bits. Split each value into four 32-bit chunks, a1 a2 a3 a4 and b1 b2 b3 b4 (highest first):

a1 * b1 = c1, c2
a1 * b2 = d2, d3
a1 * b3 = e3, e4
a1 * b4 = f4
a2 * b1 = g2, g3
a2 * b2 = h3, h4
a2 * b3 = i4
a3 * b1 = j3, j4
a3 * b2 = k4
a4 * b1 = l4

x4 = e4 + f4
x3 = carry
x4 += h4
x3 += carry
x4 += i4
x3 += carry
x4 += j4
x3 += carry
x4 += k4
x3 += carry
x4 += l4
x3 += carry
x3 += d3
x2 = carry
x3 += e3
x2 += carry
x3 += g3
x2 += carry
x3 += h3
x2 += carry
x3 += j3
x2 += carry
x2 += c2
x1 = carry
x2 += d2
x1 += carry
x2 += g2
x1 += carry
x1 += c1

10 multiplications, 25 additions and 3 carries, ouch!
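For reference, here is a c# sketch of the first (96-bit) scheme above, with a value held as three 32-bit limbs. The struct and method names are invented, and like the post it simply drops the lowest partial products rather than rounding them.

```csharp
struct UFixed96
{
    public uint H, M, L;   // high, middle, low 32-bit chunks

    // Approximate top 96 bits of (a * b), following the 6-multiplication scheme.
    public static UFixed96 MulHigh(UFixed96 a, UFixed96 b)
    {
        // Each 32x32 multiplication is done in a 64-bit register, giving a
        // (high, low) pair of 32-bit halves.
        ulong r = (ulong)a.H * b.H;           // RH, RM
        ulong s = (ulong)a.H * b.M;           // SM, SL
        ulong t = (ulong)a.H * b.L >> 32;     // TL (lower 32 bits discarded)
        ulong u = (ulong)a.M * b.H;           // UM, UL
        ulong v = (ulong)a.M * b.M >> 32;     // VL (lower 32 bits discarded)
        ulong w = (ulong)a.L * b.H >> 32;     // WL (lower 32 bits discarded)

        // Low column; anything above 32 bits is the carry into the middle limb.
        ulong xl = w + v + t + (u & 0xFFFFFFFF) + (s & 0xFFFFFFFF);
        // Middle column plus carries; its overflow carries into the high limb.
        ulong xm = (xl >> 32) + (u >> 32) + (s >> 32) + (r & 0xFFFFFFFF);
        ulong xh = (xm >> 32) + (r >> 32);

        return new UFixed96 { H = (uint)xh, M = (uint)xm, L = (uint)xl };
    }
}
```

A real fixed-point fractal type would also need signed values and a chosen binary-point position, which this sketch (like the post) leaves out.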
Title: Re: What type to use
Post by: kronikel on December 04, 2010, 07:31:50 PM

Well, I didn't understand any bit of that. I need somewhere I can learn what is meant by things like precision, bit, byte, 64 vs 32, and all that kind of stuff. I was just looking for a quick type switch :tongue1:

Title: Re: What type to use
Post by: sonofthort on December 04, 2010, 07:57:25 PM

Other compilers (such as gcc, I believe) implement an 80-bit float and even a 128-bit float. What all these bits mean is that the type has higher precision and lets you zoom further. You will want to look into "arbitrary precision", as these are special types designed for this kind of thing (as already said, bignum is an example of this). Also, since speed will be severely impacted by using a special type, I would suggest using double *until* you are zoomed in far enough that double is no longer precise enough, then switching over to the more precise type. You might want to look into template classes and functions to achieve this.
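c# doesn't have C++ templates, but a rough equivalent of that suggestion (sketched below, with all names invented) is a generic escape-time loop driven by a small arithmetic interface, so the same code can run on fast hardware doubles near the surface and on a slower, more precise type when zoomed in deep.

```csharp
using System;

interface IArith<T>
{
    T Add(T a, T b);
    T Sub(T a, T b);
    T Mul(T a, T b);
    bool Escaped(T magnitudeSquared);   // |z|^2 > 4 ?
}

struct DoubleOps : IArith<double>
{
    public double Add(double a, double b) => a + b;
    public double Sub(double a, double b) => a - b;
    public double Mul(double a, double b) => a * b;
    public bool Escaped(double m2) => m2 > 4.0;
}

struct DecimalOps : IArith<decimal>
{
    public decimal Add(decimal a, decimal b) => a + b;
    public decimal Sub(decimal a, decimal b) => a - b;
    public decimal Mul(decimal a, decimal b) => a * b;
    public bool Escaped(decimal m2) => m2 > 4m;
}

static class Fractal
{
    // One escape-time loop, usable with either numeric type.
    public static int Iterate<T, TOps>(T cx, T cy, T zero, int maxIter)
        where TOps : struct, IArith<T>
    {
        TOps ops = default(TOps);
        T zx = zero, zy = zero;
        for (int i = 0; i < maxIter; i++)
        {
            T zx2 = ops.Mul(zx, zx);
            T zy2 = ops.Mul(zy, zy);
            if (ops.Escaped(ops.Add(zx2, zy2)))
                return i;
            T tmp = ops.Add(ops.Sub(zx2, zy2), cx);          // z = z^2 + c
            zy = ops.Add(ops.Mul(ops.Add(zx, zx), zy), cy);  // 2*zx*zy + cy
            zx = tmp;
        }
        return maxIter;
    }

    static void Main()
    {
        // Fast path while double still has enough digits, slow path deeper in.
        int fast = Iterate<double, DoubleOps>(-0.2, 0.2, 0.0, 1000);
        int slow = Iterate<decimal, DecimalOps>(-0.2m, 0.2m, 0m, 1000);
        Console.WriteLine(fast + " " + slow);
    }
}
```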
Title: Re: What type to use
Post by: panzerboy on December 04, 2010, 11:34:47 PM

If you don't understand, here is a good starting place:
http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

Note there are a couple of possible libraries for doing arbitrary-precision arithmetic in c#; there is more choice for c/c++. c# runs on a managed runtime, similar to Java, so I would think you'd get better performance from c++ or c.
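As an illustration of the bignum-as-fixed-point route (not a specific library recommendation from the thread), here is a minimal c# sketch built on System.Numerics.BigInteger, which ships with .NET 4. The number of fractional bits and all names are invented; a dedicated fixed-width type like the 96-bit one above will be much faster.

```csharp
using System;
using System.Numerics;

static class BigFixed
{
    const int FracBits = 128;                          // position of the binary point
    static readonly BigInteger One = BigInteger.One << FracBits;

    // Convert a double into fixed point: scale by 2^52, then shift up the rest.
    static BigInteger FromDouble(double v) =>
        new BigInteger(v * (double)(1L << 52)) << (FracBits - 52);

    // Fixed-point multiply: full product, then drop the extra fractional bits.
    static BigInteger Mul(BigInteger a, BigInteger b) => (a * b) >> FracBits;

    public static int Iterate(BigInteger cx, BigInteger cy, int maxIter)
    {
        BigInteger zx = 0, zy = 0;
        BigInteger four = One << 2;
        for (int i = 0; i < maxIter; i++)
        {
            BigInteger zx2 = Mul(zx, zx), zy2 = Mul(zy, zy);
            if (zx2 + zy2 > four) return i;            // escaped
            BigInteger tmp = zx2 - zy2 + cx;           // z = z^2 + c
            zy = Mul(zx << 1, zy) + cy;
            zx = tmp;
        }
        return maxIter;
    }

    static void Main()
    {
        Console.WriteLine(Iterate(FromDouble(-0.2), FromDouble(0.2), 1000));
    }
}
```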