Topic: squaring benchmark
Description: there's more than one way to do it

claude
« on: April 10, 2016, 09:07:43 PM »

I benchmarked some implementations of complex number squaring:
https://mathr.co.uk/blog/2016-04-10_complex_squaring.html

Kalles Fraktaler's way (3 real squares) is faster than Karatsuba (2 real multiplies) only for MPFR precisions between about 3000 and 100000 bits.  The naive method (2 squares, 1 multiplication) is never the best.
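To make the comparison concrete, here is a minimal sketch of the three formulas written directly against MPFR (illustrative only, not the benchmark code; the function names and the passed-in temporaries are mine, and I'm assuming the "3 real squares" variant forms 2ab as (a+b)^2 - a^2 - b^2):

Code:
#include <mpfr.h>

/* (a + b i)^2 = (a^2 - b^2) + (2ab) i
   re, im, t1, t2 are caller-initialized mpfr_t at the working precision
   and must be distinct from a and b. */

/* naive: 2 squares + 1 multiply */
static void csqr_naive(mpfr_t re, mpfr_t im, mpfr_t a, mpfr_t b,
                       mpfr_t t1, mpfr_t t2)
{
    mpfr_sqr(t1, a, MPFR_RNDN);          /* a^2       */
    mpfr_sqr(t2, b, MPFR_RNDN);          /* b^2       */
    mpfr_mul(im, a, b, MPFR_RNDN);       /* ab        */
    mpfr_mul_2ui(im, im, 1, MPFR_RNDN);  /* 2ab       */
    mpfr_sub(re, t1, t2, MPFR_RNDN);     /* a^2 - b^2 */
}

/* "Karatsuba": 2 multiplies */
static void csqr_karatsuba(mpfr_t re, mpfr_t im, mpfr_t a, mpfr_t b,
                           mpfr_t t1, mpfr_t t2)
{
    mpfr_add(t1, a, b, MPFR_RNDN);       /* a + b      */
    mpfr_sub(t2, a, b, MPFR_RNDN);       /* a - b      */
    mpfr_mul(re, t1, t2, MPFR_RNDN);     /* (a+b)(a-b) */
    mpfr_mul(im, a, b, MPFR_RNDN);       /* ab         */
    mpfr_mul_2ui(im, im, 1, MPFR_RNDN);  /* 2ab        */
}

/* Kalles Fraktaler style: 3 squares (assumed form) */
static void csqr_three_squares(mpfr_t re, mpfr_t im, mpfr_t a, mpfr_t b,
                               mpfr_t t1, mpfr_t t2)
{
    mpfr_sqr(t1, a, MPFR_RNDN);          /* a^2                       */
    mpfr_sqr(t2, b, MPFR_RNDN);          /* b^2                       */
    mpfr_add(im, a, b, MPFR_RNDN);       /* a + b                     */
    mpfr_sqr(im, im, MPFR_RNDN);         /* (a+b)^2                   */
    mpfr_sub(im, im, t1, MPFR_RNDN);
    mpfr_sub(im, im, t2, MPFR_RNDN);     /* (a+b)^2 - a^2 - b^2 = 2ab */
    mpfr_sub(re, t1, t2, MPFR_RNDN);     /* a^2 - b^2                 */
}
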
quaz0r
« Reply #1 on: April 10, 2016, 09:40:13 PM »

This sounds like a fundamental performance thing I was mostly unaware of. Whenever I square something I just do x*x... but there is a magic square function that implements some kind of math hax to make it faster? How about hardware floating point: is there a magic function for that too, or is it just x*x?

It looks like the C++ wrapper I'm using for MPFR calls mpfr_sqr for its sqr function. You mention mpc_sqr in your article. Is it the same thing?

Another thing I was wondering about is multiplying by 2. I noticed my code doing x+x and thought to myself: hmm, there ought to be an MPFR function for multiplying by 2 that just adjusts the exponent or something?
quaz0r
« Reply #2 on: April 10, 2016, 09:52:30 PM »

Oh, I just realized you were talking about complex squaring. But the fact that there is an mpfr_sqr function still makes me wonder whether there is some special implementation for squaring real numbers too?

OK, looking at your article again I see why my mind jumped straight from complex squaring to squaring real numbers: you list the operation counts for squaring and multiplying real numbers separately, so it does seem we are treating them as different operations.
claude
« Reply #3 on: April 10, 2016, 10:02:24 PM »

For hardware floating point I use x*x too, but for arbitrary precision a special squaring function can be a fair bit faster than a regular multiply.  Arbitrary-precision libraries store a number as a vector of limbs, each small enough that hardware operations can be used on them.  Here's a multiplication with 3 limbs (the number viewed as a polynomial in the base), truncated after the x^2 term:

(a + bx + cx^2)(d + ex + fx^2) = ad + (ae + bd) x + (be + af + cd) x^2 + O(x^3)

Now for squaring, a = d, b = e, c = f, so the right-hand side simplifies:

(a + bx + cx^2)^2 = a^2 + 2ab x + (b^2 + 2ac) x^2 + O(x^3)

Here a, b, c are typically 64-bit words (sometimes 32-bit, it depends), and this doesn't take into account the carrying/normalization needed to fit everything together properly, but you can already see that the multiply takes 6 word-multiplies and 3 word-adds while the squaring takes only 4 word-multiplies, 1 word-add and 2 shifts.

In actuality this grade-school algorithm probably isn't used by MPFR and other high-quality arbitrary-precision libraries except at low precisions; there are asymptotically faster algorithms like Karatsuba and Toom-Cook, and even FFT-based methods for when you need to multiply huge numbers.
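
To make that operation count concrete, here is a toy C sketch of the truncated 3-limb schoolbook multiply and square above.  This is not how MPFR actually does it; it uses deliberately small 16-bit limbs so the 64-bit column sums cannot overflow, and it stops before the carrying/normalization step mentioned above:

Code:
#include <stdint.h>

/* column sums of the truncated product, i.e. the coefficients of
   x^0, x^1, x^2 before any carrying/normalization */
typedef struct { uint64_t c0, c1, c2; } cols3;

/* (a + bx + cx^2)(d + ex + fx^2) truncated: 6 limb-multiplies, 3 adds */
static cols3 mul3_truncated(const uint16_t p[3], const uint16_t q[3])
{
    cols3 r;
    r.c0 = (uint64_t)p[0] * q[0];                              /* ad           */
    r.c1 = (uint64_t)p[0] * q[1] + (uint64_t)p[1] * q[0];      /* ae + bd      */
    r.c2 = (uint64_t)p[1] * q[1] + (uint64_t)p[0] * q[2]
         + (uint64_t)p[2] * q[0];                              /* be + af + cd */
    return r;
}

/* (a + bx + cx^2)^2 truncated: 4 limb-multiplies, 1 add, 2 shifts */
static cols3 sqr3_truncated(const uint16_t p[3])
{
    uint64_t aa = (uint64_t)p[0] * p[0];   /* a^2 */
    uint64_t ab = (uint64_t)p[0] * p[1];   /* ab  */
    uint64_t bb = (uint64_t)p[1] * p[1];   /* b^2 */
    uint64_t ac = (uint64_t)p[0] * p[2];   /* ac  */
    cols3 r;
    r.c0 = aa;
    r.c1 = ab << 1;                        /* 2ab       */
    r.c2 = bb + (ac << 1);                 /* b^2 + 2ac */
    return r;
}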

MPC is a C library for complex numbers built on top of MPFR (which handles real numbers).  mpc_sqr squares a complex number, mpfr_sqr squares a real number, and mpfr_mul_2si multiplies by a power of two very cheaply (it essentially just adjusts the exponent).
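
A quick usage sketch for the MPFR functions (illustrative; the variable names are mine), answering the x+x question above:

Code:
#include <mpfr.h>

void demo(mpfr_t y, mpfr_t x)   /* y and x assumed already initialized */
{
    mpfr_sqr(y, x, MPFR_RNDN);          /* y = x^2, cheaper than mpfr_mul(y, x, x, ...) */
    mpfr_mul_2si(y, y, 1, MPFR_RNDN);   /* y *= 2   (just an exponent adjustment)       */
    mpfr_mul_2si(y, y, -3, MPFR_RNDN);  /* y /= 8   (negative exponents divide)         */
}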
Kalles Fraktaler
« Reply #4 on: April 10, 2016, 11:08:48 PM »

claude, so you did all the calculations in MPFR, right?
Does that take into account the optimizations one can do when calculating the Mandelbrot set?
That is, when using an array of hardware datatypes for arbitrary precision of, say, length 10, multiplying would involve 10x10 = 100 operations, but you can ignore half of them since the output precision is the same as the input. And when squaring, you can ignore 75% of them, since half of the remaining 50% are duplicates.
I hope you understand what I mean. This is well described by the creator of Fractal eXtreme (Bruce); I can find it for you if you want.
I assume the gain from a dedicated squaring is even bigger when this optimization is taken into account...

Want to create DEEP Mandelbrot fractals 100 times faster than the commercial programs, for FREE? One hour or one minute? Three months or one day? Try Kalles Fraktaler http://www.chillheimer.de/kallesfraktaler
http://www.facebook.com/kallesfraktaler
claude
« Reply #5 on: April 10, 2016, 11:51:32 PM »

Yes, I used MPFR for all the arbitrary-precision calculations in the benchmark.  MPFR has lots of optimisations built in, like the ones you mention.  My benchmark only compares some different ways of using MPFR to do complex squaring (MPFR itself is only for real numbers).  I could have been clearer in my post: the reason I counted squares and multiplications separately is the MPFR-provided optimized functions (where square(x) is cheaper than multiply(x,x)).  My reply to quaz0r was about real numbers; you explained it much more concisely, thanks!
quaz0r
« Reply #6 on: April 11, 2016, 01:28:06 AM »

To be clear, then: is what Karl mentioned actually a Mandelbrot-specific optimization, or just a general optimization in arbitrary-precision implementations? If the latter, I'm sure MPFR implements it; if the former, I imagine it does not. I've seen people mention it on previous occasions as a Mandelbrot-specific optimization, but it sounds like that is not the case?
claude
« Reply #7 on: April 11, 2016, 01:34:12 AM »

None of this is Mandelbrot-specific: MPFR provides mpfr_sqr and mpfr_mul, and the MPC library for complex numbers over MPFR provides mpc_sqr (which uses (x+y)(x-y) for the real part) and mpc_mul too.
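
For reference, a minimal MPC usage sketch (illustrative values, not taken from any of the programs discussed):

Code:
#include <mpc.h>

void square_once(void)
{
    mpc_t z;
    mpc_init2(z, 1000);                    /* 1000-bit real and imaginary parts */
    mpc_set_d_d(z, 1.5, -0.25, MPC_RNDNN); /* z = 1.5 - 0.25i                   */
    mpc_sqr(z, z, MPC_RNDNN);              /* z = z^2; the real part is formed  */
                                           /* via (x+y)(x-y) as mentioned above */
    mpc_clear(z);
}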
quaz0r
« Reply #8 on: April 11, 2016, 02:15:10 AM »

Your benchmark results are pretty impressive for Karatsuba. Looking at my reference calculation, it seems I implemented it by calculating a^2 and b^2 at full precision, saving (a^2 + b^2) * 1e-6 for each iteration to use for Paul's glitch detection, and then recycling the a^2 and b^2 for the (a^2 - b^2) of the next iteration of z. I guess I could instead do the Karatsuba thing and get away with calculating a^2 and b^2 at lower precision just for the glitch detection? And it would be faster?
claude
« Reply #9 on: April 11, 2016, 02:18:43 AM »

Something to try, I guess; try it and report back. (I'm guessing that rounding a and b to double precision and then calculating a^2 + b^2 should work fine for glitch detection.)
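
Something like this minimal sketch of that suggestion (mpfr_get_d rounds to double; the 1e-6 factor is the tolerance quaz0r mentioned, and the function name is mine):

Code:
#include <mpfr.h>

/* threshold for Pauldelbrot-style glitch detection, computed in double
   precision from the full-precision reference orbit values a and b */
double glitch_tolerance(mpfr_t a, mpfr_t b)
{
    double ad = mpfr_get_d(a, MPFR_RNDN);
    double bd = mpfr_get_d(b, MPFR_RNDN);
    return (ad * ad + bd * bd) * 1e-6;
}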
quaz0r
« Reply #10 on: April 11, 2016, 07:08:14 AM »

So is Karatsuba better than naive for hardware floating point as well as for arbitrary precision? In days gone by I think addition was faster than multiplication, but they are supposed to be equal now. Here the difference comes down to 2 multiplies and 1 add versus 2 adds and 1 multiply, so I guess it should be a push?
claude
« Reply #11 on: April 11, 2016, 07:31:20 AM »

The naive way might be faster because the compiler could use SIMD to do vec3(x, y, x) * vec3(x, y, y) = vec3(x^2, y^2, xy) in one instruction.  But this is just a guess; disassembling a small snippet would tell for sure.
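
For anyone who wants to disassemble and check, here are the two hardware-double variants side by side (a sketch; the function names are mine):

Code:
/* real part via 2 squares: re = x^2 - y^2 */
static inline void csqr_naive_d(double *re, double *im, double x, double y)
{
    *re = x * x - y * y;
    *im = 2.0 * x * y;
}

/* real part via 1 multiply and 2 adds: re = (x+y)(x-y) */
static inline void csqr_karatsuba_d(double *re, double *im, double x, double y)
{
    *re = (x + y) * (x - y);
    *im = 2.0 * x * y;
}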
quaz0r
« Reply #12 on: April 11, 2016, 07:55:28 AM »

Hmm, well, personally I'm already using explicit vectorization, iterating N points in parallel for vector size N, so it's probably still a push for me, I guess. On a side note, I think compiler auto-vectorization is basically utter crap / practically worthless as yet. Compilers can auto-vectorize the simplest of loops and inject some miscellaneous vector code like what you're talking about (though probably not even that fancy), and even then I think people who have looked at the code actually being generated found the performance similar to, or sometimes worse than, what you get when the compiler doesn't emit vector instructions at all. *shrug*
Kalles Fraktaler
« Reply #13 on: April 11, 2016, 09:43:51 AM »

Quote from: quaz0r on April 11, 2016, 01:28:06 AM
To be clear, then: is what Karl mentioned actually a Mandelbrot-specific optimization, or just a general optimization in arbitrary-precision implementations? If the latter, I'm sure MPFR implements it; if the former, I imagine it does not. I've seen people mention it on previous occasions as a Mandelbrot-specific optimization, but it sounds like that is not the case?
I don't know about MPFR, but I do know that other arbitrary-precision libraries do not implement the Mandelbrot optimization.
If you want a 100% correct multiplication of, say, 10 digits, you must do all 100 operations and produce 20 digits, because the least significant digit can theoretically turn all the 9s into 10s and change the most significant digit. So in those libraries, even if you specify 10 digits of precision, 20 will be used internally for a correct multiplication. But when calculating the Mandelbrot set such small contributions can be ignored, even in apparently non-approximating programs like Fractal eXtreme.
But the gain is 50%, and for squaring, 75%!
Just saying...
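
A toy sketch of the idea as I understand it (not Fractal eXtreme's or Kalles Fraktaler's actual code; fixed-point limbs with limb 0 most significant, deliberately small 16-bit limbs so the 64-bit column sums cannot overflow, and carry propagation omitted):

Code:
#include <stdint.h>

#define NLIMB 10   /* "length 10" from the post */

/* truncated multiply: of the 10x10 = 100 partial products p[i]*q[j],
   keep only those with i + j < NLIMB, i.e. the ones that can affect
   the NLIMB limbs of the result we keep -- roughly half of them */
static void mul_truncated(uint64_t col[NLIMB],
                          const uint16_t p[NLIMB], const uint16_t q[NLIMB])
{
    for (int k = 0; k < NLIMB; k++) col[k] = 0;
    for (int i = 0; i < NLIMB; i++)
        for (int j = 0; i + j < NLIMB; j++)
            col[i + j] += (uint64_t)p[i] * q[j];
}

/* truncated square: additionally use p[i]*p[j] == p[j]*p[i], so only the
   i <= j products are computed and the off-diagonal ones are doubled --
   roughly a quarter of the original 100 products remain */
static void sqr_truncated(uint64_t col[NLIMB], const uint16_t p[NLIMB])
{
    for (int k = 0; k < NLIMB; k++) col[k] = 0;
    for (int i = 0; i < NLIMB; i++)
        for (int j = i; i + j < NLIMB; j++) {
            uint64_t prod = (uint64_t)p[i] * p[j];
            col[i + j] += (i == j) ? prod : (prod << 1);
        }
}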