Author Topic: fastest way to write to the screen with a GPU
0xbeefc0ffee
« on: February 20, 2017, 11:08:32 PM »

I've recently been learning OpenCL, and I've been using OpenGL for a while.

I have a program, which I recently posted about, that I want to make run on the GPU:
http://www.fractalforums.com/index.php?action=gallery;sa=view;id=20076

What's the fastest way to write directly to pixels (no rasterization)?

Or is it fastest to just use SDL and write pixels from the CPU, as my program already does?


lycium
« Reply #1 on: February 21, 2017, 01:00:07 AM »

Make an OpenGL texture, get an OpenCL image handle for it (OpenCL/OpenGL sharing), acquire it, write to it from an OpenCL kernel, release it, and display it with OpenGL.

https://software.intel.com/en-us/articles/opencl-and-opengl-interoperability-tutorial
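
Roughly like this; a minimal sketch of the per-frame flow, assuming an OpenGL context is already current and that ctx, queue, kernel, W and H were set up elsewhere (those names are placeholders, and the device must expose cl_khr_gl_sharing):

Code:
// The OpenCL kernel that `kernel` was built from: one pixel per work-item.
const char* kernelSrc = R"(
__kernel void render(write_only image2d_t out) {
    int2 p = (int2)(get_global_id(0), get_global_id(1));
    float2 uv = convert_float2(p) /
                (float2)(get_image_width(out), get_image_height(out));
    write_imagef(out, p, (float4)(uv.x, uv.y, 0.5f, 1.0f));
})";

// One-time setup: a GL texture, shared with OpenCL.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

cl_int err;
cl_mem clTex = clCreateFromGLTexture(ctx, CL_MEM_WRITE_ONLY,
                                     GL_TEXTURE_2D, 0, tex, &err);

// Per frame: hand the texture to OpenCL, render into it, hand it back.
glFinish();                                                   // GL must be done with it
clEnqueueAcquireGLObjects(queue, 1, &clTex, 0, NULL, NULL);   // the "lock"
clSetKernelArg(kernel, 0, sizeof(clTex), &clTex);
size_t gsz[2] = { W, H };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, gsz, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &clTex, 0, NULL, NULL);   // the "unlock"
clFinish(queue);                                              // CL must finish before GL draws
// ...then draw a fullscreen textured quad with OpenGL as usual.

Note that the OpenCL context itself must be created with interop properties (CL_GL_CONTEXT_KHR and friends); the Intel tutorial above covers that part.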


I'm glad to hear of more people looking past the CUDA hype; screw proprietary APIs!
claude
« Reply #2 on: February 21, 2017, 01:06:22 AM »

You may also want to implement a fallback method that copies via the CPU, for OpenCL devices that don't support OpenGL sharing (for example, for testing on machines with a very good CPU but a not-so-good GPU).
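
A minimal sketch of that fallback, assuming the kernel wrote RGBA8 pixels into an ordinary cl_mem buffer clBuf (a hypothetical name) instead of a shared texture:

Code:
// Fallback when cl_khr_gl_sharing is unavailable: round-trip through host memory.
std::vector<unsigned char> pixels(W * H * 4);
clEnqueueReadBuffer(queue, clBuf, CL_TRUE, 0,              // CL_TRUE: blocking read
                    pixels.size(), pixels.data(), 0, NULL, NULL);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data()); // re-upload to GL
// ...then draw the textured quad as before.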
0xbeefc0ffee
« Reply #3 on: February 21, 2017, 01:25:02 AM »

Quote from: lycium on February 21, 2017, 01:00:07 AM
Make an OpenGL texture, get an OpenCL image handle for it, acquire it, write to it from an OpenCL kernel, release it, and display it with OpenGL.

Thank you, it's nice to hear a straightforward answer. I spent a lot of time being confused by some ancient documentation.

I'll admit that as a freetard myself, OpenCL/GL were the obvious choices for me, but Nvidia's "GPU Gems" series was so pretty it made me have doubts.
lycium
« Reply #4 on: February 21, 2017, 01:52:34 AM »

Well, actually, CUDA is great: it gets a lot of extra features much sooner than OpenCL, and it obviously works really well on Nvidia devices because of their early and strong push. But AMD makes awesome GPUs too (and Intel's iGPU is pretty great in many ways), so it's basically unacceptable not to be able to use them all with the same language/API.

Besides that, I actually like how OpenCL is structured, and if Nvidia and Apple ever get around to supporting OpenCL 2.0 it will be a glorious, glorious day. Apple is the biggest offender here, for heading the OpenCL consortium and then (deliberately?) abandoning it for their Metal API; it really smacks of subterfuge. Google needs to get off their asses and make OpenCL standard on Android, too.
quaz0r
« Reply #5 on: February 21, 2017, 03:27:06 AM »

OpenCL 2.2 sounds really sweet, actually: you get modern C++ and everything, as I recall. Of course, we will likely not see support for OpenCL beyond 2.0 within our lifetimes, because the corporate MO is obviously to push their own proprietary stuff to try to lock customers into their products, and not just to push it but to actively and purposely withhold good open-standards support. It is a real problem and a real shame. People need to get mad and let these companies know that they will not keep using their products under such business practices, because corporate profit is the only language they speak and the only message that might affect their behavior.

ker2x
« Reply #6 on: June 25, 2017, 11:41:41 AM »

To be honest, I don't care much about interoperability; I use OpenCL because I find it easier to use than CUDA. CUDA has more features, but I don't understand them. Apple is the new Microsoft: don't expect anything but a proprietary clusterf*ck. By the way, has anyone tried Vulkan?

About the original question: I don't do it that way. I do the heavy computation on the GPU, then move the result back to the CPU to possibly do some more processing, and let whatever API/library I'm using do the drawing in the old-fashioned way. Unless you're doing high-FPS / low-latency work (which you probably aren't if you're computing fractals), it doesn't matter much performance-wise. A sketch of that path is below.
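
For instance, with SDL2 (which the original poster is already using), a minimal sketch, assuming a renderer exists and pixels was filled from the OpenCL buffer as in the fallback above:

Code:
// Stream the CPU-side RGBA pixel buffer to the screen the old-fashioned way.
SDL_Texture* sdlTex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA32,
                                        SDL_TEXTUREACCESS_STREAMING, W, H);
SDL_UpdateTexture(sdlTex, NULL, pixels.data(), W * 4);  // pitch = bytes per row
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, sdlTex, NULL, NULL);           // stretch to the window
SDL_RenderPresent(renderer);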
rethread
« Reply #7 on: July 14, 2017, 02:12:36 PM »

Containing my silly urges... just to add a couple of points here.

a) If you don't do your graphics generation in a pixel shader (that's the shader code running on the GPU, NOT the OpenGL host code!), it won't get the hardware acceleration.

b) With CUDA and companies protecting themselves, of course we're surrounded by a huge conspiracy, with GPUs coming out way under the speed they could reach, and there are reasons for this. It's disturbing to me, and I'm getting a headache again.

0xbeefc0ffee
« Reply #8 on: July 16, 2017, 01:08:38 AM »

Quote from: rethread on July 14, 2017, 02:12:36 PM
a) If you don't do your graphics generation in a pixel shader (that's the shader code running on the GPU, NOT the OpenGL host code!), it won't get the hardware acceleration.

That's exactly right.

Your bump reminded me that since I first posted this thread, I think I've figured out the answer to my original problem. I'll elaborate on what I found in case anybody gets here from Google.

To get the performance gains of a GPU for something fast and pixelwise, you can't work per pixel (operating on the integer coordinates of the screen). That's an antiquated way of doing things. GPUs are designed to work with floating-point values, and that's how they're best at computing what goes on the screen.

To make it work, you set up OpenGL so that there is a rectangle (a fullscreen quad) directly in front of the camera. Then you compute your ray tracer, fractal, etc. from the interpolated coordinates that OpenGL hands you in the fragment shader.

You don't get to operate on integer pixel coordinates and return RGB values if you want to be fast, because floats are the language of the GPU!

My error when I posted this thread was trying to avoid rasterization in order to go fast. The trick is to think of the screen as continuous during your computation (which happens in your shader program) and let OpenGL rasterize it for you. Fragments really are pixels when you're just looking at a rectangle parallel to the screen under an orthographic projection (i.e., no perspective at all).

Floats are the language of the GPU! A sketch of the setup is below.
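
A minimal sketch of the shader pair for that fullscreen quad; the Mandelbrot loop in the fragment shader is just an illustrative placeholder for whatever you compute per fragment:

Code:
// C++ host side: two GLSL shaders as raw strings. The quad is two triangles
// covering clip space [-1,1]^2, so no projection matrix is needed at all.
const char* vsSrc = R"(
#version 330 core
layout(location = 0) in vec2 pos;   // fullscreen quad vertices
out vec2 coord;                     // interpolated for every fragment
void main() {
    coord = pos;
    gl_Position = vec4(pos, 0.0, 1.0);
})";

const char* fsSrc = R"(
#version 330 core
in vec2 coord;   // continuous screen coordinate, courtesy of rasterization
out vec4 fragColor;
void main() {
    // Example per-fragment computation: a quick Mandelbrot escape count.
    vec2 c = coord * 1.5 - vec2(0.5, 0.0);
    vec2 z = vec2(0.0);
    int i = 0;
    for (; i < 256 && dot(z, z) < 4.0; ++i)
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
    float t = float(i) / 256.0;
    fragColor = vec4(t, t * t, sqrt(t), 1.0);
})";

// Quad vertex data: two triangles covering the whole screen.
const float quad[] = { -1,-1,  1,-1,  1, 1,   -1,-1,  1, 1,  -1, 1 };

Compile and link these as usual (glCreateShader / glCompileShader / glAttachShader / glLinkProgram), put quad in a VBO, and draw with glDrawArrays(GL_TRIANGLES, 0, 6); the fragment shader then runs your computation once per pixel.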
3dickulus
« Reply #9 on: August 07, 2017, 06:41:10 PM »

Quote from: ker2x on June 25, 2017, 11:41:41 AM
By the way, has anyone tried Vulkan?

Yes, just giving it a go now and slowly getting it. cube.c is around 4k LOC and cube.cpp around 3k LOC, which seems like a lot of code just to get a textured cube rendered, but fortunately some large brains are busy abstracting the OS dependence out of the way to reduce this tenfold. Some interesting reading here: http://blog.qt.io/blog/2017/06/06/vulkan-support-qt-5-10-part-1/

Quote from: ker2x on June 25, 2017, 11:41:41 AM
Unless you're doing high-FPS / low-latency work (which you probably aren't if you're computing fractals), it doesn't matter much performance-wise.

I agree. For high quality, as in the film industry and the art world, accumulating many frames and/or long render times will most likely always be the case, but the "fastest" way will be reserved for GUI feedback, because tolerable navigation requires it.