Author Topic: Camera maths - Help please!  (Read 506 times)
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« on: July 23, 2015, 01:38:45 PM »

I've gone quite far, but I've come to a point where I have no idea why something that should work in practice doesn't.

Goal: External handheld camera composited with Fragmentarium-rendered fractals.
Process:
- Shoot
- Track in Boujou
- Export camera as TXT (3x3 camera rotation matrix + vec3 position data)
- Put this as a const array in Frag (separate mat3 rotation array and vec3 pos array)
- Access the array rows by frame number (in fact I used a uniform float slider animated linearly across time to work around the imperfect frame addressing I got from a time*framerate expression; see the sketch after this list)
- Set fragmentarium camera accordingly
- Render out
- composite (in Premiere or any other app)
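A minimal sketch of that frame-addressing workaround, assuming a uniform float named Frame animated linearly (one unit per video frame) and the 250-entry camera arrays shown further down; this is not from the attached frags, just an illustration of rounding and clamping the index so slider drift can never read out of range:

Code:
// Hedged sketch (not from the attached frags): address the camera arrays
// by a rounded, clamped frame index instead of a raw time*FPS expression.
int camFrameIndex() {
	return int(clamp(floor(Frame + 0.5), 0.0, 249.0));
}

vec3 camPosForFrame() {
	return iCamDataPos[camFrameIndex()];
}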

This is what I got, and I've been wrapping my head around it for the last 3 days trying to work out why the camera from Fragmentarium won't match the original.


https://vimeo.com/moogaloop.swf?clip_id=134293286&server=vimeo.com&fullscreen=1&show_title=1&show_byline=1&show_portrait=0&color=01AAEA

The maths is either right or wrong; it can't be halfway right, can it?
And you can't blame insufficient data precision either, as all positions are given with 10 to 12 digits of precision.


Here's what I know:

  • The tracking data is 100% accurate; I verified the track in boujou and it also holds up when imported into After Effects.
  • I checked a few approaches: exporting matrices, exporting static euler angles, exporting moving euler angles, and going through an After Effects camera. All result in the same mismatches.
  • I believe I have my maths right, but I need someone to verify it as I may not be 100% correct. I'm just stuck and don't know where to go from here. Maybe I'm making a stupid mistake and I'm blind to it.

My coordinate system is right-handed, with Z pointing up, Y pointing forward and X pointing to the right.
Here's how the camera data is structured.
A sample of the data from boujou, with headers (just to give you an understanding of how it's described):
Code:
# boujou export: text
# Copyright (c) 2009, Vicon Motion Systems
# boujou Version: 5.0.1 49900
# Creation date : Thu Jul 23 12:13:25 2015
# The image sequence file name was /Users/Paco/Desktop/TRacking/Footage/AAAA6057.MOV
# boujou frame 0 is image sequence file 0
# boujou frame 249 is image sequence file 249
# One boujou frame for every image sequence frame
# Exporting camera data for boujou frames 0 to 249
# First boujou frame indexed to animation frame 1

#The Camera (One line per time frame)
#Image Size 1920 1080
#Filmback Size 36 20.25
#Line Format: Camera Rotation Matrix (9 numbers - 1st row, 2nd row, 3rd row) Camera Translation (3 numbers) Focal Length (mm)
#rotation applied before translation
#R(0,0) R(0,1) R(0,2) R(1,0) R(1,1) R(1,2) R(2,0) R(2,1) R(2,2) Tx Ty Tz F(mm)
0.663026147349 -0.747853717219 -0.033333850271 -0.348748512798 -0.269176434522 -0.897729648569 0.662397767775 0.606843360959 -0.439283886009 -2.774962905268 -1.750638150532 1.534247204442 25.674514
0.660936880844 -0.749591107736 -0.035715693238 -0.351836510813 -0.267484052424 -0.897030295674 0.662852554645 0.605446490536 -0.440523595167 -2.772439108285 -1.749194060190 1.537645566714 25.674514
0.659828739397 -0.750512283653 -0.036842187127 -0.351650913278 -0.265088198736 -0.897813946250 0.664053966072 0.605359033118 -0.438831141974 -2.773846044461 -1.748700011027 1.537922082660 25.674514
0.658998995239 -0.751217972186 -0.037307942015 -0.352224544644 -0.264398401269 -0.897792490254 0.664573693749 0.604785121899 -0.438835688962 -2.772464665984 -1.749206942765 1.534491852473 25.674514
0.656638760299 -0.753211772356 -0.038698377977 -0.354782632433 -0.263201263587 -0.897136766926 0.665548512279 0.602824286863 -0.440054606806 -2.769924828617 -1.749933083429 1.535866913227 25.674514
0.654052549436 -0.755453530972 -0.038797230813 -0.355054518019 -0.261300668771 -0.897584675523 0.667945770112 0.600842677447 -0.439131557904 -2.766813396980 -1.748345004429 1.536441770413 25.674514
0.652479616903 -0.756858549580 -0.037887800847 -0.353853615148 -0.260079470259 -0.898413205711 0.670117876625 0.599603039583 -0.437513687044 -2.766718225801 -1.749131303252 1.535711352046 25.674514
0.650062470043 -0.758903613280 -0.038523898978 -0.355221351508 -0.258676966089 -0.898278363675 0.671741450615 0.597621563338 -0.437734954693 -2.765505040353 -1.749549835259 1.532746813238 25.674514
0.647093804549 -0.761332874859 -0.040519893549 -0.358198875094 -0.256674549898 -0.897670174015 0.673025388820 0.595390988422 -0.438801090371 -2.764800831837 -1.747220662126 1.533190992750 25.674514
0.645933673069 -0.762227937198 -0.042169440968 -0.359177593629 -0.254705294704 -0.897840001940 0.673617952723 0.595091408613 -0.438297922838 -2.762317965746 -1.746147663167 1.535634810674 25.674514

This is converted into Fragmentarium-digestible data:
Code:
const mat3 iCamDataDir[250] = mat3[250](
mat3(0.6630261473,-0.7478537172,-0.03333385027,-0.3487485128,-0.2691764345,-0.8977296486,0.6623977678,0.606843361,-0.439283886),
mat3(0.6609368808,-0.7495911077,-0.03571569324,-0.3518365108,-0.2674840524,-0.8970302957,0.6628525546,0.6054464905,-0.4405235952),
mat3(0.6598287394,-0.7505122837,-0.03684218713,-0.3516509133,-0.2650881987,-0.8978139463,0.6640539661,0.6053590331,-0.438831142),
mat3(0.6589989952,-0.7512179722,-0.03730794202,-0.3522245446,-0.2643984013,-0.8977924903,0.6645736937,0.6047851219,-0.438835689),
mat3(0.6566387603,-0.7532117724,-0.03869837798,-0.3547826324,-0.2632012636,-0.8971367669,0.6655485123,0.6028242869,-0.4400546068),
// ... (remaining rows trimmed)


const vec3 iCamDataPos[250] = vec3[250](
vec3(-2.774962905,-1.750638151,1.534247204),
vec3(-2.772439108,-1.74919406,1.537645567),
vec3(-2.773846044,-1.748700011,1.537922083),
vec3(-2.772464666,-1.749206943,1.534491852),
vec3(-2.769924829,-1.749933083,1.535866913),
// ... (remaining rows trimmed)

Here's how I retrieve the camera position and orientation (this comes from 3D.frag and sits within the #vertex section).
Not sure why, but I needed to multiply the up vector by -1 to make it match (could this be wrong?)

Code:
// This is the file with coordinates and orientations
#include "CameraData_2.frag"

vec3 getCameraPos(){
	// Address array rows by frame number. Needed to add an animated uniform float
	// cast to int, because time*FPS was not reliable.
	vec3 cp = iCamDataPos[int(ceil(Frame))];
	cp = (cp + iCamPosOffs) * iCamPosMod * iCamScale;
	return cp;
}

vec3 getCameraRight(){
	mat3 tm = iCamDataDir[int(ceil(Frame))];
	return vec3(tm[0]) * iCamDirMod;
}

vec3 getCameraUp(){
	mat3 tm = iCamDataDir[int(ceil(Frame))];
	return -vec3(tm[1]) * iCamDirMod; // negated to make the up vector match
}

vec3 getCameraFw(){
	mat3 tm = iCamDataDir[int(ceil(Frame))];
	return vec3(tm[2]) * iCamDirMod;
}


I understand from this post that I can use the matrix rows directly as normalized axis vectors:
http://www.gamedev.net/topic/666236-converting-axis-angles-forward-right-up-to-euler/#entry5213778
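One caveat worth hedging here: GLSL mat3 constructors and the tm[i] indexing are column-major. If the boujou rows are pasted straight into mat3(...), the stored matrix is effectively the transpose of the boujou matrix, which means tm[0]/tm[1]/tm[2] actually return the original boujou rows, i.e. the axis vectors you want. A small illustrative helper (not taken from the attached frags) for getting at the other interpretation:

Code:
// Illustrative only: GLSL mat3(r00,r01,r02, r10,r11,r12, r20,r21,r22) stores
// each boujou ROW as a GLSL COLUMN, so tm[i] returns the i-th boujou row.
// The i-th row of the matrix as GLSL sees it would instead be:
vec3 mathematicalRow(mat3 tm, int i) {
	return vec3(tm[0][i], tm[1][i], tm[2][i]); // same as transpose(tm)[i]
}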

Further down, within #vertex main(), I have the following.
iCamWeight is just a float used to enable and mix in my imported camera positions; it is set to 1.
Adding normalize() to Dir/Up/Forward doesn't change anything, as I believe the matrix data is already normalized.
Code:
vec3 Upn;

if (iCamWeight > 0.0) {
	from = getCameraPos() * iCamWeight + Eye * (1.0 - iCamWeight);
	Dir  = getCameraFw()  * iCamWeight + normalize(Target - Eye) * (1.0 - iCamWeight);
	Upn  = getCameraUp()  * iCamWeight + normalize(Up) * (1.0 - iCamWeight);
	UpOrtho = normalize(Upn - dot(Dir, Upn) * Dir);
	Right = getCameraRight();
} else {
	from = Eye + Ey_;
	Dir = normalize((Target + Targe_) - (Eye + Ey_));
	Upn = normalize(Up);
	UpOrtho = normalize(Upn - dot(Dir, Upn) * Dir);
	Right = normalize(cross(Dir, UpOrtho));
}
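A hedged sketch for the blended branch: when iCamWeight is strictly between 0 and 1, the mixed vectors are neither unit length nor mutually orthogonal, and Right is not blended at all, so it may be safer to rebuild the whole basis after mixing (a sketch only, using the variable names from the snippet above, not the actual attached code):

Code:
// Hedged sketch: re-orthonormalize the blended camera basis.
Dir     = normalize(Dir);
UpOrtho = normalize(Upn - dot(Dir, Upn) * Dir); // Gram-Schmidt Up against Dir
Right   = normalize(cross(Dir, UpOrtho));       // derive Right from Dir and Up instead of taking it from the matrix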

That's pretty much it. What's wrong here?

I've attached the frags. To get it up and running, select the Default preset and apply, then select the range preset and apply.
Scrubbing in timeline animation mode should move the camera.

To test it against the source footage one needs to render and composite externally (I did manage to pull in a single-frame background, but did not get Frag to update it per frame). I'll be happy to share backplates if anyone needs them, but hopefully you can help just by looking at the maths and the code with a fresh eye.

T H A N K    Y O U

* MatchmoveFiles.zip (42.74 KB - downloaded 28 times.)
« Last Edit: July 23, 2015, 02:00:55 PM by Patryk Kizny » Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
kram1032
Fractal Senior
Posts: 1863

« Reply #1 on: July 23, 2015, 02:47:00 PM »

Hmm. Well, I'm no expert at this at all. But did you give Blender a try for this? Their camera tracking is becoming better almost by the minute.
Though I'm not sure whether it'll serve you for compositing in the same way After Effects would, so I dunno.
Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #2 on: July 23, 2015, 03:01:50 PM »

Looks like I spotted a flaw: the matrix data coming from boujou is ordered by row, while I assumed and declared the matrices by column (GLSL mat3 constructors are column-major).
Fixed.

Second thing - should I be doing:

Code:
vec3 getCameraFw(){
	mat3 rm = iCamDataDir[int(ceil(Frame))];
	return vec3(rm[2]); // third column of the rotation matrix
}

or

Code:
vec3 getCameraFw(){
	mat3 rm = iCamDataDir[int(ceil(Frame))];
	return rm * vec3(0.0, 1.0, 0.0); // rotation matrix multiplied by the forward vector
}
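For what it's worth, the two snippets are not equivalent: in GLSL, rm * vec3(0,1,0) picks out the second column rm[1], while rm[2] is the third column. Which column (or row) corresponds to "forward" depends on how the boujou rows were packed into the mat3 and on which axis boujou treats as forward, so the following only illustrates the identities and is not a statement about the exported data:

Code:
// Illustrative identities for a GLSL mat3 rm (column-major storage):
//   rm * vec3(1.0, 0.0, 0.0) == rm[0]   (first column)
//   rm * vec3(0.0, 1.0, 0.0) == rm[1]   (second column)
//   rm * vec3(0.0, 0.0, 1.0) == rm[2]   (third column)
// If the boujou rows were pasted straight into mat3(...), those columns hold
// the boujou rows; transpose(rm) * v would give the behaviour of the
// original row-major matrix instead.
vec3 rotateCameraAxis(mat3 rm, vec3 axis) {
	return rm * axis; // e.g. rotateCameraAxis(rm, vec3(0.0, 1.0, 0.0)) if Y is forward
}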



* CameraData_3.frag (42.61 KB - downloaded 24 times.)
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
cKleinhuis
Administrator
Fractal Senior
Posts: 7044
formerly known as 'Trifox'

« Reply #3 on: July 23, 2015, 03:16:26 PM »

way cool, you create the camera angle based upon incoming webcam ? niceeeeeeeeeeeeeeeeeee!
Logged

---

divide and conquer - iterate and rule - chaos is No random!
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #4 on: July 23, 2015, 03:20:10 PM »

way cool, you create the camera angle based upon incoming webcam ? niceeeeeeeeeeeeeeeeeee!

It's a DSLR camera tracked in boujou. But yes. Way cool, but it doesn't want to work the way I expected.
I feel dumb whenever I have to use maths.
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
3dickulus
Global Moderator
Fractal Senior
Posts: 1558

« Reply #5 on: July 23, 2015, 03:33:01 PM »

Wow, embedding a fractal object in the studio floor... don't trip on that! Seriously, you've made some great progress; I can't wait to see what you come up with for the demo reel.
I think this is a first for Fragmentarium! I haven't seen anything quite like this. KEEP GOING!!! Your results so far are awesome.
It took me a looong time to get path occlusion working; just keep at it and the secrets will reveal themselves. Sometimes you have to put it down and let the observations percolate in your head for a while.

The path resolution in the live video will, I think, be more detailed than the path in Fragmentarium. Do you need more samples to feed into Fragmentarium, or smoother motion with respect to the handheld cam?

Edit: just a thought. If you make iCamDataPos a slider in Fragmentarium it becomes magically available to the javascript engine, allowing you to set it there; you'll also be able to monitor the values in the GUI as they transition from one position to the next.
(Put it in the [Camera] group.)
« Last Edit: July 23, 2015, 03:50:44 PM by 3dickulus, Reason: thinking » Logged

Resistance is fertile...
You will be illuminated!

                            #B^] https://en.wikibooks.org/wiki/Fractals/fragmentarium
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #6 on: July 23, 2015, 06:57:58 PM »

Guys, thank you for the encouragement.
Can anyone help solve this? It looks like the matrix order was right anyway, because when I reversed it (row-column swap) I wasn't able to get any results that made sense. And it still does not work as expected. Could someone look into my camera maths?
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
claude
Fractal Bachius
Posts: 563

« Reply #7 on: July 23, 2015, 08:36:14 PM »

I tried the frags, but my driver doesn't like it (NVIDIA GTX550Ti):
Code:
Internal error: assembly compile error for fragment shader at offset 407518:
-- error message --
line 16493, column 1:  error: too many instructions
...(18k+ lines of GPU asm trimmed)...
# 18984 instructions, 38 R-regs

Looking at the video it feels like the rotation is fine, but the translation is messed up (it looks a bit stuck to the camera, rather than stuck to the floor).  Could be a lot of things (real vs virtual camera field of view, real vs virtual scene scale), but to narrow it down I suggest making a virtual scene with a simple wireframe cuboid that should line up exactly with the green crosses.
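Following up on the cuboid idea, here is a minimal, hypothetical distance estimator for an axis-aligned box (not taken from the attached frags) that could be unioned into the scene DE as a calibration object; lining it up with the tracked floor and markers helps isolate scale and field-of-view problems:

Code:
// Hypothetical calibration object: signed distance to an axis-aligned box
// with half-extents b, centred at c. Add it to the scene DE and check that
// its perspective and scale track the green crosses in the footage.
float boxDE(vec3 p, vec3 c, vec3 b) {
	vec3 q = abs(p - c) - b;
	return length(max(q, vec3(0.0))) + min(max(q.x, max(q.y, q.z)), 0.0);
}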
Logged
eiffie
Guest
« Reply #8 on: July 23, 2015, 11:19:11 PM »

Unfortunately I can't view the video to check, but the export says the rotation is applied before the translation. If that is true, then you might have to rotate the position data back into world space.
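A hedged sketch of what "rotate the position data back" could look like: if the export convention is x_cam = R * x_world + T (rotation applied before translation), then T is a camera-space translation and the camera's world-space position would be -Rᵀ T rather than T itself. This is only one interpretation of the header comment, not confirmed against boujou's documentation:

Code:
// Hedged sketch: if x_cam = R * x_world + T, the camera's world position
// is found by rotating the translation back into world space.
vec3 cameraWorldPos(mat3 R, vec3 T) {
	return -(transpose(R) * T);
}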
Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #9 on: July 24, 2015, 01:52:24 PM »

Guys! We nailed it.
There were a couple of things causing the mismatch.
I don't know the reason yet, but I've already got a solution.

This is very weird, but in order to make things match I needed to:
1) Scale the scene by a factor of 2.0.
2) Account for the fact that the camera FOV I added to the renderer for some reason operates on the vertical film dimension instead of the horizontal one. I'll have to look into that (a quick conversion sketch follows below).
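A quick sketch of the conversion implied by the export header (Filmback 36 x 20.25 mm, focal length in mm); whether the renderer's FOV parameter is horizontal or vertical decides which filmback dimension to use, and at this focal length the two differ a lot:

Code:
// Hedged sketch, using the filmback size from the boujou header.
float fovVerticalDeg(float focalMM)   { return degrees(2.0 * atan(0.5 * 20.25 / focalMM)); }
float fovHorizontalDeg(float focalMM) { return degrees(2.0 * atan(0.5 * 36.0  / focalMM)); }
// For F = 25.674514 mm this gives roughly 43 degrees vertical vs. roughly 70 degrees
// horizontal, so mixing up the two conventions produces exactly this kind of perspective mismatch.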

To figure it out I needed to follow Claude's suggestion and add a floor tile.
Then it was apparent that I had a perspective mismatch.

Now there's a lot of code optimization to do, because I believe the amount of stuff I added to #vertex may be overkill.

Made my day.
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #10 on: July 24, 2015, 02:03:50 PM »

https://vimeo.com/moogaloop.swf?clip_id=134401949&server=vimeo.com&fullscreen=1&show_title=1&show_byline=1&show_portrait=0&color=01AAEA
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
cKleinhuis
Administrator
Fractal Senior
Posts: 7044
formerly known as 'Trifox'

« Reply #11 on: July 24, 2015, 02:20:48 PM »

Repeating Zooming Self-Similar Thumb Up, by Craig (x3)
Logged

---

divide and conquer - iterate and rule - chaos is No random!
Syntopia
Fractal Molossus
Posts: 681

« Reply #12 on: July 25, 2015, 12:01:47 AM »

Very nice! You could also use a transparent ground plane (so that shadows can be properly composited onto your video).

You could also capture a panoramic view of your office and do IBL lighting and reflections; this could be very convincing: http://blog.hvidtfeldts.net/index.php/2012/10/image-based-lighting/ (these HDR panoramas are not easy to make, though).
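A minimal sketch of the kind of lookup that article describes: map a world-space ray direction to equirectangular panorama UVs and sample the HDR map. The envMap sampler name is hypothetical, and the Z-up mapping follows the coordinate convention used earlier in this thread:

Code:
// Hypothetical equirectangular IBL lookup (envMap is an assumed HDR panorama).
uniform sampler2D envMap;
vec3 sampleEnvironment(vec3 dir) {
	dir = normalize(dir);
	float u = atan(dir.y, dir.x) / (2.0 * 3.14159265) + 0.5; // longitude
	float v = acos(clamp(dir.z, -1.0, 1.0)) / 3.14159265;    // latitude, Z up
	return texture2D(envMap, vec2(u, v)).rgb;
}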
Logged
Patryk Kizny
Global Moderator
Fractal Fertilizer
Posts: 372

« Reply #13 on: July 25, 2015, 02:23:38 AM »

Haha, you read my mind. Both things are on my todo list.
But in order to implement those I need to seriously restructure the raytracer (I'm in the middle of that).
The current structure applies the #post section right at the top of the pipeline, so it does not allow excluding the background from processing, etc.
I'm implementing a slightly more complex struct to be passed up the pipeline, which essentially includes:

struct SColor {
	vec3 RGB;
	vec3 Tag;
	float Z;     // depth
	float A;     // alpha
	bool DoPost;
};

This essentially allows me to decide which parts of the image coming from trace() and color() will be postprocessed.
trace() also color-tags the scene content and adds a VFX-style scaled Z-depth.
The entire #post section has been moved from the buffershader to 3D.frag.
Finally, I have also already added a flat-style bitmap background.
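A hypothetical sketch of how that flag could gate the per-pixel postprocessing; trace() and applyPost() are stand-ins for the real raytracer and #post functions, not the actual code:

Code:
// Hypothetical usage of the SColor struct; trace() and applyPost() are stand-ins.
vec4 shadePixel(vec3 from, vec3 dir) {
	SColor c = trace(from, dir);
	vec3 rgb = c.DoPost ? applyPost(c.RGB, c.Z, c.Tag) : c.RGB; // background / matte pixels skip #post
	return vec4(rgb, c.A);
}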

I've already implemented most of this and it seems to work, but I broke something along the way because it stopped accumulating subframes.
Bummer. I need to track it down, and then it should be fine.

Then I'll move on to implementing matte objects (which will receive shadows) and IBL.
Logged

Visual Artist, Director & Cinematographer specialized in emerging imaging techniques.
3dickulus
Global Moderator
Fractal Senior
Posts: 1558

« Reply #14 on: July 25, 2015, 05:05:42 AM »

Very impressive! Nicely done!
Logged

Resistance is fertile...
You will be illuminated!

                            #B^] https://en.wikibooks.org/wiki/Fractals/fragmentarium