computer vision - camera extrinsic calibration


I have a fisheye camera, which I have already calibrated. I need to calculate the camera pose w.r.t. a checkerboard, using only a single image of that checkerboard, the intrinsic parameters, and the size of the checkerboard's squares. Unfortunately, many calibration libraries first calculate the extrinsic parameters from a set of images and then the intrinsic parameters, which is essentially the "inverse" of the procedure I want. Of course I could put the checkerboard image inside the set of other images used for calibration and run the calibration procedure again, but that is tedious, and moreover I couldn't then use a checkerboard of a different size from the ones used for the intrinsic calibration. Can anybody point me in the right direction?

Edit: after reading Francesco's answer, I realized that I didn't explain what I mean by calibrating the camera. My problem begins with the fact that I don't have the classic intrinsic parameters matrix (so I can't directly use the method Francesco described). In fact, I calibrated the fisheye camera with Scaramuzza's procedure (https://sites.google.com/site/scarabotix/ocamcalib-toolbox), which finds a polynomial that maps 3D world points into pixel coordinates (or, alternatively, a polynomial that back-projects pixels onto the unit sphere). I think this information is enough to find the camera pose w.r.t. the chessboard, but I'm not sure how to proceed.
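For what it's worth, reducing the problem to a standard pinhole PnP might look like the sketch below: back-project each detected corner with a Scaramuzza-style polynomial, then divide by z to get normalized image coordinates that any pinhole PnP solver (e.g. cv2.solvePnP with K = identity and zero distortion) accepts. The coefficients, distortion center, and sign convention here are placeholders - take the real ones from your OCamCalib output:

```python
import numpy as np

# Placeholder inverse-projection polynomial and distortion center -- these are
# NOT a real calibration; substitute the values OCamCalib reports for your camera.
inv_poly = [200.0, 0.0, -2e-3]        # hypothetical coefficients a0, a1, a2 of f(rho)
cx, cy = 320.0, 240.0                 # hypothetical distortion center (pixels)

def cam2world(u, v):
    """Back-project a pixel to a unit-norm viewing ray, Scaramuzza-style:
    the ray is (x, y, f(rho)), with rho the distance from the center."""
    x, y = u - cx, v - cy
    rho = np.hypot(x, y)
    z = sum(a * rho**i for i, a in enumerate(inv_poly))
    ray = np.array([x, y, z], dtype=float)
    return ray / np.linalg.norm(ray)

# Once every detected corner is a ray, dividing by its z component yields
# normalized image coordinates, i.e. a pinhole camera with K = identity.
ray = cam2world(400.0, 300.0)
u_n, v_n = ray[0] / ray[2], ray[1] / ray[2]
```

With all corners converted this way, the pose problem is an ordinary PnP on normalized coordinates, which is exactly what the answer below solves.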

The solvePnP procedure calculates the extrinsic pose of the chessboard (CB) in camera coordinates. OpenCV added a fisheye library to its 3D reconstruction module to accommodate the significant distortions of cameras with a large field of view. Of course, if your intrinsic transformation is not a classical intrinsic matrix, you have to modify PnP:

  1. Undo whatever projection your camera model applied, i.e. back-project the pixels.
  2. You now have a so-called normalized camera: the effect of the intrinsic matrix is eliminated, so the projection equation reduces to

    k * [u, v, 1]^T = [R|t] * [X, Y, Z, 1]^T

The way to solve this is to write the expression for the scalar k first:

    k = r20*X + r21*Y + r22*Z + tz

then substitute this expression into

    k*u = r00*X + r01*Y + r02*Z + tx
    k*v = r10*X + r11*Y + r12*Z + ty

You can rearrange the terms to arrive at a system Ax = 0, subject to |x| = 1, where the unknown is

    x = [r00, r01, r02, tx, r10, r11, r12, ty, r20, r21, r22, tz]^T

and A is composed of the known u, v, X, Y, Z - the pixel and CB corner coordinates (each correspondence contributes two rows to A);
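As a sanity check, here is a NumPy sketch of building A from synthetic correspondences and extracting the raw (R2, t2) via SVD. The points are deliberately non-coplanar: with a flat board (Z = 0) three columns of A vanish and the 12-unknown system becomes rank-deficient, so one then solves the smaller homography-style system instead. All numbers below are invented for the demo.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix (standard Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Synthetic ground-truth pose, used only to generate test correspondences.
R_true = rodrigues(np.array([0.1, -0.2, 0.3]))
t_true = np.array([0.2, -0.1, 2.0])

# Non-coplanar 3D points and their noiseless normalized projections.
rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(20, 3))
cam = P @ R_true.T + t_true
u, v = cam[:, 0] / cam[:, 2], cam[:, 1] / cam[:, 2]

# Each correspondence contributes one row for the k*u equation and one for k*v,
# with x = [r00, r01, r02, tx, r10, r11, r12, ty, r20, r21, r22, tz].
rows = []
for (X, Y, Z), ui, vi in zip(P, u, v):
    rows.append([X, Y, Z, 1, 0, 0, 0, 0, -ui * X, -ui * Y, -ui * Z, -ui])
    rows.append([0, 0, 0, 0, X, Y, Z, 1, -vi * X, -vi * Y, -vi * Z, -vi])
A = np.asarray(rows)

# The minimizer of |Ax| subject to |x| = 1 is the right-singular vector of the
# smallest singular value, i.e. the last row of V^T.
x = np.linalg.svd(A)[2][-1]
R2 = x[[0, 1, 2, 4, 5, 6, 8, 9, 10]].reshape(3, 3)
t2 = x[[3, 7, 11]]
```

Note that (R2, t2) matches the true pose only up to an overall scale and sign; the "messy" cleanup steps a-c described next recover the metric pose.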

then you solve it with an SVD: if A = U*L*V^T, then x is the last column of V, and you assemble the rotation and translation matrices from x. There are a few 'messy' steps typical of this kind of processing:

a. Ensure that you got a real rotation matrix: perform an orthogonal Procrustes step, that is, take the SVD of the estimated rotation block, R2 = U*L*V^T, and set R = U*V^T.

b. Calculate the scale factor as scl = sum(R2(i,j)/R(i,j))/9;

c. Update the translation vector with t2 = t/scl and check that tz > 0; if it is negative, the solution x came out with its overall sign flipped, so negate x (both the rotation block and the translation) and repeat a-b.
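A NumPy sketch of these cleanup steps, applied to a synthetic raw estimate (the pose, scale, and sign below are made up). One small liberty: the scale is computed from the mean singular value, which for an exactly scaled rotation equals the mean elementwise ratio sum(R2(i,j)/R(i,j))/9 but avoids dividing by near-zero entries of R:

```python
import numpy as np

def clean_pose(R2, t2):
    """Steps a-c applied to a raw DLT estimate, defined only up to scale/sign."""
    # Sign first: det(scl*R) = scl^3, so a negative determinant means the whole
    # solution vector x was negated; flip it back.
    if np.linalg.det(R2) < 0:
        R2, t2 = -R2, -t2
    # a. Orthogonal Procrustes: SVD R2 = U*L*V^T, nearest rotation R = U*V^T.
    U, L, Vt = np.linalg.svd(R2)
    R = U @ Vt
    # b. Scale factor: mean singular value (= mean ratio R2(i,j)/R(i,j) when
    #    R2 is exactly a scaled rotation).
    scl = L.mean()
    # c. Undo the scale on the translation; tz > 0 puts the board in front.
    t = t2 / scl
    return R, t

# Demo on a synthetic raw estimate: a rotation about z, scaled and sign-flipped.
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 1.5])
R2, t2 = -0.7 * R_true, -0.7 * t_true
R, t = clean_pose(R2, t2)
```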

Now R and t2 give you a starting point for a non-linear optimization algorithm such as Levenberg-Marquardt. This is required because the previous linear step minimizes an algebraic error in the parameters, while the non-linear one minimizes a geometrically correct metric such as the squared reprojection error in pixels. However, if you don't want to follow all these steps, you can simply take advantage of the fisheye library of OpenCV.
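To illustrate the refinement stage, here is a minimal Levenberg-Marquardt sketch in plain NumPy, with a forward-difference Jacobian over the six pose parameters (axis-angle plus translation). It is a toy, not production code - in practice routines such as cv2.solvePnPRefineLM or scipy.optimize.least_squares are preferable - and the board geometry and poses are invented for the demo:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def residuals(p, P, uv):
    """Reprojection residuals in normalized image coordinates for pose p."""
    R, t = rodrigues(p[:3]), p[3:]
    cam = P @ R.T + t
    return (cam[:, :2] / cam[:, 2:3] - uv).ravel()

def refine_pose(p0, P, uv, iters=25, lam=1e-3, eps=1e-7):
    """Toy Levenberg-Marquardt loop with a numeric Jacobian."""
    p = p0.copy()
    for _ in range(iters):
        r = residuals(p, P, uv)
        J = np.empty((r.size, 6))
        for j in range(6):
            dp = np.zeros(6)
            dp[j] = eps
            J[:, j] = (residuals(p + dp, P, uv) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(6), -J.T @ r)
        if np.linalg.norm(residuals(p + step, P, uv)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5   # accept: relax the damping
        else:
            lam *= 10.0                    # reject: damp harder
    return p

# Invented demo: a 30 mm checkerboard seen from about half a meter away.
P = np.array([[i * 0.03, j * 0.03, 0.0] for j in range(6) for i in range(8)])
p_true = np.array([0.1, -0.2, 0.15, 0.05, -0.02, 0.5])
# residuals() with a zero target returns the exact projections of p_true.
uv = residuals(p_true, P, np.zeros((48, 2))).reshape(-1, 2)
p0 = p_true + np.array([0.05, -0.04, 0.03, 0.02, -0.01, 0.05])  # perturbed start
p_ref = refine_pose(p0, P, uv)
```

The linear DLT estimate from the steps above would play the role of p0 here; with real (noisy) corners the refinement is what brings the pose down to the pixel-accurate optimum.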

