Camera calibration is the recovery of the intrinsic parameters of a camera, i.e., the entries of a \(3 \times 3\) matrix \(A\), often called the camera matrix or intrinsic matrix. Intrinsic parameters are specific to a camera: once calculated, they can be stored and reused. A world point \(P(X, Y, Z)\) first undergoes a rigid transform (the extrinsic parameters, comprising rotation \(R\) and translation \(T\)), and the result is then passed through the intrinsic camera transform to yield an image point \(p(u, v)\):

$$ P(X, Y, Z) \overset{\text{Rigid Transform}}{\longrightarrow} P(X, Y, Z)\ \text{w.r.t. camera} \overset{\text{Projection + Intrinsics}}{\longrightarrow} p(u, v) $$

So technically, there needs to be a transform that maps world points to image points, and before we can compute it, correspondences have to be established between known world points and their observed image points. For a chessboard target, OpenCV's cv2.findChessboardCorners does this reliably. Keep in mind that camera calibration is a trial-and-error process: the first run should help you identify and remove blurred images, or images where corners are not accurately extracted; exclude images that have high reprojection errors and re-calibrate.
Zhang's method estimates this transform from real-world 3D coordinates to 2D image coordinates using \(M\) views of a fixed planar pattern, each view taken at a unique position in the camera's field of view. When the values of the intrinsic and extrinsic parameters are known, the camera is said to be calibrated. For a planar target we may set \(Z = 0\), so the projection for each view reduces to a homography \(H = A\,[R_0\ R_1\ T]\), where \(R_0\) and \(R_1\) are the first two columns of the rotation matrix. Given that \(R_0\) and \(R_1\) are orthonormal, their dot product is zero and their norms are equal. Writing \(h_0, h_1\) for the first two columns of \(H\), this gives two constraints per view:

$$ h_0^T B\, h_1 = 0, \qquad h_0^T B\, h_0 = h_1^T B\, h_1, \qquad \text{where } B = (A^{-1})^T (A^{-1}) $$

The image points themselves come from cv2.findChessboardCorners, which returns the list of chessboard corners detected in each image.
A homography can be described as a transform/matrix that converts points from one coordinate space to another, the way the world points \(P\) are converted to image points \(p\): \(p \leftarrow H.P\). Two transforms are chained inside it: first the rigid transform (the extrinsic parameters), and then the intrinsic camera transform. The intrinsic matrix has the form

$$ A = \begin{bmatrix} \alpha & \gamma & u_c \\ 0 & \beta & v_c \\ 0 & 0 & 1 \end{bmatrix} $$

where \(\alpha, \beta\) are the focal lengths (\(f_x\), \(f_y\)), \(\gamma\) is the pixel skew, and \((u_c, v_c)\) is the camera center (principal point). Before estimating homographies, the point coordinates are normalized: normalization is what makes the DLT (direct linear transformation) give an optimal, numerically stable solution. The complete implementation is available at https://github.com/kushalvyas/CameraCalibration.
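The normalization step can be sketched as below. This is a minimal version of the `get_normalization_matrix` helper from the source code (the exact implementation there may differ): the standard Hartley normalization, which translates the centroid of the points to the origin and scales them so the average distance from the origin is \(\sqrt{2}\).

```python
import numpy as np

def get_normalization_matrix(points):
    """Hartley normalization: translate the centroid to the origin and
    scale so that the mean distance from the origin becomes sqrt(2)."""
    points = np.asarray(points, dtype=float)
    mean = points.mean(axis=0)
    dists = np.linalg.norm(points - mean, axis=1)
    scale = np.sqrt(2) / dists.mean()
    # 3x3 similarity transform acting on homogeneous 2D points
    return np.array([[scale, 0.0,   -scale * mean[0]],
                     [0.0,   scale, -scale * mean[1]],
                     [0.0,   0.0,    1.0]])
```

Both the image points and the model points get their own normalization matrix; the homography estimated in normalized space is de-normalized afterwards.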
We divide the implementation into the following parts: preparing the data, estimating a homography per view, computing the intrinsic matrix, computing the per-view extrinsics, and refining all parameters with a non-linear optimizer. First, the data. Each chessboard corner must be expressed in world coordinates. Assuming a corner \(A = (0, 0)\), every corner can be expressed as \((A\hat{i} + A\hat{j}) + (k \times \text{SQUARE\_SIZE}\,(\hat{i} + \hat{j}))\), where \(k\) ranges up to PATTERN_SIZE and SQUARE_SIZE is the physical edge length of one square. We append only World_X and World_Y, since the board is planar and \(Z = 0\) everywhere on it. Given \(M\) views of \(N\) points each, the next step is to create a model-point array \(P\) of shape \(M \times (N \times 3)\) (the points in homogeneous form), which establishes the correspondences with the detected image corners.
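A minimal sketch of the model-point construction follows. The PATTERN_SIZE and SQUARE_SIZE values here are placeholders, and `make_model_points` is a name chosen for illustration, not necessarily the helper used in the repository:

```python
import numpy as np

PATTERN_SIZE = (9, 6)   # inner corners per row / column (example values)
SQUARE_SIZE = 1.0       # edge length of one square, in your chosen world unit

def make_model_points(pattern_size=PATTERN_SIZE, square_size=SQUARE_SIZE):
    """World (X, Y) coordinates of every chessboard corner; Z = 0 is implicit
    because the target is planar."""
    cols, rows = pattern_size
    pts = np.zeros((rows * cols, 2))
    for r in range(rows):
        for c in range(cols):
            # corner (r, c) sits at k * SQUARE_SIZE along each axis
            pts[r * cols + c] = (c * square_size, r * square_size)
    return pts
```

The same array is reused for every one of the \(M\) views, since the board geometry never changes; only the detected image corners differ per view.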
Let the observed image points be denoted as \(U\) and the model points as \(X\). From here on the computations are carried out in homogeneous coordinates, so \(p(u, v) \rightarrow p(u, v, 1)\) and \(P(X, Y) \rightarrow P(X, Y, 1)\). For each view we want a homography such that \(p(u, v, 1) \leftarrow H.P(X, Y, 1)\); I'll write \(H\) instead of \(M\) so that it doesn't conflict with the number of views. Each correspondence contributes two rows to a linear system \(M.h = 0\):

$$\begin{pmatrix}
-X_0 & -Y_0 & -1 & 0 & 0 & 0 & u_0 X_0 & u_0 Y_0 & u_0 \\
0 & 0 & 0 & -X_0 & -Y_0 & -1 & v_0 X_0 & v_0 Y_0 & v_0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
-X_{N-1} & -Y_{N-1} & -1 & 0 & 0 & 0 & u_{N-1} X_{N-1} & u_{N-1} Y_{N-1} & u_{N-1} \\
0 & 0 & 0 & -X_{N-1} & -Y_{N-1} & -1 & v_{N-1} X_{N-1} & v_{N-1} Y_{N-1} & v_{N-1}
\end{pmatrix}_{(2N \times 9)}
\begin{pmatrix} h_{00} \\ h_{01} \\ h_{02} \\ h_{10} \\ h_{11} \\ h_{12} \\ h_{20} \\ h_{21} \\ h_{22} \end{pmatrix} = 0$$

This can be considered the base equation from which we will compute \(H\) for every one of the \(M\) views.
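The system above can be assembled and solved in a few lines. This is a sketch (normalization omitted for brevity, and `estimate_homography` is a name chosen here, not necessarily the one in the repository):

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Stack two rows per correspondence into M (2N x 9) and solve M.h = 0
    by taking the right singular vector with the smallest singular value."""
    M = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        M.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        M.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(M, dtype=float))
    h = Vt[-1]                 # 9-vector, defined only up to scale
    H = h.reshape(3, 3)
    return H / H[2, 2]         # fix the scale so that H[2,2] = 1
```

With normalization, you would pre-multiply the points by their normalization matrices, run the same SVD, and de-normalize the result.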
One obvious solution of \(M.h = 0\) is \(h = 0\), but we are looking for a non-trivial finite solution such that \(M.h \approx 0\). We solve by SVD: the solution \(h\) is obtained by picking the right singular vector corresponding to the minimum singular value (SVD provides orthonormal vectors), i.e., the row of \(V^T\) whose index is the same as the index of the minimum value in \(S\). This eventually yields a row vector of 9 entries, which we reshape into the \(3 \times 3\) homography; the homography then needs to be de-normalized, since the initial points were in raw form. Keep the goal in mind: from a set of known 3D points and their corresponding image coordinates, we want to find the \(3 \times 3\) intrinsic matrix, the \(3 \times 3\) rotation matrix, and the \(3 \times 1\) translation vector. Hence, for each view there is a homography associated with it, which converts \(P\) to \(p\).
Each intrinsic parameter describes a geometric property of the camera. With an actual camera, the focal length is the distance between the pinhole and the film (the image plane). The line perpendicular to the image plane passing through the pinhole is the optical axis; its intersection with the image plane is referred to as the "principal point". The principal point offset \((x_0, y_0)\), i.e. \((u_c, v_c)\), is the location of the principal point relative to the film's origin; increasing \(x_0\) shifts the pinhole to the right, which is equivalent to shifting the film to the left while leaving the pinhole unchanged. Axis skew \(\gamma\) causes shear distortion in the projected image and is negligible for most modern cameras. Note that the film's image depicts a mirrored version of reality, so it is convenient to reason instead about the virtual image, which has the same properties as the film image but appears unflipped in front of the camera; after removing the camera's "box", which is irrelevant, we are left with the familiar "viewing frustum" representation of the pinhole camera.
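To make the role of each parameter concrete, here is a tiny sketch that projects a camera-frame point through \(A\). The parameter values are illustrative only, not from any real camera:

```python
import numpy as np

# Illustrative intrinsics (not a real calibration result)
alpha, beta, gamma = 800.0, 820.0, 0.5   # focal lengths (px) and skew
u_c, v_c = 320.0, 240.0                  # principal point (px)

K = np.array([[alpha, gamma, u_c],
              [0.0,   beta,  v_c],
              [0.0,   0.0,   1.0]])

def project(K, P_cam):
    """Project a 3D point in the camera frame to pixel coordinates: p ~ K.P."""
    p = K @ P_cam
    return p[:2] / p[2]

# A point on the optical axis lands exactly on the principal point:
print(project(K, np.array([0.0, 0.0, 5.0])))   # -> [320. 240.]
```

Off-axis points get scaled by the focal lengths and sheared by \(\gamma\) before the principal point offset is added, which is exactly the "2D transformation" reading of the intrinsic matrix.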
Using pixel units for the focal length and principal point offset allows us to represent the relative dimensions of the camera, namely the film's position relative to its size in pixels. There are in fact an infinite number of pinhole cameras that produce the same image: if you double the film size and the focal length together, nothing changes, and if you double the film size alone it is equivalent to halving the focal length. The intrinsic camera transformation is therefore invariant to uniform scaling of the camera geometry, so representing the film's scale explicitly would be redundant; it is captured by the focal length. In practice \(f_x\) and \(f_y\) can differ for a number of reasons, all of which result in non-square pixels, and some texts (e.g., Forsyth and Ponce) use a single focal length plus an "aspect ratio" that describes the amount of deviation from a perfectly square pixel. We can also decompose the intrinsic matrix into a sequence of shear, scaling, and translation transformations, corresponding to axis skew, focal length, and principal point offset respectively; this interpretation nicely separates the extrinsic and intrinsic parameters into the realms of 3D and 2D, and emphasizes that the intrinsic camera transformation occurs post-projection. As for the calibration target: since the grid pattern formed on a chessboard is a really simple, linear pattern, it is natural to go with it.
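The decomposition just described can be checked numerically. Using the same illustrative parameter values as before (placeholders, not a real camera), the product of a 2D translation, a 2D scaling, and a 2D shear reproduces \(K\) exactly:

```python
import numpy as np

fx, fy, s, x0, y0 = 800.0, 820.0, 0.5, 320.0, 240.0

K = np.array([[fx, s, x0], [0, fy, y0], [0, 0, 1]], dtype=float)

T = np.array([[1, 0, x0], [0, 1, y0], [0, 0, 1]], dtype=float)   # 2D translation
S = np.array([[fx, 0, 0], [0, fy, 0], [0, 0, 1]], dtype=float)   # 2D scaling
Sh = np.array([[1, s / fx, 0], [0, 1, 0], [0, 0, 1]], dtype=float)  # 2D shear

assert np.allclose(T @ S @ Sh, K)
```

Note the order: shear is applied first, then scaling, then translation; an equivalent decomposition places the shear after the scaling with a different shear coefficient.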
Back to Zhang's constraints. Let \(B = (A^{-1})^T (A^{-1})\). With \(H = \lambda A [R_0\ R_1\ T]\), each view's orthonormality constraints become linear in the entries of \(B\):

$$ v_{01}^T\, b = 0, \qquad (v_{00} - v_{11})^T\, b = 0 $$

where \(b = (B_{11}, B_{12}, B_{22}, B_{13}, B_{23}, B_{33})\) and, writing \(h_{ik}\) for component \(k\) of column \(i\) of \(H\),

$$ v_{ij} = \begin{pmatrix} h_{i0} h_{j0} \\ h_{i0} h_{j1} + h_{i1} h_{j0} \\ h_{i1} h_{j1} \\ h_{i2} h_{j0} + h_{i0} h_{j2} \\ h_{i2} h_{j1} + h_{i1} h_{j2} \\ h_{i2} h_{j2} \end{pmatrix} $$

Stacking both equations for all \(M\) views gives \(V.b = 0\) with \(V\) of size \(2M \times 6\), again solved by SVD; with at least three views, \(b\) is determined up to scale. Recall the world coordinate convention used throughout: the X-Y axes lie in the plane of the chessboard, the Z-axis is normal to the chessboard, and the origin is a corner of the board.
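Here is a sketch of the \(v_{ij}\) construction and the solve for \(B\) (`v_ij` and `estimate_B` are illustrative names):

```python
import numpy as np

def v_ij(H, i, j):
    """Zhang's 6-vector built from columns i and j of a homography H."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def estimate_B(homographies):
    """Stack v_01 and (v_00 - v_11) for every view and solve V.b = 0 via SVD."""
    V = []
    for H in homographies:
        V.append(v_ij(H, 0, 1))
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.asarray(V))
    b = Vt[-1]                   # (B11, B12, B22, B13, B23, B33), up to scale
    return np.array([[b[0], b[1], b[3]],
                     [b[1], b[2], b[4]],
                     [b[3], b[4], b[5]]])
```

Because \(b\) comes out of an SVD, it is only defined up to scale and sign, which the closed-form intrinsic extraction has to account for.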
Once \(B\) is computed, it is pretty straightforward to compute the intrinsic parameters \(\alpha, \beta, \gamma, u_c, v_c\) in closed form; for example, \(v_c = (B_{12} B_{13} - B_{11} B_{23}) / (B_{11} B_{22} - B_{12}^2)\), with the remaining formulas given in Zhang's report. Note that the intrinsic matrix is only concerned with the relationship between camera coordinates and image coordinates, so the absolute camera dimensions are irrelevant; you can convert pixel units to world units (e.g., mm) if you know at least one camera dimension in world units. Every camera (e.g., a smartphone camera) comes with its own set of intrinsic parameters. I'd like to recommend the Microsoft technical report as well as the In-depth tutorial; I think one must read both to understand this subtle art of calibrating cameras.
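The full closed-form extraction can be sketched as follows (the formulas are those of Zhang's report; `intrinsics_from_B` is an illustrative name):

```python
import numpy as np

def intrinsics_from_B(B):
    """Recover the intrinsic matrix A from B = lambda * A^-T A^-1."""
    B = np.asarray(B, dtype=float)
    if B[2, 2] < 0:          # b from the SVD is only defined up to sign
        B = -B
    B11, B12, B22 = B[0, 0], B[0, 1], B[1, 1]
    B13, B23, B33 = B[0, 2], B[1, 2], B[2, 2]
    v_c = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v_c * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u_c = gamma * v_c / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u_c],
                     [0.0,   beta,  v_c],
                     [0.0,   0.0,   1.0]])
```

The extraction is invariant to the (positive) scale of \(B\), which is why \(b\) only needing to be known up to scale is not a problem.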
On a broad view, camera calibration yields us an intrinsic camera matrix, the extrinsic parameters, and the distortion coefficients. Intrinsic calibration represents the internal optical properties of the camera, while the extrinsics describe each view's pose (rotation and translation) relative to the target, i.e., the cameras' relationship with the scene and with each other. With the intrinsics in hand, the next step in the algorithm is to estimate, for each of the \(M\) sample images we collected, the rotation and translation vectors of that view.
From \(H = \lambda A [R_0\ R_1\ T]\), the columns give the extrinsics directly:

$$ \lambda = \frac{1}{\lVert A^{-1} h_0 \rVert}, \quad R_0 = \lambda A^{-1} h_0, \quad R_1 = \lambda A^{-1} h_1, \quad R_2 = R_0 \times R_1, \quad T = \lambda A^{-1} h_2 $$

Because of noise, the matrix \([R_0\ R_1\ R_2]\) assembled this way is generally not exactly orthonormal, so these values serve as the initial guess. Optimizer: Levenberg-Marquardt is then used to refine all parameters, starting from the closed-form intrinsics and these per-view extrinsics.
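A sketch of the per-view extrinsic recovery (`extrinsics_from_homography` is an illustrative name):

```python
import numpy as np

def extrinsics_from_homography(K, H):
    """Recover R and t for one view from H = lambda * K [r0 r1 t]."""
    K_inv = np.linalg.inv(K)
    lam = 1.0 / np.linalg.norm(K_inv @ H[:, 0])
    r0 = lam * K_inv @ H[:, 0]
    r1 = lam * K_inv @ H[:, 1]
    r2 = np.cross(r0, r1)        # third column completes the rotation
    t = lam * K_inv @ H[:, 2]
    return np.column_stack([r0, r1, r2]), t
```

In practice one would also re-orthonormalize the resulting rotation (e.g., via SVD) before handing it to the optimizer.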
The refinement minimizes the total reprojection error: for each view, the model points are projected through the current estimate of \(A\), \(R\), and \(T\), and compared against the observed corners \(U_{i,j}\). Update the parameters using the LM optimizer until convergence; refer to the source code on GitHub to know more about the minimizer function and the Jacobian. (For some applications, radial distortion coefficients are estimated in the same loop; for other applications this step is not needed.)
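The objective being minimized can be sketched as follows (`reprojection_error` is an illustrative name; distortion terms are omitted):

```python
import numpy as np

def reprojection_error(K, R, t, world_pts, image_pts):
    """RMS distance between observed corners and model points projected
    through K [R | t]; this is the quantity the LM refinement minimizes."""
    P = K @ np.column_stack([R, t])            # 3x4 projection matrix
    errs = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        p = P @ np.array([X, Y, 0.0, 1.0])     # planar target: Z = 0
        errs.append((p[0] / p[2] - u) ** 2 + (p[1] / p[2] - v) ** 2)
    return np.sqrt(np.mean(errs))
```

This per-view error is also the diagnostic to use when deciding which images to exclude before re-calibrating.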
I've now described the complete algorithm for Zhang's camera calibration, up to the estimation of the intrinsics and extrinsics; lens distortion modelling deserves a discussion of its own. As a sanity check, the image on the left shows a frame captured by my Logitech webcam, followed by the image on the right, which shows the undistorted result. Camera calibration is a necessary first step for any computer vision system that deals with pixel/real-world measurements, such as 3-D scene reconstruction or measuring objects. For a full description of the calibration process and instructions on how to do it yourself, see the tutorials linked above, and for well-established reference work, visit the calibration pages of Zhang and of Heikkilä and Silvén (we specifically recommend their CVPR'97 paper, "A Four-step Camera Calibration Procedure with Implicit Image Correction"). Questions? Leave a comment or drop me a line!