Good afternoon, sir.
I am having trouble with the process of computing the Q matrix. I looked at the example and the rectification code from your other project, but it does not compute Q. So I turned to your SimpleStereo library, but I cannot work out how to combine the code from the 006 rectify-images example with simplestereo/_rigs.py (functions such as computeRectificationMaps, followed by rectifyImages). I also cannot find where 'Rcommon' in the RectifiedStereoRig class comes from; I see no function in the rectification module that returns it.
In the end, I added some code of my own to compute Q. Is it right? Could you give me some tips? Thanks.
1. Combining your example code with my data
```python
import cv2
import numpy as np
import rectification  # rectification.py from your direct-rectification project (assumed to be on the path)

img1 = cv2.imread(r"D:\3Dendocope\cameraopen\SaveImage\patten4\left11.jpg")   # Left image (2_book, 11_cup)
img2 = cv2.imread(r"D:\3Dendocope\cameraopen\SaveImage\patten4\right11.jpg")  # Right image
dims1 = img1.shape[::-1][1:]  # Image dimensions as (width, height)
dims2 = img2.shape[::-1][1:]

# Calibration data
fs = cv2.FileStorage('calibration_data.yaml', cv2.FILE_STORAGE_READ)
left_camera_matrix = fs.getNode('left_camera_matrix').mat()
left_distortion = fs.getNode('left_distortion').mat()
right_camera_matrix = fs.getNode('right_camera_matrix').mat()
right_distortion = fs.getNode('right_distortion').mat()
R = fs.getNode('R').mat()
T = fs.getNode('T').mat()
fs.release()

A1 = left_camera_matrix
A2 = right_camera_matrix
RT1 = np.hstack((np.eye(3), np.zeros((3, 1))))  # World origin set in the first camera
RT2 = np.hstack((R, T))
distCoeffs1 = left_distortion
distCoeffs2 = right_distortion

# Original values from your example, kept for reference:
# A1 = np.array([[1.32810329e+03, 0, 3.51773761e+02], [0, 1.33262333e+03, 1.96667235e+02], [0, 0, 1]])  # Left camera intrinsic matrix
# A2 = np.array([[1.31111469e+03, 0, 3.98439439e+02], [0, 1.32089333e+03, 1.45676570e+02], [0, 0, 1]])  # Right camera intrinsic matrix
# RT1 = np.hstack((np.eye(3), np.zeros((3, 1))))  # World origin set in first camera
# RT2 = np.array([[ 0.9995544,   0.00257785, -0.02973815, -5.96545894e+01],   # np.hstack((rig.R, rig.T))
#                 [-0.00171476,  0.9995776,   0.02901189, -4.85057441e-02],
#                 [ 0.02980038, -0.02894797,  0.9991366,  -4.11892723e+00]])
# Distortion coefficients (see the OpenCV distortion parameters for help)
# distCoeffs1 = np.array([-3.70635340e-01, -1.50193477e+01, 5.50257757e-03, -1.57242337e-02, 3.09577129e+02])
# distCoeffs2 = np.array([-0.64824754, 0.04236286, 0.01304392, -0.03344647, 5.71009861])

# 3x4 camera projection matrices
Po1 = A1.dot(RT1)
Po2 = A2.dot(RT2)
F = rectification.getFundamentalMatrixFromProjections(Po1, Po2)

# ANALYTICAL RECTIFICATION to get the rectification homographies that minimize distortion
Rectify1, Rectify2 = rectification.getDirectRectifications(A1, A2, RT1, RT2, dims1, dims2, F)

# Final rectified image dimensions (common to both images)
destDims = dims1

# Get the fitting affine transformation to fit the images into the frame
# (affine transformations do not introduce perspective distortion)
Fit = rectification.getFittingMatrix(A1, A2, Rectify1, Rectify2, dims1, dims2, distCoeffs1, distCoeffs2)

# Compute maps with OpenCV considering rectifications, fitting transformations and lens distortion
# These maps can be stored and applied to rectify any image pair of the same stereo rig
mapx1, mapy1 = cv2.initUndistortRectifyMap(A1, distCoeffs1, Rectify1.dot(A1), Fit, destDims, cv2.CV_32FC1)
mapx2, mapy2 = cv2.initUndistortRectifyMap(A2, distCoeffs2, Rectify2.dot(A2), Fit, destDims, cv2.CV_32FC1)

# Apply the final transformation to the images
img1_rect = cv2.remap(img1, mapx1, mapy1, interpolation=cv2.INTER_LINEAR)
img2_rect = cv2.remap(img2, mapx2, mapy2, interpolation=cv2.INTER_LINEAR)
```
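As a quick sanity check (just a small sketch of my own, assuming the img1_rect and img2_rect produced above), I put the rectified pair side by side and draw a few horizontal lines to confirm that corresponding points end up on the same row:

```python
# Visual check only: after rectification, corresponding points should lie on the same horizontal line
vis = cv2.hconcat([img1_rect, img2_rect])   # both rectified images have size destDims, so heights match
for y in range(0, vis.shape[0], 40):        # a green line every 40 pixels
    cv2.line(vis, (0, y), (vis.shape[1] - 1, y), (0, 255, 0), 1)
cv2.imwrite("rectified_pair_check.jpg", vis)
```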
2. Your code in _rigs.py
```python
self.K1 = Fit.dot(self.rectHomography1).dot(self.intrinsic1).dot(self.Rcommon.T)
self.K2 = Fit.dot(self.rectHomography2).dot(self.intrinsic2.dot(self.R)).dot(self.Rcommon.T)
# OpenCV requires the final rotations applied
R1 = self.Rcommon
R2 = self.Rcommon.dot(self.R.T)
# Recompute final maps considering fitting transformations too
self.mapx1, self.mapy1 = cv2.initUndistortRectifyMap(self.intrinsic1, self.distCoeffs1, R1, self.K1, destDims, cv2.CV_32FC1)
self.mapx2, self.mapy2 = cv2.initUndistortRectifyMap(self.intrinsic2, self.distCoeffs2, R2, self.K2, destDims, cv2.CV_32FC1)
```
I compared computeRectificationMaps in _rigs.py with example.py (see the code above). Is self.K1 / self.K2 equal to Rectify1.dot(A1) / Rectify2.dot(A2)? And what do R1 = self.Rcommon and R2 = self.Rcommon.dot(self.R.T) represent?
3. get3DPoints(self, disparityMap): computing Q
```python
b = self.getBaseline()
fx = self.K1[0, 0]
fy = self.K2[1, 1]
cx1 = self.K1[0, 2]
cx2 = self.K2[0, 2]
a1 = self.K1[0, 1]
a2 = self.K2[0, 1]
cy = self.K1[1, 2]
```
Can I use Rectify1.dot(A1) and Rectify2.dot(A2) instead of self.K1 and self.K2 in the code above, combined with my data (the code from your example.py), to compute Q? I need Q to get depth from a disparity map computed on the rectified images.
I tried to combine the code from _rigs.py to get Q; the code below is what I put together. Could you take a look and give me some tips?
```python
Po1 = A1.dot(RT1)
Po2 = A2.dot(RT2)
C1 = np.zeros(3)  # World origin is set in camera 1
C2 = -np.linalg.inv(Po2[:, :3]).dot(Po2[:, 3])
baseline = np.linalg.norm(C2)
K1 = Rectify1.dot(A1)  # or K1 = Rectify1.dot(Po1)?
K2 = Rectify2.dot(A2)  # or K2 = Rectify2.dot(Po2)?
b = baseline
fx = K1[0, 0]
fy = K2[1, 1]
cx1 = K1[0, 2]
cx2 = K2[0, 2]
a1 = K1[0, 1]
a2 = K2[0, 1]
cy = K1[1, 2]

Q = np.eye(4, dtype='float64')
Q[0, 1] = -a1 / fy
Q[0, 3] = a1 * cy / fy - cx1
Q[1, 1] = fx / fy
Q[1, 3] = -cy * fx / fy
Q[2, 2] = 0
Q[2, 3] = -fx
Q[3, 1] = (a2 - a1) / (fy * b)
Q[3, 2] = 1 / b
Q[3, 3] = ((a1 - a2) * cy + (cx2 - cx1) * fy) / (fy * b)
```
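For context, this is roughly how I plan to use the Q above to get depth (a minimal sketch of my own; the SGBM parameters are rough placeholders, and whether the sign and scale of the result are correct depends on whether my Q is right, which is exactly my question):

```python
# Disparity map on the rectified pair (SGBM parameters are placeholders, not tuned values)
grayL = cv2.cvtColor(img1_rect, cv2.COLOR_BGR2GRAY)
grayR = cv2.cvtColor(img2_rect, cv2.COLOR_BGR2GRAY)
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparityMap = sgbm.compute(grayL, grayR).astype(np.float32) / 16.0  # SGBM returns fixed-point disparity scaled by 16

# Reproject with the Q computed above; the Z channel would be the depth
points3D = cv2.reprojectImageTo3D(disparityMap, Q)
depth = points3D[:, :, 2]
```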
Looking forward to your reply, sir!