Good evening @miguelriemoliveira and @danifpdra,
I am having trouble understanding how to project the lidar points onto the image. I have found my extrinsic and intrinsic values: the intrinsics come from `camera_info` and the extrinsics from the URDF.
These are the values:
```python
self.cameraIntrinsic = np.array([[1060.9338813153618, 0.0, 640.5, -74.26537169207533],
                                 [0.0, 1060.9338813153618, 360.5, 0.0],
                                 [0.0, 0.0, 1.0, 0.0]])

# camera extrinsics from the lidar to the camera (back and front have different values)
self.cameraExtrinsicFront = np.array([[1.0, 0.0, 0.0, -0.073],
                                      [0.0, 1.0, 0.0, 0.011],
                                      [0.0, 0.0, 1.0, -0.084],
                                      [0.0, 0.0, 0.0, 1.0]])
self.cameraExtrinsicBack = np.array([[-1.0, 0.0, 0.0, 0.2],
                                     [0.0, -1.0, 0.0, 0.011],
                                     [0.0, 0.0, 1.0, -0.084],
                                     [0.0, 0.0, 0.0, 1.0]])
```
I also computed the coordinates of the lidar points the way we did in class 10:
```python
...
z = 0
for idx, r in enumerate(msg.ranges):  # renamed from `range` to avoid shadowing the builtin
    theta = msg.angle_min + idx * msg.angle_increment
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    self.points.append([x, y, z, 0])
...
```
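As a quick sanity check of that polar-to-Cartesian conversion, here is a standalone version with a made-up three-beam scan (note that in this sketch I append 1 as the homogeneous coordinate so the point can later be multiplied by the 4x4 extrinsic; my actual code appends 0 there):

```python
import math

# made-up scan parameters: three beams from -90 deg to +90 deg
angle_min = -math.pi / 2
angle_increment = math.pi / 2
ranges = [1.0, 2.0, 1.5]

points = []
for idx, r in enumerate(ranges):
    theta = angle_min + idx * angle_increment
    # polar (r, theta) -> Cartesian (x, y), planar lidar so z = 0
    points.append([r * math.cos(theta), r * math.sin(theta), 0.0, 1.0])

# the middle beam has theta = 0, so it should land on the x axis: [2.0, 0.0, 0.0, 1.0]
```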
When I multiply these matrices by a point, I usually get pixel coordinates on the order of 2000 to 2300 (my camera image is 1280 wide and 720 high).
I already tried assuming the coordinates are measured from the image center and subtracting 1280/2 and 720/2, but I still got no sensible result.
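To be concrete, here is a minimal standalone version of the multiplication I am doing, with a made-up lidar point (and using w = 1 as the homogeneous coordinate, plus dividing by the third component at the end, in case that is where I am going wrong):

```python
import numpy as np

# 3x4 intrinsic/projection matrix (from camera_info)
cameraIntrinsic = np.array([[1060.9338813153618, 0.0, 640.5, -74.26537169207533],
                            [0.0, 1060.9338813153618, 360.5, 0.0],
                            [0.0, 0.0, 1.0, 0.0]])
# 4x4 extrinsic from the lidar to the front camera (from the URDF)
cameraExtrinsicFront = np.array([[1.0, 0.0, 0.0, -0.073],
                                 [0.0, 1.0, 0.0, 0.011],
                                 [0.0, 0.0, 1.0, -0.084],
                                 [0.0, 0.0, 0.0, 1.0]])

# one made-up lidar point in homogeneous coordinates (w = 1)
p_lidar = np.array([0.5, 0.2, 3.0, 1.0])

# transform into the camera frame, then project with the 3x4 intrinsic matrix
p_img = cameraIntrinsic @ cameraExtrinsicFront @ p_lidar

# divide by the third (depth) component to get pixel coordinates
u = p_img[0] / p_img[2]
v = p_img[1] / p_img[2]
```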
My extrinsic matrices could also be wrong, but I am not sure.
I am probably thinking about this the wrong way, but I cannot figure it out.