How do 360-degree cameras create images and correct distortion?

In this blog post, we will examine how 360-degree cameras generate images of the vehicle’s surroundings and correct distortion to help drivers stay safe.

 

Various devices assist drivers when parking or driving along narrow roads. One particularly noteworthy example uses video captured by cameras mounted on the front, rear, and sides of the vehicle to create a 360° view of the vehicle’s surroundings, as if seen from directly above, and displays it on a monitor inside the vehicle. This helps drivers quickly grasp their surroundings, supporting safe driving and parking.
Now, let’s take a closer look at how this video is produced and provided to the driver. First, a grid pattern is laid out on the ground around the vehicle and captured by the cameras. The cameras used in this system typically have wide-angle lenses, which provide a broad field of view, reducing blind spots and improving the driver’s visibility.
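To make this step concrete, here is a minimal sketch of how a captured frame might be processed to locate the grid, assuming OpenCV and a standard chessboard-style pattern. The file name and the board size are placeholders for this illustration, not details from the original system.

```python
import cv2

# Hypothetical frame from one of the vehicle's wide-angle cameras,
# with a chessboard-style calibration grid laid out on the ground.
frame = cv2.imread("front_camera_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Number of inner corners in the grid (an assumption for this example).
board_size = (9, 6)

# Locate the grid corners in the image; these pixel positions are the
# starting point for estimating the camera's parameters later on.
found, corners = cv2.findChessboardCorners(gray, board_size)
if found:
    # Refine the corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001),
    )
    cv2.drawChessboardCorners(frame, board_size, corners, found)
```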
However, wide-angle lenses have an inherent curvature that distorts the video as light passes through the lens. The image appears to bulge outward at the center, and the distortion becomes more pronounced the farther a point is from the center. This phenomenon is called lens distortion. The characteristics of the camera itself that determine this distortion are referred to as the camera’s intrinsic (internal) parameters and are represented by distortion coefficients.
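A small sketch of one common form of such a distortion model, the radial model, may help. The coefficient values below are made up purely for illustration; they are not from any particular camera.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Toy radial distortion model: (x, y) are normalized image coordinates
    and k1, k2 are distortion coefficients. Points far from the center
    (large r) are displaced much more strongly than points near it."""
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# The same coefficients barely move a point near the center but noticeably
# shift a point near the edge (coefficient values are illustrative only).
print(apply_radial_distortion(0.1, 0.1, -0.3, 0.05))
print(apply_radial_distortion(0.8, 0.8, -0.3, 0.05))
```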
If the intrinsic parameters are accurately identified, a distortion model can be set up and used to correct the distortion. The correction process itself is quite involved: to ensure that the video seen by the driver matches the actual scene as closely as possible, the distortion in the captured video must be minimized. This is done with a distortion correction algorithm that takes into account both the characteristics of the lens and the position and angle at which the camera is mounted on the vehicle.
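As a rough sketch of what such a correction step can look like in practice, the snippet below uses OpenCV’s standard pinhole-camera undistortion. The camera matrix, distortion coefficients, and file names are placeholder values; in a real system they would come from calibrating the camera against the grid captured earlier.

```python
import cv2
import numpy as np

# Placeholder intrinsic parameters; real values come from calibration.
camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

distorted = cv2.imread("front_camera_frame.png")

# Remap the image so that straight lines in the world appear straight again.
undistorted = cv2.undistort(distorted, camera_matrix, dist_coeffs)
cv2.imwrite("front_camera_undistorted.png", undistorted)
```

Note that for very wide or fisheye lenses a different distortion model (for example OpenCV’s fisheye module) is often used instead; the choice depends on the lens, which this sketch does not attempt to settle.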
Distortion that arises from the tilt of the camera mounted on the vehicle is described by the camera’s extrinsic (external) parameters. By comparing the captured video with the real-world grid, you can determine how far the grid appears rotated, or how its position has shifted in the video, and thereby work out the angle at which the camera is tilted. Based on this information, the extrinsic parameters can be adjusted to correct the distortion.
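One way to estimate this kind of camera pose from grid correspondences is a perspective-n-point solve; the sketch below shows the idea with OpenCV, under the assumption that the grid geometry and the intrinsic parameters are already known. The pose used to synthesize the image points here is invented so the example is self-contained.

```python
import cv2
import numpy as np

# 3D positions of the grid corners on the ground plane (Z = 0), in meters.
# The geometry is known because we laid the pattern out ourselves.
square = 0.5  # placeholder grid spacing
object_points = np.array(
    [[i * square, j * square, 0.0] for j in range(6) for i in range(9)],
    dtype=np.float32,
)

camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# For a self-contained demo, synthesize the detected pixel positions by
# projecting the grid with a made-up camera pose; in the real system these
# come from corner detection in the captured image.
true_rvec = np.array([[0.2], [0.0], [0.0]])  # camera tilted slightly
true_tvec = np.array([[0.0], [1.0], [3.0]])
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

# Recover how the camera is rotated and translated relative to the grid.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)  # the recovered camera tilt
```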
Once distortion correction is complete, the next step is to estimate the 3D real-world points corresponding to the points in the video and perform a viewpoint transformation to obtain a video with the perspective effect removed. In general, when a camera projects a 3D real-world scene onto a 2D image, objects of the same size appear smaller the farther they are from the camera.
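A tiny worked example of this shrinking effect, using the simple pinhole projection model with a made-up focal length:

```python
# Pinhole projection: a point (X, Y, Z) maps to (f * X / Z, f * Y / Z),
# so an object's apparent size shrinks in proportion to 1 / Z.
f = 800.0  # placeholder focal length in pixels

def projected_height(real_height_m, distance_m):
    return f * real_height_m / distance_m

# The same 1.5 m tall object at 5 m and at 20 m from the camera:
print(projected_height(1.5, 5.0))   # 240 pixels
print(projected_height(1.5, 20.0))  # 60 pixels
```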
However, in an image viewed from above, object size should not change with distance, so this perspective effect must be removed. If we know the positions of several points in the image obtained through this perspective projection and their corresponding points on the real-world grid, we can describe the correspondence between every point in the image and every point on the grid using a virtual coordinate system.
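This kind of point-to-point correspondence between the ground plane and the image can be expressed as a perspective transform (a homography). The sketch below shows the idea with OpenCV; the pixel coordinates and output size are placeholders, and in practice the points would come from the detected grid.

```python
import cv2
import numpy as np

# Pixel positions of four grid points in the undistorted camera image
# (placeholder values), and where those same points should sit in a
# top-down image in which the grid keeps its real shape and proportions.
image_points = np.float32([[420, 560], [860, 560], [980, 700], [300, 700]])
topdown_points = np.float32([[200, 200], [400, 200], [400, 400], [200, 400]])

# Four correspondences are enough to determine the mapping between every
# point on the ground plane and every point in the image.
H = cv2.getPerspectiveTransform(image_points, topdown_points)

# Warp the camera image onto the virtual top-down coordinate system.
undistorted = cv2.imread("front_camera_undistorted.png")
birds_eye = cv2.warpPerspective(undistorted, H, (600, 600))
```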
Using this correspondence, if the points in the image are placed on a plane so that the shape of the grid and the relative sizes of its cells are preserved exactly as in the real world, the perspective effect disappears and the result looks like a flat image seen from directly overhead. This is the top-down view. By synthesizing the images from each direction in this way, the driver can view a 360° image on the monitor, as if looking down on the vehicle from above.
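As a final sketch, here is one naive way such per-camera top-down views might be composited, assuming each has already been warped into a shared coordinate frame as above. The file names, canvas size, and the simple copy-in blending are all placeholders; production systems blend overlapping regions far more carefully.

```python
import cv2
import numpy as np

# Bird's-eye views from the front, rear, left, and right cameras, already
# warped into one shared 600 x 600 top-down coordinate frame.
views = [cv2.imread(name) for name in
         ("front_birds_eye.png", "rear_birds_eye.png",
          "left_birds_eye.png", "right_birds_eye.png")]

# Naive composite: start from a blank canvas and copy in each view wherever
# that camera actually has image content.
surround = np.zeros_like(views[0])
for view in views:
    mask = view.sum(axis=2) > 0          # pixels this camera covers
    surround[mask] = view[mask]

# Draw a simple marker for the vehicle itself in the center of the canvas.
cv2.rectangle(surround, (260, 240), (340, 360), (50, 50, 50), thickness=-1)
cv2.imwrite("surround_view.png", surround)
```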
The technology used in this process is highly complex and precise, but the results provide significant benefits to drivers. Especially in narrow parking spaces or complex road conditions, such devices play a crucial role in ensuring driver safety. The advancement of this technology is greatly enhancing the safety and convenience of vehicle driving and will serve as a foundational technology for the development of future autonomous vehicles.

 

About the author

Writer

I'm a "Cat Detective": I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.