Kyocera Corporation announced a portfolio of innovations in development that promise to raise ADAS technology to new levels in the long-term quest toward fully autonomous vehicles. The central ADAS challenge lies in replicating a human driver's ability to interpret visual data. The human eye is a marvel of bioengineering: the brain not only combines the two different signals from the left and right eyes, but also interprets the result faster and more accurately than all but the most powerful computers. ADAS engineers strive to replicate this ability through digital sensing and processing.

Cameras and LiDAR

Because cameras and LiDAR each offer unique benefits, they are often employed in combination. Cameras are ideal for detecting the color and shape of an object; LiDAR excels at measuring distance and creating highly accurate three-dimensional images. However, digital imaging from two units that do not share the same optical axis exhibits a deviation error known as parallax. A computer can, in theory, integrate the two data channels to correct parallax error, but the resulting time lag is an obstacle in any application requiring highly accurate, real-time information, such as driving.
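To make the parallax problem concrete, the sketch below (not Kyocera's implementation; a generic pinhole-camera approximation with hypothetical parameter values) shows how the pixel offset between two sensors mounted a fixed baseline apart varies with object distance. Because the offset depends on each object's distance, correcting it requires per-object computation, which is the source of the time lag described above.

```python
def parallax_pixels(baseline_m: float, focal_px: float, distance_m: float) -> float:
    """Approximate parallax offset in pixels for an object at distance_m,
    observed by two sensors separated by baseline_m, using a camera with
    focal length focal_px (pinhole model). All values are illustrative."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return focal_px * baseline_m / distance_m

# Nearby objects exhibit large parallax; distant objects far less,
# so a fixed offset cannot correct the whole scene at once.
near = parallax_pixels(baseline_m=0.10, focal_px=1000.0, distance_m=2.0)   # 50 px
far = parallax_pixels(baseline_m=0.10, focal_px=1000.0, distance_m=50.0)   # 2 px
```

A 10 cm sensor separation shifts a nearby pedestrian by tens of pixels while barely moving a distant vehicle, which is why naive fusion of the two data streams misaligns objects unless each is corrected individually.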