Developers are hard at work on the machine learning needed for safer, more autonomous vehicles. But all the AI in the world won’t be enough if the car relies on inadequate sensors. That was clearly demonstrated in one fatal Tesla crash, which occurred in part because the car’s camera didn’t correctly identify a truck crossing its path. To ensure smart vehicles have a reliable model of surrounding objects — particularly the ones the cars identify as “threats” — most rely on one or more lidars, or laser-based remote sensors. Until now, that’s been a sticking point: the classic “spinny” Velodyne lidars you see in photos of most autonomous test vehicles cost upwards of $70,000 apiece, putting them far out of reach of consumer vehicles.
The good news is there’s something of a gold rush underway, with companies working to innovate and eventually disrupt the lidar market. We were able to speak with a number of them at CES, and get demos of some of the most promising prototypes. It’s too early to say which will win out, but it’s certainly worth examining their approaches.
Common sensors for vehicles include cameras, radar, ultrasonic sensors, and lidar. Cameras provide the most human-usable data, but they are poor at estimating the distance of objects and perform badly without good lighting. Nvidia has made cameras a core piece of its autonomous vehicle research, and Mobileye (now part of Intel) sells camera-centric systems to many auto companies. Radar has tremendous range and can see through various kinds of weather, but it has limited resolution for identifying objects. It has been in the news this year, as Tesla pinned some of the blame for the aforementioned fatal crash on Mobileye’s camera tech and loudly shifted to a radar-centric approach.
Lidar, however, is the cornerstone of most of the top autonomous vehicle systems, including Google’s Waymo and Uber’s efforts; Aptiv’s impressive demo car carries nine lidars. High-end models provide excellent distance information in all directions at good resolution, but they not only cost $70,000 each, they also require a large piece of hardware on the roof of the vehicle. So reducing the size and cost of lidar has become one of the most obvious requirements for better ADAS (advanced driver assistance systems) and autonomous driving.
MEMS Mirrors and Semiconductor Lidar
Current lidar systems use a number of parallel lasers (anywhere from 16 to 128) arranged vertically, each with its own detector. A spinning mirror sweeps the beams to generate a 360-degree monochrome distance map, and the lasers must be carefully aligned with the detectors. Companies like Infineon are counting on MEMS (micro-electro-mechanical systems) technology to move the mirrors, simplifying the architecture and dramatically lowering the cost.
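The underlying math of a spinning lidar is straightforward time-of-flight ranging. Below is an illustrative sketch; the figures used (16 channels, a 0.2-degree firing step) are typical published numbers for entry-level spinning units, not any specific vendor’s specification.

```python
# Time-of-flight ranging and point-count arithmetic for a spinning lidar.
# All parameter values are illustrative, not a real product's spec sheet.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to a target from a laser pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A pulse returning after ~667 ns corresponds to a target ~100 m away.
print(round(tof_distance(667e-9)))  # 100

# A 16-channel unit firing every 0.2 degrees of rotation yields
# 16 * (360 / 0.2) = 28,800 points per revolution.
channels, step_deg = 16, 0.2
points_per_rev = channels * int(360 / step_deg)
print(points_per_rev)  # 28800
```

The per-pulse timing is why detector alignment matters so much: each channel’s return has to be matched to the laser that fired it, at nanosecond precision.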
In a bigger step, researchers have realized it’s possible to get similar results from a semiconductor device much like a typical camera sensor, except that it uses laser returns to record distance information on a grid of pixels. This offers both lower cost and easier integration into windshields or auto pillars. The biggest downside is a limited field of view, typically around 120 degrees. That means a self-driving system would need several of these lidars and would then have to integrate their combined output.
Initially, at least, semiconductor lidar will also have less range than the larger spinning models. Full coverage would require either several spinning units mounted at the corners of the car, or one large spinning unit plus several semiconductor units to cover its blind spots. Because of lidar’s current high price and large size, many auto designs use cameras or other less-expensive sensors to cover the areas the rooftop lidar can’t see.
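The coverage arithmetic is simple to sketch. The snippet below estimates how many fixed units of a given horizontal field of view are needed to ring a car with full 360-degree coverage; the 10-degree overlap margin is a made-up illustrative value, since real designs choose overlap based on calibration and stitching needs.

```python
import math

def units_needed(fov_deg: float, overlap_deg: float = 10.0) -> int:
    """Fixed sensors of a given horizontal FOV needed for 360-degree coverage,
    reserving some overlap between neighbors for stitching."""
    effective = fov_deg - overlap_deg  # each unit's unique contribution
    return math.ceil(360.0 / effective)

print(units_needed(120))     # 4  (120-degree units with 10 degrees of overlap)
print(units_needed(120, 0))  # 3  (no overlap margin at all)
```

This is why solid-state designs are usually pitched in sets of three or more per vehicle, rather than as a one-for-one replacement for a rooftop spinner.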
For Most People Lidar Means Velodyne
Ever since speaker manufacturer Velodyne helped develop a lidar design for the DARPA Grand Challenge, its name has become practically synonymous with lidar. When you see a large spinning device on the roof of a car, it’s almost certainly an expensive Velodyne unit. Most of the units currently in the field feature 64 channels (the number of lasers, aligned in a vertical column), although the latest models can have 128 channels, or come in smaller versions with 32. It seems likely that flagship research and mapping vehicles will always benefit from the maximum possible resolution, but most of the prototype vehicles being touted by car companies are sporting several smaller units.
Velodyne doesn’t have the automotive lidar field to itself anymore, by any means. I lost count of the number of companies selling lidar units at CES after a dozen or so. Not all of them make the entire unit. Many, like LeddarTech, specialize in integrating their signal processing with sensors from other companies such as Osram. But a few of the most innovative startups stood out for moving up the “stack” to include multiple sensors and sensor fusion in their lidar devices.
Sensor Fusion Is the Next Step: AEye and TetraVue
While moving to semiconductor-based solutions will greatly reduce the current cost of lidar units, there is still plenty of room for further innovation and integration. Since lidar is only one of the many inputs needed for an autonomous vehicle, it’s natural to optimize the fusion of multiple different sensors into a coherent data model. Right now that fusion is done in a power-hungry GPU like one of Nvidia’s Drive mini-supercomputers.
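One core step in that fusion is registering lidar points against camera pixels, so each pixel can carry both color and depth. Here is a minimal sketch using a pinhole camera model; the intrinsics (focal lengths, image center) are made-up illustrative values, not any real vehicle’s calibration, and a production system would also handle lens distortion and the lidar-to-camera transform.

```python
# Minimal camera/lidar fusion step: project a 3D lidar point into a camera's
# pixel grid with a pinhole model. Intrinsics below are illustrative only.

def project(point, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point (x right, y down, z forward, meters) to pixels,
    returning (u, v, depth) or None if the point is behind the camera."""
    x, y, z = point
    if z <= 0:
        return None  # behind the image plane; not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v, z)

# A point 20 m ahead and 2 m to the right lands right of image center.
print(project((2.0, 0.0, 20.0)))  # (740.0, 360.0, 20.0)
```

Doing this projection (and the reverse lookup) for hundreds of thousands of points per second is a large part of what currently keeps fusion on power-hungry GPUs.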
Startup AEye plans to integrate both lidar and a traditional camera into its sensor, while adding enough intelligence that it can optimize the laser pattern it emits based on feedback from both the lidar and the camera. It expects the result to be as much as five times more efficient than a typical MEMS-based lidar solution, and to provide a full RGB+depth image. Staking out new ground, the company calls its approach iDAR, and says it expects iDAR to be 10 to 20 times more effective than traditional lidar at identifying objects. It expects to deliver initial units to customers this year.
TetraVue aims to achieve something similar, but in a different way. It’s working on a system where a traditional camera sensor is fitted with a light slicer, so that in addition to RGB data the unit also captures accurate depth information. It hopes to get beta units out later this year, and with backing from Samsung, Foxconn, and Bosch, there is certainly reason to watch whether this radically different approach pays off.
Osram and EPC: Selling Shovels to the Miners
In any gold rush, the company most likely to make money is the one selling tools to the miners. In this case, digging under the array of lidar vendors, there are a couple of companies able to play the field. Osram, a semiconductor maker, has its silicon embedded in many different lidar designs, including Velodyne’s. At an even more fundamental level, Efficient Power Conversion (EPC) provides the high-speed gallium nitride (GaN) semiconductors necessary to rapidly fire the lasers in lidar. When I asked CEO Alex Lidow which lidar companies EPC supplies, he simply said, “Basically, all of them.”
Whoever the ultimate winners are, the benefits to car companies and car buyers are clear. We’re going to have access to much lower-cost, more flexible sensor solutions for both autonomous vehicles and driver assistance systems, thanks to a worldwide flurry of innovation in lidar.
(Top image courtesy of Velodyne)