Lidar Sensors: Applications and Opportunities
Jae-Yong Lee is an Investment Manager at Rewired.
In March of 2004, fifteen self-driving cars raced across the Mojave Desert in the inaugural DARPA Grand Challenge. Just over a decade later, autonomous vehicles are the “it” thing, thanks in no small part to Light Detection and Ranging (lidar) sensors. In fact, Velodyne, originally a speaker company and now a leading force in lidar innovation, kick-started its lidar development because of this challenge. Lidar has since become the preferred sensing method for autonomous navigation.
The layperson rarely hears about lidars outside of their applications for autonomous vehicles, but they have been around since the early 1960s. They started with — and continue to find a wide range of applications in — geomatics, archaeology, forestry, atmospheric studies, defense, and other industries.
SLAM Technology and the Rise of Lidars
SLAM (Simultaneous Localization and Mapping), by its most basic definition, is the simultaneous estimation of the state of a robot equipped with on-board sensors and the construction of a model (i.e. the map) of the environment that the sensors are perceiving.
In essence, SLAM allows industrial robots to enter and interact with unknown environments, with no access to external help (e.g. GPS, Wi-Fi, beacons, etc.). The robotics industry as a whole will be heavily reliant on advancements in SLAM technologies.
Beyond its basic navigational enablement, SLAM offers a natural defense against wrong data association (i.e. loop closure to understand the real topology of the environment) and perceptual aliasing, where similar-looking scenes corresponding to distinct locations in the environment would deceive place recognition. In other words, SLAM helps robots know what they don’t know, and avoid mistaking one place for another.
Today’s SLAM systems are composed of two components. The front-end — the data collection end — abstracts sensor data into models that are amenable to estimation. The back-end — the data analyzer — then makes inferences on this abstracted data.
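To make the split concrete, here is a minimal, hypothetical pose-graph sketch in Python (the one-dimensional setting and all numbers are invented): the front-end supplies relative-pose constraints from odometry and a loop-closure detection, and the back-end refines the poses by least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Front-end output (illustrative): relative 1-D pose constraints (i, j, measurement).
odometry = [(0, 1, 1.05), (1, 2, 0.98), (2, 3, 1.02)]   # chained odometry edges
loop_closures = [(3, 0, -2.96)]                          # place recognition closes the loop
constraints = odometry + loop_closures

def residuals(free):
    """Back-end: pose 0 is fixed at the origin; the remaining poses are estimated."""
    x = np.concatenate(([0.0], free))
    return [(x[j] - x[i]) - z for i, j, z in constraints]

initial = np.cumsum([1.05, 0.98, 1.02])    # dead-reckoned guess for poses 1..3
solution = least_squares(residuals, initial)
print("poses 1..3:", solution.x)           # odometry drift is spread across the trajectory
```

The loop-closure edge is what corrects accumulated drift: without it, the back-end could only echo the dead-reckoned guess.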
Over the years, there have been two main drivers of progress in SLAM: new algorithms (the back-end) and the availability of novel sensors (the front-end). In recent applications, 3D lidar sensors have taken center stage. Solid-state technology has accelerated the pace of sensor innovation.
According to analyst firm Frost & Sullivan, over 90% of all driverless cars in development have solid-state lidars. In addition to the advantages laid out below, having a solid-state lidar as the primary sensor of a vehicle is a critical safety measure.
The advantages of solid-state lidar:
- Long-distance scanning up to 300 m
- No moving parts, which means fewer inaccuracies
- Better integration into automobile design
- Increased reliability
A Lidar for All Industries
While lidar has become the go-to technology for most autonomous vehicles, there are any number of potential applications for these kinds of sensors.
In geology, for example, understanding the landscape in detail is key to reducing property damage and saving lives in areas where landslides are prevalent. The USGS Landslide Hazards Program uses static lidars for landslide recognition, hazard assessment, and mitigation reports.
Singapore leverages handheld lidars to map the future of its smart city so it can understand and tackle urban problems more effectively and efficiently. Using this technology, for example, Singapore is in the process of planning and optimizing evacuation routes, an extremely challenging project in a city with a population density of 7,909 people per km². Another use of the handheld lidar is infrastructure optimization, as evidenced by the Verity-GeoSLAM collaboration for real-time construction quality management.
Lidar scanning has also transformed archaeology, producing stunning maps that have led to major discoveries in recent years. Airborne lidars have revealed new parts of an ancient Cambodian city and uncovered the “City of the Monkey God,” a legendary lost civilization buried in the Honduran rainforest. Lidar scanning is employed not only for discovering historical landmarks, but also for preserving them. One example is the Norfolk Broads historic windpump restoration project in the UK, in which laser scanners are used to create high-resolution 3D models that help experts with their renovation work.
The list goes on, with use cases in precision agriculture, mining, solar energy optimization, and meteorology. Lidar sensors have applications in nearly every industry where careful visualization is needed.
The Humanitarian Approach
At Rewired, we are especially interested in the humanitarian applications of machine-perception technologies. We believe that by turning our attention to the basic needs of the world, we raise the baseline quality of life for everyone. Lidar sensors are among the machine-perception technologies helping us understand and interact with the world around us in unprecedented detail. As a result, they have profound potential for solving some of our most wicked problems.
Climate Change
Lidar scanning has led to advances in tropical forest mapping, helping researchers better understand how climate change and human exploitation are depleting this essential resource. Airborne lidars generate highly accurate models of the ground surface and canopy. Above-ground biomass (AGB) can then be calculated using regression models that link lidar metrics to biomass estimates from forest inventory plots, using predictors such as mean and maximum canopy height, vertical canopy measures, height percentiles, and the variance of heights.
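As an illustration, here is a minimal sketch of such a regression in Python, with made-up plot data and predictor values (real studies fit models of this shape to field-measured biomass at co-located inventory plots):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative only: lidar canopy metrics for hypothetical forest inventory plots.
# Columns: mean canopy height (m), max height (m), 75th height percentile (m), height variance.
lidar_metrics = np.array([
    [18.2, 31.0, 24.1, 12.3],
    [22.5, 38.4, 29.8, 15.1],
    [12.7, 20.9, 16.0,  6.4],
    [25.1, 41.2, 33.5, 18.9],
    [15.9, 27.3, 20.7,  9.8],
])
agb_field = np.array([210.0, 295.0, 120.0, 340.0, 165.0])  # field-measured AGB (Mg/ha)

model = LinearRegression().fit(lidar_metrics, agb_field)

# Predict AGB for a new lidar tile described by the same four canopy metrics.
new_tile = np.array([[20.0, 33.5, 26.2, 13.0]])
print(f"estimated AGB: {model.predict(new_tile)[0]:.0f} Mg/ha")
```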
There are still challenges in accurately mapping tropical forests because of sensor-specific uncertainties (e.g. GPS inaccuracy), geometric inaccuracies (e.g. scanning angle), and uncertainties introduced during post-processing (e.g. poorly chosen inputs for surface interpolation).
Tropical forests are highly complex and more difficult for sensors to capture than other environments; nevertheless, lidar sensors have significantly increased our knowledge of rainforest depletion and enhanced preservation efforts.
Disaster Relief
During the Haiti earthquake of January 2010, a single pass by a business jet equipped with a US Air Force lidar flying at 10,000 feet over Port-au-Prince captured snapshots of 600 m² at a resolution of 30 cm, displaying the precise height of rubble in city streets.
Today’s airborne lidar systems are much more sensitive. Individual detectors move mechanically alongside the laser to capture a wider field of view. As a result, we can more quickly determine the impact of earthquakes or tsunamis and more accurately assess damage to infrastructure.
Modern military lidars are far more accurate than they were in 2010, and they produce much larger maps, more quickly. Instead of silicon, today’s lidars are constructed using indium gallium arsenide, a semiconductor that operates in the infrared spectrum at a relatively long wavelength, increasing the power and range of airborne laser scanning.
Hydrographic Mapping
Accurately representing water masses poses a particular challenge as the light emitted by the sensor is reflected by the water surface. As a result, other lidars, such as those made for terrain modeling, are not adequate for hydrography.
While most datasets produced by terrain-oriented lidars are discarded for their overt inconsistency over water, data from hydrographic lidars are routinely used to update the National Hydrography Dataset of the US Geological Survey (the “gold standard” of hydrographic mapping) and produce higher-quality maps. This is because hydrographic lidars take into account operating geometry, propagation-induced biases, tide effects, and wave heights. Hydrographic lidars also include underwater lidars, which, using the laser-triangulation principle, can produce accurate 3D representations of river floors and subsea maps, as well as high-precision underwater distance measurements for inspection and survey purposes.
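The triangulation geometry itself is compact. Below is a Python sketch of the textbook relation, with invented numbers rather than the parameters of any particular underwater sensor: the camera-to-laser baseline, the camera focal length, and the imaged offset of the laser spot together determine the range.

```python
def triangulation_range(baseline_m: float, focal_px: float, spot_offset_px: float) -> float:
    """Range to the laser spot for a laser beam parallel to the camera axis.

    A spot at depth z appears shifted by focal_px * baseline_m / z pixels,
    so inverting gives z = focal_px * baseline_m / spot_offset_px.
    """
    if spot_offset_px <= 0:
        raise ValueError("spot not detected or effectively at infinity")
    return focal_px * baseline_m / spot_offset_px

# Invented numbers: 10 cm baseline, 1400 px focal length, spot found 70 px off-center.
print(f"range: {triangulation_range(0.10, 1400.0, 70.0):.2f} m")  # -> 2.00 m
```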
Key policy and engineering recommendations pertaining to water pollution, flooding, drought, and climate change depend on analyses run on the National Hydrography Dataset, including water quality modeling, water runoff modeling, and stream flow tracing. This is where the latest lidar developments can have a humanitarian impact: more accurate, higher-resolution representations of water masses and flow dynamics give us a better understanding of cause and consequence, so we can tackle environmental issues accordingly.
A Quantum Leap Forward
The diversity of lidar applications, along with use case-specific advancements, shows that lidar technologies and SLAM alike have come a long way; yet we are on the cusp of unlocking far more opportunities. Where does the technology stand today? Which key limitations keep us from making better use of lidars? There is much room for improvement, especially when using lidars to equip robots with sensory capabilities. For example:
- Robots continue to lose tracking because of visual disturbances, dynamic obstacles, and sensor fusion issues.
- There is currently no way for the machine to be failure-aware.
- The sheer volume of captured data, and the processing needed to position it, is pushing the limits of mobile computational resources (one common mitigation is sketched below).
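On that last point, a minimal sketch of one common mitigation, voxel-grid downsampling, assuming a random stand-in point cloud and a keep-first-point-per-voxel policy (real pipelines often average the points in a voxel or adapt the voxel size):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point per occupied voxel (here: the first seen)."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(keep)]

cloud = np.random.uniform(-20.0, 20.0, size=(100_000, 3))   # stand-in for a lidar sweep
thin = voxel_downsample(cloud, voxel_size=0.5)
print(f"{len(cloud)} points -> {len(thin)} points")
```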
The challenge before us is to develop mapping that is not only failure-safe but also failure-aware. High-quality SLAM, as employed in military operations and scientific research, still needs human supervision and verification to guard against computer-generated error. Classification errors made by automatic procedures, for example, still need to be corrected manually.
Right now, the most common approach is to address sensor fusion using lidars, gyroscopes, odometers, inertial sensors, and so on. The logic is that more inputs generate more data; but that data then requires more human effort to verify and to track down failures. This is costly in terms of human labor, computational power, and electronic thermal management.
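For concreteness, here is a toy sketch of the fusion idea in Python, assuming a one-dimensional robot whose odometry is corrected by lidar position fixes (the noise values and measurements are invented; production fusion stacks are multi-dimensional and far more elaborate):

```python
# Toy 1-D Kalman filter: fuse odometry (prediction) with lidar position fixes (update).
x, p = 0.0, 1.0          # state estimate and its variance
q, r = 0.05, 0.20        # odometry process noise, lidar measurement noise (illustrative)

odometry_steps = [1.02, 0.97, 1.05, 1.01]    # reported displacement each step
lidar_fixes = [1.00, 2.10, 3.00, 4.05]       # absolute position from scan matching

for u, z in zip(odometry_steps, lidar_fixes):
    x, p = x + u, p + q                      # predict: dead-reckon with odometry
    k = p / (p + r)                          # Kalman gain: how much to trust the fix
    x, p = x + k * (z - x), (1 - k) * p      # correct: blend in the lidar measurement
    print(f"fused position: {x:.2f} (variance {p:.3f})")
```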
One potential solution is to leverage a deep neural network to regress the inter-frame pose between two captures acquired from a moving robot, directly from the raw capture pair. This can replace the standard geometric pipeline of visual odometry. Likewise, random forests and deep convolutional neural networks have made estimating the depth of a scene from a single input both more accurate and more efficient. Recent developments that support this direction include Intel’s Nervana Neural Network Processor family and Nvidia’s recent move to release as open source a way to design deep learning inference accelerators.
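A skeletal sketch of the pose-regression idea in Python with PyTorch (the architecture, sizes, and names here are invented for illustration; published systems are much deeper and are trained on real trajectories): a small convolutional network ingests a stacked capture pair and regresses the six relative-pose parameters.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Toy network: two stacked RGB frames in, 6-DoF relative pose out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)   # 3 translation + 3 rotation parameters

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)   # stack the capture pair channel-wise
        return self.head(self.features(x).flatten(1))

model = PoseRegressor()
pair = torch.randn(1, 3, 128, 416), torch.randn(1, 3, 128, 416)  # dummy frames
print(model(*pair).shape)   # torch.Size([1, 6]): the estimated inter-frame pose
```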
The same is true for airborne and underwater lidar needs. Algorithms that efficiently integrate the geometric information provided by lidar sensors with additional information from active and passive sensors will be crucial to unlocking the untapped potential of existing data.
There are any number of paths forward. What’s clear, however, is that the back-end is increasingly becoming the main force multiplier. As sensors continue to develop at a quickening pace, we must pair them with a better brain. Smart lidars will unlock an entirely new world of possibility and application areas.