From 2.5D to (roughly) 2.9D

Until just a few years ago, aerial lidar was collected from relatively high altitudes using conventional aircraft. The flying-height-to-object-height ratio was pretty large, and the resulting data looked not so much three-dimensional as what many in this field call 2.5-dimensional. This term is equal parts irreverent and accurate: aerial lidar did not capture well the sides of buildings, the undersides of bridges or tree canopies, and so on. Aerial lidar was more akin to a two-dimensional blanket of light draped over a lumpy world, and the result was, well, not truly three-dimensional. Even multi-return and full-waveform systems, while providing a much richer view of vegetated areas, still deliver an essentially top-down view, and are not fully 3D.

Unoccupied aerial systems change the nature of the lidar point cloud. That flying-height-to-target-height ratio is much smaller with UAS, so laser beams reflect back from surfaces at steeper angles. Sides of buildings and cliffs are sensed better, and it’s becoming more feasible to see around and under obstructions. The two-dimensional blanket is beginning to feel more like, say, shrink wrap. This is still not truly three-dimensional, at least not in my mind. I’m calling it 2.9D.

There are myriad implications for this new way of illuminating objects from the low altitudes possible with UAS, and most of them are flashy and exciting. But the fundamental way we filter and process our point clouds shouldn’t get lost in the shuffle.

Scan angles != incidence angles
It is common practice to filter out from UAS point clouds those returns that have a high scan angle. From a 2.5D standpoint, this makes sense: a laser return from a steep angle will be less accurate. (This isn’t a problem in conventional aerial lidar, because the angular field of view is already limited from the start.) But we have to remember that scan angle measures the angle of the laser pulse as it was sent from the scanner, and that measure is in the scanner’s coordinate system. Unless the scanner is parallel to perfectly flat ground, scan angle does not equal the incidence angle. Incidence angle is the angle between the laser pulse and the surface normal of the target, and it is much more pertinent to how reliable that laser return is. Naively filtering returns with high scan angles will surely get rid of many undesirable returns, but it could mean losing valuable information, particularly on vegetation, steep topography, and buildings in the scene.
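To make the distinction concrete, here is a minimal sketch (mine, not from any tool mentioned here) of computing an incidence angle with NumPy, given a pulse direction and an estimated surface normal. It shows how a nadir pulse with a scan angle of zero can still have a large incidence angle on sloped terrain.

```python
import numpy as np

def incidence_angle_deg(pulse_direction, surface_normal):
    """Angle between the incoming laser pulse and the target's surface normal.

    A head-on hit (pulse anti-parallel to the normal) gives 0 degrees;
    a grazing hit approaches 90 degrees.
    """
    d = np.asarray(pulse_direction, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    d /= np.linalg.norm(d)
    n /= np.linalg.norm(n)
    # The pulse travels *toward* the surface, so compare the normal with -d.
    cos_theta = np.clip(np.dot(-d, n), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Nadir pulse onto flat ground: scan angle 0, incidence angle 0.
print(incidence_angle_deg([0, 0, -1], [0, 0, 1]))            # 0.0
# The same nadir pulse onto a 45-degree slope: scan angle is still 0,
# but the incidence angle is 45 degrees.
print(incidence_angle_deg([0, 0, -1], [0, 0.7071, 0.7071]))  # ~45.0
```

In practice the surface normal would come from a local plane fit around each return, which is exactly the information a scan-angle-only filter never sees.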

“Points per square meter” doesn’t make sense
Another way we talk about point clouds is to speak of their density in terms of points per square meter. When it comes to UAS lidar, unless we’re scanning a barren landscape, this number can be somewhat misleading. And though the end user might not care about the difference between surface density and volume density, the point cloud processor ought to care very much.
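A toy example (entirely synthetic, just to illustrate the gap) shows how the two densities diverge once a scene has vertical structure: the planimetric number stays flat while the volumetric number tells a very different story.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cloud over a 10 m x 10 m footprint: half the returns near the
# ground, half spread through 20 m of canopy or facade above it.
ground = rng.uniform([0, 0, 0.0], [10, 10, 0.1], size=(5000, 3))
vertical = rng.uniform([0, 0, 0.0], [10, 10, 20.0], size=(5000, 3))
cloud = np.vstack([ground, vertical])

area = 10 * 10         # planimetric footprint, m^2
volume = 10 * 10 * 20  # occupied 3D extent, m^3

print(len(cloud) / area)    # 100.0 points per square meter
print(len(cloud) / volume)  # 5.0 points per cubic meter
```

The same 10,000 returns read as a dense 100 pts/m² survey or a sparse 5 pts/m³ volume, depending on which denominator you pick.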

One of the steps many of us take when processing UAS lidar point clouds is to thin the cloud to make it more reasonable to analyze or deliver to another researcher. After all, most UAS laser scanners spit out hundreds of thousands of pulses per second, and depending on who you ask, that’s way too much for most applications. Decisions about thinning a conventional aerial point cloud could be made in two dimensions, but decisions about UAS point clouds should be made in three dimensions. There are tools in ArcGIS and LAStools, and surely in other software, that allow the processor to thin more thoughtfully. But at the very least, one should decide on a desired volume density (points per cubic meter) before thinning.
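As a rough sketch of what three-dimensional thinning looks like (my own simplification, not how ArcGIS or LAStools implement it), a voxel filter keeps at most one return per cell, which caps the volume density directly rather than flattening everything to a 2D grid first.

```python
import numpy as np

def voxel_thin(points, voxel_size=1.0):
    """Keep the first return falling in each voxel_size^3 cell.

    Capping occupancy at one point per cell bounds the volume density
    at roughly 1 / voxel_size^3 points per cubic meter.
    """
    cells = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows returns one representative index per occupied voxel.
    _, keep = np.unique(cells, axis=0, return_index=True)
    return points[np.sort(keep)]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(50000, 3))  # dense synthetic 10 m cube
thinned = voxel_thin(cloud, voxel_size=1.0)
print(len(thinned))  # at most 1000: one per occupied 1 m^3 voxel
```

A 2D thinner applied to the same cube would collapse the vertical dimension entirely, discarding exactly the facade and canopy returns that make UAS clouds worth collecting.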
