One of the tools Pixel8 leverages to help georectify 3D point clouds from video/photos is open aerial LiDAR data like that from USGS’s 3DEP program. Aerial LiDAR is a fabulous open data reference, but for terrestrial applications like augmented reality and autonomy there are some glaring challenges. In this post we’ll talk about the shortcomings of LiDAR and other aerial 3D references, and how we can use them as a springboard to create a perpetually updating 3D reference point cloud by fusing multiple data sources together.
Recently we built a sandbox for Pixel8 where customers could test out the platform by uploading a few demo videos.
In addition to showing off typical georectification against an aerial point cloud, we wanted to illustrate how the Pixel8 reference is updated as video/photo-derived point clouds are added to the platform. In the image below we can see how the open 3DEP LiDAR (in green) has been augmented with previous Pixel8 collects (natural color) to create a new reference for building even more accurately aligned 3D models (in pink).
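To make the idea concrete, here’s a minimal sketch (not Pixel8’s actual pipeline) of how a new photo-derived cloud can be merged into an existing reference. It uses a simple voxel-grid deduplication in NumPy, so repeated collects refine the reference rather than bloat it; the voxel size and sample points are made up for illustration:

```python
import numpy as np

def fuse_clouds(reference, new_points, voxel=0.5):
    """Merge a new photo-derived cloud into the reference, keeping at
    most one point per voxel so overlapping collects don't pile up."""
    merged = np.vstack([reference, new_points])
    keys = np.floor(merged / voxel).astype(np.int64)
    # keep the first point seen in each occupied voxel
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]

aerial = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
collect = np.array([[0.1, 0.05, 0.0],   # falls in the same voxel as an aerial point
                    [5.0, 5.0, 2.0]])   # genuinely new structure
fused = fuse_clouds(aerial, collect, voxel=0.5)
```

A production fusion step would also weight points by source accuracy and recency, but the core idea is the same: the reference grows only where new collects add information.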
Using aerial/satellite data as a reference is great if your target is aligning aerial imagery. Trying to align terrestrial imagery to an aerial reference is one of the tougher computer vision problems we’ve tackled. To highlight this, check out the level of detail of just the 3DEP LiDAR.
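As a toy illustration of the geometry involved (not the computer vision pipeline itself, whose hard part is finding cross-view correspondences in the first place), here’s a Kabsch/SVD sketch in NumPy that recovers the rigid rotation and translation aligning one cloud to another once matched points are given; the rotation and offset below are invented for the example:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm). Assumes matched correspondences are already known."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy check: recover a known 90-degree yaw plus an offset
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dst = src @ R_true.T + np.array([10.0, 20.0, 5.0])
R, t = rigid_align(src, dst)
```

With clean correspondences this recovers the transform exactly; the terrestrial-to-aerial problem is hard precisely because a street-level photo and an aerial LiDAR sweep see such different views of the same geometry.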
While aerial LiDAR is a good springboard when you have no other choice, we don’t want to stay there. Compare the LiDAR-only reference to the same reference augmented with Pixel8 crowdsourced point clouds.
A big challenge of georectifying imagery, especially terrestrial imagery, has historically been a set of binary choices, often driven by data availability and cost.
By fusing both aerial and terrestrial views together, and having the ability to perpetually update them with commodity data, we bring together the best of both worlds. The 3D models are cool, but we think a globally updating, multi-perspective 3D reference is where the long-term value will accrue.