As the team has been building out the Pixel8 platform, we’ve leaned heavily on the magic of commodity cameras to create great photogrammetry quickly. As much as we love GoPro and Insta360 cameras, we are remote sensing practitioners at heart. We couldn’t sit back for long without expanding our pipeline to handle data from drones, airplanes, and satellites. If we truly want a perpetually updating 3D map of the globe, the bird’s-eye perspective has to be part of the equation. The rest of this post dives into why we want to handle aerial and terrestrial photos in one pipeline, then details the aerial pipeline and the use cases for hobbyist drone and survey-grade aerial imagery.
One Pipeline to Bind Them
Bad LOTR metaphors aside, the big benefit of running both terrestrial and aerial imagery through the same pipeline is much better point correspondence between the two perspectives. This is especially true when we use a common reference system for the data. We avoid introducing additional errors in the co-registration process that arise when two different SfM methods are run on the data. We’ve found in the past that these kinds of SfM differences can introduce artifacts that 1) decrease the accuracy of the co-registration and 2) often require additional code-based fixes to prevent systemic errors. A single pipeline avoids these issues. In addition, there is so much amazing open aerial photogrammetry emerging that we’d be remiss not to try to help stitch it all together.
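To make the co-registration point concrete, here is a toy sketch (not Pixel8’s actual code) of the kind of rigid alignment step a separate co-registration pass has to perform: given corresponding points between, say, a terrestrial and an aerial cloud, estimate the rotation and translation between them with the classic Kabsch/SVD method. Every such extra alignment step is a place where errors can creep in, which is the argument for one joint pipeline.

```python
import numpy as np

def kabsch_align(P, Q):
    """Estimate rotation R and translation t so that R @ p + t ~= q
    for corresponding point rows of P and Q (both shape (n, 3))."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy "terrestrial" cloud and the same points seen in an "aerial" frame.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])       # 90-degree rotation about z
t_true = np.array([10.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R, t = kabsch_align(P, Q)
residual = np.abs(P @ R.T + t - Q).max()
```

With clean, exact correspondences this recovers the transform essentially perfectly; in practice, correspondences between SfM outputs are noisy and partial, and artifacts that differ between two SfM solves degrade exactly this step.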
The Drone Use Case
When the team started building the aerial photo pipeline, there were two core use cases we were looking to support. The first is drone imagery where the vehicle has only a commodity camera with GPS, which covers most hobbyist and low-cost drone scenarios. One of the persistent challenges in generating drone-based photogrammetry is orthorectifying the results. The most common approach is to create a set of ground control points (GCPs), which works well but is time consuming and requires a bit of surveying skill.
As an alternative, our aerial photo pipeline lets you leverage our ground control surface to orthorectify drone imagery, skipping the whole GCP setup and management process. Just upload your drone images with EXIF+GPS and you are off to the races. As an example, we used a set of 45 photos from DroneMapper of the Red Rocks amphitheater outside of Golden, CO. You can see the resulting point cloud in the two images below.
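For anyone curious what "EXIF+GPS" means mechanically: cameras store latitude and longitude in the EXIF GPS tags as degree/minute/second rationals plus a hemisphere reference ('N'/'S'/'E'/'W'). A minimal sketch of the conversion an ingest step might perform (the sample values below are illustrative, roughly in the Red Rocks area, not taken from the DroneMapper set):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West hemispheres are negative in decimal-degree convention.
    return -decimal if ref in ("S", "W") else decimal

# Illustrative values in the EXIF GPSLatitude/GPSLongitude style.
lat = dms_to_decimal(39, 39, 54.0, "N")    # about 39.665
lon = dms_to_decimal(105, 12, 20.5, "W")   # about -105.2057
```

Once each image carries a decimal lat/lon, the camera positions can seed the SfM solve and the result can be pinned to the ground control surface without any GCP fieldwork.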
The next step will be a field trip out to Red Rocks to collect some terrestrial video footage and combine the two together! We just have to wait for the snow to pass :-) While natural terrain from a distance always looks cool, a better photogrammetry test is a building close-up with lots of glass windows. Here is an office building in Germany from Pix4D’s sample data catalog.
Here is another quick example, an office building sample with closer detail.
The upshot: if you are game to make your data open, you can use a hosted public pipeline with a simple user interface, and it won’t cost you anything to run. You can also host the results in the cloud to share with others and collaboratively stitch them together with a growing set of open public models.
The Aerial Survey Use Case
The second aerial use case is connecting the pipeline to oblique survey-grade images from aerial providers like Nearmap, EagleView, Bluesky, or Vexcel. In these scenarios the images carry survey-grade metadata that lets us use the resulting point clouds as a registration reference. While public open aerial LiDAR exists for many places, it isn’t available everywhere we want to map. For those areas we can use off-the-shelf oblique imagery to build our own point cloud reference data. Fortunately, pre-collected oblique imagery is both high quality and reasonably priced. Below is an example of a sparse reference point cloud generated for the University of Colorado’s campus in Boulder.
In addition to acting as a wonderful reference, the aerial point clouds can be used to generate photorealistic point clouds and meshes. That is nothing new in and of itself, but it gets interesting when we use the same realistic data as a reference to co-register your terrestrial models. Most often digital twins have to choose between cinematic aerial views and detailed terrestrial views that look like Hollywood western sets from above. Now we can bring you the best of both worlds: gritty terrestrial detail and all-encompassing aerial perspectives. We need to carve out some time to combine the CU Boulder terrestrial and aerial data and secure all the appropriate permissions, but we hope to do that soon. Till then we’ll be working on opening the platform to more people, and if you are interested, definitely add your name here.