Industrial and Commodity Data Agreeing on X,Y,Z

Pixel8Earth
Dec 11, 2019


Introduction

When we started the Pixel8 Boulder experiment we wanted to create an open point cloud for ourselves and the community to play with. Thanks to some help from our friends we now have a robust point cloud for Boulder, and we’ve had some time to start iterating with it. While the photogrammetry process of turning photos into point clouds is the eye-catching part of the work, the true core of Pixel8 is the conflation of multiple sensors through co-registration. In short: turning relative point clouds in Cartesian space into real-world models with a spatial coordinate system (longitude, latitude, altitude).
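To make that distinction concrete, here is a minimal sketch of how a geodetic coordinate (latitude, longitude, altitude) maps into a global Cartesian frame, Earth-centered Earth-fixed (ECEF), under the WGS84 ellipsoid. The function name is ours for illustration; a production pipeline would typically lean on a library like pyproj rather than hand-rolling the math.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis (meters)
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic (latitude, longitude, altitude) to
    Earth-centered Earth-fixed Cartesian X, Y, Z in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```

Going the other way, from a photogrammetry model's arbitrary Cartesian frame back to geodetic coordinates, is exactly what co-registration against a georeferenced baseline buys you.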

Cartesian vs. Spatial Coordinate Systems

The market reason we’ve focused on this problem is that the emerging opportunities in augmented reality (AR) and autonomy will likely need a different data collection paradigm. Traditionally, high precision data collection for a variety of mapping use cases has depended on expensive industrial sensors. Often these are deployed on aerial and satellite platforms, and more recently on drones and sensor-enhanced cars.

Industrial Data Collection

While these industrial sensors are modern marvels of technology, operating and deploying them is an expensive proposition. Each has pros and cons in regards to accuracy, resolution, perspective, frequency and relative build/run cost. The challenge is that global 3D maps for emerging markets will need high frequency and high accuracy updates. Expensive industrial sensors will be hard to scale economically to meet these new frequency demands.

The good news is that commodity photo and video sensors are improving in quality and scale. The challenge is that the geospatial accuracy of these commodity sensors has been GPS grade (3–5m best case), which often doesn’t cut it for the new AR and autonomy markets.

Commodity Data Collection

Ideally we could have the accuracy of industrial collection with the cheap ubiquity of commodity collection. This is the challenge that data conflation through point cloud co-registration solves. If we can take commodity photos/videos, use photogrammetry to generate point clouds, and then co-register those clouds with high accuracy against industrial collections, we get the best of both worlds. A hybrid multi-source map would have the high accuracy of an industrial collection and could be frequently updated with co-registered commodity data.
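The core of that alignment step can be sketched in a few lines. Given matched point pairs between a commodity cloud and an industrial baseline, a similarity transform (scale, rotation, translation) can be recovered in closed form with the Umeyama/Kabsch method. This is a simplification of a real co-registration pipeline, which also has to find the correspondences (e.g. via feature matching or ICP) and reject outliers; the function name is ours.

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, translation t mapping src -> dst
    from matched (N, 3) point arrays, via the Umeyama method.
    Scale matters here: photogrammetry clouds have arbitrary scale."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying `s * (R @ p) + t` to every point of the commodity cloud then drops it into the baseline's real-world coordinate frame.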

Adding a New Industrial Baseline to Existing Commodity Point Clouds

Let’s take a look at a few examples in practice. The Boulder experiment used the excellent City of Boulder aerial LiDAR to co-register our photo derived point clouds. This worked great, but what if your city doesn’t have free LiDAR, or the free LiDAR isn’t accurate enough for your use case? Then aerial imagery could be a wonderful upgrade or alternative. Nearmap was kind enough to share an aerial derived point cloud for a couple of blocks of our Boulder experiment. We used the Pixel8 georectification pipeline to co-register the Nearmap data with both the City of Boulder LiDAR and the Pixel8 point clouds. Let’s check out the results below:

Aerial is awesome for filling in all the gaps in terrestrial photo collects
Terrestrial photos provide great ground detail for areas in shadow or occluded in aerial imagery

The open question is — will the co-registration of commodity data to industrial baselines be accurate enough for emerging 3D mapping use cases? In this Nearmap experiment the commodity photo derived point clouds achieved 25cm RMSE to the Nearmap baseline. A solid first result — good enough for many use cases but not all.
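For the curious, the 25cm figure is a nearest-neighbor RMSE: each co-registered point is compared against its closest point in the baseline cloud. A brute-force sketch (our own helper name; a real pipeline would use a KD-tree for clouds of any size):

```python
import numpy as np

def cloud_rmse(cloud, baseline):
    """Root-mean-square error of each cloud point's distance to its
    nearest baseline point. Brute force: O(N*M) pairwise distances."""
    d = np.linalg.norm(cloud[:, None, :] - baseline[None, :, :], axis=2)
    nearest = d.min(axis=1)                  # closest baseline point per point
    return float(np.sqrt((nearest ** 2).mean()))
```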

Adding New Commodity Data to Existing Industrial Baselines

While we quite like our Pixel8 mobile app and the friends who turned up to help us map, it is a small network when your goal is to map the world. Fortunately, there are lots of existing networks for photo and imagery collection. The good news is we’ve added the ability to the Pixel8 platform to upload any outdoor photos, videos or point clouds and generate georectified data. You can get a sense for the workflow in this engineering diagram from Pramukta.

Pixel8 Platform Workflow

To illustrate how this workflow operates we can take a fun example. Recently we saw the release of the super cool Display.land from Ubiquity6. We couldn’t pass up giving it a test run to see what was possible with their app.

Display.land reality capture

Let’s see if we can co-register the data with our ground control surface. Since we had just done the work with Nearmap, we thought it would be fun to show Ubiquity6 + Nearmap for a demo. Good news: it worked like a champ!

Co-registration of Ubiquity6 and Nearmap Aerial

A couple of caveats about the Ubiquity6 data. Anything you create can be downloaded in glTF, PLY and OBJ formats, which is awesome. For us the downside is these are all mesh formats, and we do alignment with point clouds. To get around this we sampled the meshes to generate point clouds. It is sub-optimal but works, and is why you see some warping in the Display.land clouds generated by their native meshing process. Generally point clouds are a precursor to meshes, and hopefully they’ll be exposed in future releases by Ubiquity6.
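Mesh-to-point-cloud sampling is a standard trick: pick triangles in proportion to their surface area, then draw a uniform random point inside each via barycentric coordinates. Libraries like trimesh or Open3D provide this out of the box; the hand-rolled sketch below (our own function name) shows the idea.

```python
import numpy as np

def sample_mesh(vertices, faces, n, seed=0):
    """Uniformly sample n points on a triangle mesh's surface.
    vertices: (V, 3) floats; faces: (F, 3) vertex indices."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas from the cross product, so big faces get more samples
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Square-root trick gives uniform barycentric coordinates
    r1 = np.sqrt(rng.random(n))
    r2 = rng.random(n)
    return ((1 - r1)[:, None] * v0[idx]
            + (r1 * (1 - r2))[:, None] * v1[idx]
            + (r1 * r2)[:, None] * v2[idx])
```

One design note: sampling only recovers surface geometry, not the original photogrammetric points, which is why the sampled clouds inherit any warping baked into the mesh.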

We are excited to do more experiments with more sources of photo/video/point-cloud co-registration. If you have one you’d like to test let us know. An additional area we are particularly keen to explore is leveraging the new STAC (SpatioTemporal Asset Catalog) standard to discover data available for a location, which could provide inputs to generate conflated 3D maps and feature databases. Hopefully we can contribute to the emerging STAC point cloud extension and show some cool examples leveraging it in the near future. In the meantime, if you’d like to play with the data highlighted in this post, we’ve stood up a small demo site with the Nearmap, Ubiquity6, Pixel8 mobile and City of Boulder LiDAR all co-registered. Click the “P8” icon for layer controls and “shift” + touch pad for 3D pivoting (iPhone’s WebKit has issues with the data volumes, apologies).


We are building a multi-source 3D map of the globe one image at a time.