The Department of Geosciences at Mansfield University recently approved the purchase of Pix4Dmapper, software that renders 3D models photogrammetrically through a proprietary algorithm. In practice, this means Pix4D can assemble an array of aerial images from a UAV into a three-dimensional representation of the photographed landscape. This option is significantly cheaper and more accessible than LiDAR or other 3D remote sensing technologies, and according to the DJI specs, should be nearly as accurate. Happily, Pix4D also has a dedicated smartphone app that fully integrates with our DJI Phantom 2 Vision+ UAV, providing coverage assistance to the pilot or offering fully automatic piloting specifically for mapping purposes.
To do a preliminary test of the new software, MU Geosciences students Wes Glowitz, Nate Harpster and Stephen Prinzo used the Phantom to photograph Belknap Hall on the Mansfield campus, home to the department.
Wes, Nate and Stephen fly the Phantom to acquire images of Belknap
Diagram of the UAV’s location during each shot.
We knew that this coverage would ultimately be insufficient — the application recommends around 1,000 photos for a study area of this size at full resolution — but for a test, 10% coverage would provide a basic illustration of how the software works.
Then, using Pix4Dmapper, we processed the imagery into a 3D environment. We used the free download version (which does not allow outputs to other formats) because we’re still waiting for the university to finalize the purchase of the professional license.
The software is pretty resource-intensive. The i5 quad-core machines in the Belknap GIS lab couldn't complete the processing. I ultimately finished the job at home on my wife's souped-up graphics machine (she's a photographer). The rendering took around four hours and produced a point cloud, an apparently interpolated "densified" point cloud, and a triangulated mesh to represent the area. Below are the images from the rendering process.
Point cloud looking at Belknap from the southwest.
Point cloud looking at Belknap from the southeast. In each of these photos, the shape of the building is coming into focus.
Looking at Belknap from the south. There's definitely a difference in fill between the 10% and 26% rendering stages.
Again from the south. Another big difference.
Looking from the east — now, we’re seeing more of the surrounding area come into focus. Interestingly, the yellow stripe on Sullivan Street (US Highway 6) seems to be easier for the software to render.
72% Rendered (showing area of point cloud coverage)
Not a lot changed by 72%, so I decided to show the coverage of the model’s point cloud here. This is looking from the east again, and Belknap is that very bright spot toward the right. Even though the camera’s shots were all focused strictly on Belknap, there are a lot of points showing up elsewhere on campus and in the surrounding neighborhoods.
Looking at Belknap from the south again. The maple tree on the building’s southwest corner is definitely playing tricks with the software’s ability to grab the geometry.
From the initial point cloud, the software interpolated a densified point cloud, as well as a triangulated mesh. The densified point cloud views have a distinctively pointillistic feel to them.
Densified Point Cloud
Looking from the east. Sullivan Street is clearly visible on the left side of Belknap. More interesting, though, is the amount of modeling the software did for surrounding areas. No, those mountain point clouds in the distance wouldn’t be capable of producing a detailed 3D model, but the fact that they show up at all is somewhat impressive.
Looking down from the north. More detail of neighboring Retan Center (left of Belknap), plus a bit of Grant Science Center (just up from Belknap) and Butler Hall (way up the hill).
Looking from the northeast. You can see the iconic North Hall Library (the old main building) way in the distance behind Belknap. Notice the smoke coming from the steam plant’s smokestack?
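Pix4D's densification algorithm is proprietary, so we can't say exactly how it interpolates new points. But the general idea of thickening a sparse cloud by interpolating between neighboring points can be sketched in a few lines of Python. Everything here is made up for illustration: the `densify` function, its midpoint strategy, and the toy grid of points are my own stand-ins, not anything from Pix4D.

```python
import numpy as np
from scipy.spatial import cKDTree

def densify(points, k=3):
    """Toy densification: insert the midpoint between each point
    and its k nearest neighbors (hypothetical, illustrative only)."""
    tree = cKDTree(points)
    # query k+1 neighbors; the first result is the point itself
    _, idx = tree.query(points, k=k + 1)
    midpoints = []
    for i, neighbors in enumerate(idx):
        for j in neighbors[1:]:
            midpoints.append((points[i] + points[j]) / 2.0)
    # drop duplicate midpoints (mutual neighbors produce the same one twice)
    return np.unique(np.vstack([points] + midpoints), axis=0)

# A sparse 4x4 grid of (x, y, z) points standing in for a raw cloud
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
cloud = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
dense = densify(cloud)
print(len(cloud), "points densified to", len(dense))
```

A real photogrammetry pipeline would weight the interpolation by image matches and confidence rather than blindly splitting distances, but the input/output shape of the step is the same: a sparse cloud in, a thicker cloud out.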
Another way the software interpolated the data was by constructing a triangulated mesh. Using the points rendered in the point cloud, the software attempted to create a set of interconnected vectors, over which it laid a composite of the captured imagery. This is similar to the method that Google Earth has used to represent 3D buildings since 2013, and landforms since its inception.
Looking from the east. Oddly, some of the cars seem to be melting into the parking lots…
The resolution of this mesh could obviously use some work; but remember, we used only 10% of the recommended number of images.
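Again, Pix4D's exact meshing method isn't published, but the classic way to connect scattered points into a surface of triangles is a Delaunay triangulation, which SciPy provides out of the box. The grid of points and the "building" bump below are invented sample data, not our Belknap measurements.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy ground points: a 5x5 grid of (x, y) positions, with a separate
# elevation channel holding one "building" bump (all values hypothetical)
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
elev = np.zeros_like(xs)
elev[2, 2] = 10.0  # the "building"
points_xy = np.column_stack([xs.ravel(), ys.ravel()])

# Delaunay triangulation connects the (x, y) points into triangles;
# each triangle would then be draped with elevation and imagery
mesh = Delaunay(points_xy)
print(len(mesh.simplices), "triangles connecting", mesh.npoints, "points")
```

Each row of `mesh.simplices` indexes the three corner points of one triangle; texturing those triangles with a composite of the photos is what gives the model its draped, Google Earth-like look — and also explains the melting cars, since a car that appears in only a few photos gets averaged into the pavement triangles beneath it.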
Of course, the model doesn't really show its "3D-ness" in still imagery. This past Saturday, I recorded a flyby video of the rendered model and brought it into iMovie to add some titles and music. The 720p resolution option on the video (found in the gear button) gives you the full effect.
Now that we’ve got a successful test run out of the way, we’re going to tackle a few more things, including a fully modeled version of Belknap and a test of mapping a local cemetery. If the application works in these environments, we have another really fun piece of mapping technology at our disposal.