A few days back, I sent our new DJI Phantom 2 Vision+ up above the Mansfield football stadium for some imagery testing, and published a blog post on doing lens corrections for mapping. Now, I’m going to take those corrected images to the next step and test their resolution and coverage to see how effective this equipment is for providing high-quality, high-resolution, and low-cost remote sensing imagery for mapping applications. In a future post, I’ll be looking at how much distortion remained in the imagery after the lens correction process.
Resolution of Imagery
Checking resolution before the lens correction would not have worked, simply because the inherent distortion from the lens meant that the geometries of the image were neither consistent nor representative. In addition, any resolution or coverage determined on the raw imagery would be altered by the correction process. After applying the correction filter the other day, these images are ready to check for other attributes and characteristics like resolution.
- Imagery was taken at 25-foot intervals above the surface, but… the altitude of the UAV is not 100% consistent even when in GPS hovering mode, and there is almost certainly some error from the sensors. Unfortunately, there’s not a lot I can do with this because of the equipment’s limitations, so take the altitude measurements as approximations.
- The lines on the football field provide excellent geometry, but… the lines are approximately three inches wide and therefore aren’t as precise as I might like. I will try to account for this by using the middle of the line as the basis for measurements.
- The images, despite correction, do have some distortion, particularly at the edges. To adjust for this, I will try to limit measurements to the centers of the images, which should be the least-changed pixels through the lens correction process.
To test the resolution, I’ve loaded each of the corrected images into Adobe Photoshop CC and have used the tools to straighten the images, if necessary, so that the lines are as parallel as possible to the raster’s pixel grid. From there, I’ve measured the number of pixels that represent the 10-yard (30 feet, 9.144 meters) distance between the two 45-yard lines at the middle of the field and calculated the spatial resolution that the image represents — dividing the number of pixels by distance to determine pixels per distance, then inverting that number to come up with how much distance is represented per pixel (Here’s the Google Spreadsheet if you want more info). I’ve also included an illustration of the resolution’s impact, using the smaller letter “M” in the Mansfield logo sized to a common dimension (300×300 pixels) from each altitude.
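That measure-then-invert step can be sketched in a few lines of Python. The actual per-altitude pixel counts live in the linked spreadsheet, so the count below is a hypothetical stand-in just to show the arithmetic:

```python
# Spatial resolution from a measured line spacing, as described above.
# resolution = 1 / (pixels counted / distance) = distance / pixels counted

def resolution_per_pixel(pixels_counted, distance):
    """Return the ground distance represented by one pixel."""
    pixels_per_unit = pixels_counted / distance   # pixels per unit of distance
    return 1 / pixels_per_unit                    # distance per pixel

# Hypothetical example: suppose the 10-yard (360 in) gap between the
# 45-yard lines spans 3000 pixels in a corrected image.
res_in = resolution_per_pixel(3000, 360)          # inches per pixel
print(f"{res_in:.3f} in/px ({res_in * 2.54:.3f} cm/px)")
```

The two divisions collapse to distance ÷ pixels, but keeping both steps mirrors the workflow described above.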
Quick Resolution Observations
Resolution obviously decreased at each altitude interval, but was still quite high even at 400 feet (121.92m) above the surface, which is the maximum height for citizen UAV flight per the FAA. A resolution of 2.045in/px (5.195cm/px) at 400 feet of altitude is significantly higher than other low-cost or free aerial imagery available through the web today. The resolutions at the lower altitudes, especially 25 feet (7.62m) and 50 feet (15.24m), are exceptionally high; 0.108in/px (0.275cm/px) at 25 feet gives the image viewer a pretty reasonable chance of distinguishing between different blades of Astroturf on the football field.
Coverage of Imagery
However, despite the fine resolution at the lower altitudes, there are still practical limitations to the ability to map at that resolution: the number of images needed to cover a significant area, and the amount of labor necessary to process and stitch that imagery. To gauge the feasibility of the different altitudes — and hence, different resolutions — for mapping, coverage must be determined at each altitude using the lens-corrected imagery.
Using basic photogrammetric techniques and a little bit of math, this is actually easy to calculate. The Vision+ camera is rated at 14 megapixels, and each image outputs at 4384 x 2466 pixels. From the chart above, we know the number of pixels that represents 10 yards (30 feet, or 9.144m). We can then use the following equation to determine coverage for each dimension of the image:
X = Pixel Extent × (Distance Measured ÷ Pixels Counted), where:

“Pixels Counted” = Number of Pixels in 10 yards
“Distance Measured” = 10 yards
“Pixel Extent” = Measurement of the image’s width or height
“X” = Ground length of the image’s width or height
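Here’s that coverage calculation as a quick Python sketch. The image dimensions (4384 x 2466) come from the camera spec above; the 10-yard pixel count is again a hypothetical placeholder, since the measured values are in the spreadsheet:

```python
# Ground coverage of one image: X = Pixel Extent * (Distance Measured / Pixels Counted)

def ground_length(pixel_extent, pixels_counted, distance_measured=30.0):
    """Ground length (feet) spanned by an image dimension of pixel_extent pixels."""
    return pixel_extent * distance_measured / pixels_counted

PIXELS_IN_10_YARDS = 3000  # hypothetical measured count for one altitude

width_ft = ground_length(4384, PIXELS_IN_10_YARDS)   # ground width of the frame
height_ft = ground_length(2466, PIXELS_IN_10_YARDS)  # ground height of the frame
area_sqft = width_ft * height_ft                     # footprint of one image
print(f"{width_ft:.1f} ft x {height_ft:.1f} ft = {area_sqft:.0f} sq ft")
```

Multiplying the two ground lengths together gives the per-image footprint, which is what the area columns in the spreadsheet are built from.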
So, what are the results? I think this time, it’d be easiest to just post the spreadsheet. Scroll right to see the full set of area measurements, including square feet, square meters, acres and hectares.
That’s all well and good, but let’s put those numbers into practical terms as well. Mansfield has a 174-acre campus, and let’s say we’re going to use the UAV to map the entire campus at a very high resolution (which *just might* be a forthcoming student project)… how many images would we need at each altitude to pull this off? Note that this is just a bare minimum number to account for the area of campus, not the variations in shape that might require additional coverage.
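As a rough illustration of that bare-minimum count, here’s the arithmetic for the 400-foot case, using the 2.045 in/px resolution and 4384 x 2466 frame from above (the spreadsheet’s measured footprints may differ slightly, so treat this as an approximation):

```python
import math

# Bare-minimum image count to blanket an area, ignoring overlap for
# stitching and the irregular shape of the campus.
ACRE_SQFT = 43560
campus_sqft = 174 * ACRE_SQFT                # Mansfield's 174-acre campus

res_ft_per_px = 2.045 / 12                   # 2.045 in/px at 400 ft, in feet
image_sqft = (4384 * res_ft_per_px) * (2466 * res_ft_per_px)

min_images = math.ceil(campus_sqft / image_sqft)
print(f"{min_images} images minimum at 400 ft")
```

Each halving of altitude roughly halves the ground length per pixel in both dimensions, quartering the footprint, so the minimum image count climbs fast at the lower, higher-resolution altitudes.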