Beyond Measurement: Extracting Vegetation Height from High Resolution Imagery with Deep Learning
by David Radke, Daniel Radke, John Radke
Abstract
Measuring and monitoring the height of vegetation provides important insights into forest age and habitat quality, which are essential for applications that rely on up-to-date and accurate vegetation data. Current vegetation sensing practices include ground surveys, photogrammetry, synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) sensors. While these methods offer high resolution and accuracy, their hardware and collection effort prohibit frequent, widespread collection. In response to these limitations, we designed Y-NET, a novel deep learning model that generates high resolution models of vegetation from highly recurrent multispectral aerial imagery and elevation data. Y-NET's architecture uses convolutional layers to learn correlations between the input features and vegetation height, generating an accurate vegetation surface model (VSM) at 1×1 m resolution. We evaluated Y-NET on 235 km² of the East San Francisco Bay Area and found that it achieves low error relative to LiDAR when tested on new locations. Y-NET also achieves an R² of 0.83 and, in side-by-side visual comparisons, effectively models complex vegetation. Furthermore, we show that Y-NET can identify instances of vegetation growth and mitigation by comparing aerial imagery and LiDAR collected at different times.
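The abstract describes a two-branch ("Y-shaped") convolutional design that fuses multispectral imagery with elevation data to regress per-pixel vegetation height. The sketch below illustrates that general idea in plain NumPy; the band counts, layer widths, and random weights are hypothetical, and the paper's actual Y-NET architecture is not reproduced here.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2D convolution. x: (C_in, H, W); w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1], x.shape[2]
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
        out[o] += b[o]
    return out

rng = np.random.default_rng(0)
H = W = 8
spectral = rng.random((4, H, W))    # hypothetical 4-band multispectral patch
elevation = rng.random((1, H, W))   # hypothetical elevation patch

# Branch 1: convolve the spectral bands, then ReLU.
w1, b1 = rng.normal(0, 0.1, (8, 4, 3, 3)), np.zeros(8)
f1 = np.maximum(conv2d(spectral, w1, b1), 0)

# Branch 2: convolve the elevation data, then ReLU.
w2, b2 = rng.normal(0, 0.1, (8, 1, 3, 3)), np.zeros(8)
f2 = np.maximum(conv2d(elevation, w2, b2), 0)

# Fuse the two branches and regress a per-pixel height map (1x1 conv),
# analogous to producing a 1x1 m vegetation surface model (VSM).
fused = np.concatenate([f1, f2], axis=0)          # (16, H, W)
w3, b3 = rng.normal(0, 0.1, (1, 16, 1, 1)), np.zeros(1)
vsm = conv2d(fused, w3, b3)[0]                    # (H, W) height map
print(vsm.shape)
```

In practice the learned weights would come from training against LiDAR-derived height targets; the fixed random weights here only demonstrate the data flow of a two-branch fusion network.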
Archived Files and Locations: application/pdf, 12.1 MB — res.mdpi.com (publisher), web.archive.org (webarchive)
Web Captures: https://www.mdpi.com/2072-4292/12/22/3797/htm — captured 2020-12-10 18:19:07 (58 resources) at web.archive.org (webarchive)
Open Access Publication
In DOAJ
In ISSN ROAD
In Keepers Registry
ISSN-L: 2072-4292