Author: Mark Bowersox
I've been reading a summary paper, "Crowdsourced Geospatial Data," that I found on the Defense Technical Information Center website. The paper discusses a variety of crowdsourcing projects and common methods for quality assessment of the collected data. There are many great takeaways in the paper, but it got me thinking about how robust imagery analytics might be applied to improve the collection of this volunteered geographic information.
The authors point out that people participating in these projects are not always experts. They do have a passion for contributing, whether it be adding roads and other primary features in OpenStreetMap, or using Tomnod to delineate storm damage areas or locate shelters used by internally displaced persons (IDPs). In any volunteer effort, it is essential to keep the passion for participation high while maximizing the quality of the results. For crowdsourcing tasks that rely on information extraction from imagery, robust analytics can assist non-expert volunteers in a couple of ways.
First, in the case of a natural disaster, the task is often to delineate the boundary of the event. You might think that damage areas are easy to spot in imagery, and often, they are. However, when asking a person to spend hours or days doing this work, we should make it easy (keep passion high!). Automated change detection techniques applied to pre- and post-disaster imagery can highlight the damage areas. These techniques can provide overlays on the imagery that make it easier to locate and subsequently trace damage areas. And for neighborhood scale objects like buildings, robust change detection may find damaged objects that a volunteer could miss.
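As a rough illustration of the idea (not any particular production algorithm), a simple change mask over co-registered pre- and post-event images can be sketched in a few lines of NumPy; the threshold and toy data below are purely illustrative:

```python
import numpy as np

def change_mask(pre, post, threshold=0.2):
    """Flag pixels whose normalized brightness change exceeds a threshold.

    pre, post: 2-D float arrays of co-registered pre- and post-event
    imagery (single band), scaled to [0, 1]. The 0.2 threshold is an
    illustrative value, not a calibrated one.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    # Normalized absolute difference; the epsilon avoids divide-by-zero.
    diff = np.abs(post - pre) / (pre + post + 1e-6)
    return diff > threshold  # boolean overlay of likely change

# Toy example: a bright block "disappears" after the event.
pre = np.ones((4, 4))
post = pre.copy()
post[1:3, 1:3] = 0.2  # simulated damage
mask = change_mask(pre, post)
```

In practice the resulting mask would be rendered as a semi-transparent overlay on the post-event imagery, drawing the volunteer's attention to candidate damage areas rather than replacing their judgment.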
Second, the search for shelters or other indicators of displaced persons may require viewing thousands of satellite images. This is time-consuming and taxes the volunteer's eyes. There is a category of robust remote sensing analytics referred to as broad area search methods. These methods utilize the spectral content common in today's commercial satellite imagery to home in on objects of interest. Algorithms for anomaly detection, spectral complexity, and material identification are examples. Running these analytics on satellite imagery produces a pixel map indicating where to look first. Again, this is an overlay that directs the volunteer to the parts of the image most likely to contain shelters (or other objects of interest) and often uncovers things that are invisible to strained eyes.
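One classic anomaly detection technique in this family is the Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the scene's global background statistics. A minimal NumPy sketch (not the implementation used in any particular product) might look like:

```python
import numpy as np

def rx_scores(cube):
    """Reed-Xiaoli (RX) anomaly scores for a multispectral image.

    cube: array of shape (rows, cols, bands). Returns a (rows, cols)
    map of Mahalanobis distances from the global background mean;
    high values mark spectrally anomalous pixels worth inspecting first.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)                     # background mean spectrum
    cov = np.cov(pixels, rowvar=False)           # background covariance
    cov_inv = np.linalg.pinv(cov)                # pseudo-inverse for stability
    centered = pixels - mu
    # Per-pixel Mahalanobis distance: d_i = x_i^T C^-1 x_i
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Synthetic demo: a 4-band scene of background noise with one planted
# bright anomaly, which should receive the highest score.
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 0.1, (8, 8, 4))
cube[3, 5, :] = 5.0
scores = rx_scores(cube)
```

Production implementations typically use local (sliding-window) background statistics rather than global ones, but the global form above conveys the principle.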
Categories: Imagery Speaks
Tags: tsunami, Anomaly Detection, Change Detection, disaster response, data analysis, geospatial data, multispectral, crowdsource
Author: Thomas Harris
Every December I look forward to the AGU Fall Meeting in San Francisco. It's always an amazing time, visiting the great city of San Francisco with more than 22,000 like-minded geo-science geeks.
This year, I'll be attending the 2013 AGU Fall Meeting with David Hulslander. Dave and I will both be presenting some of our recent work at Exelis and seeking opportunities to interact with all the geo-scientists who use IDL and ENVI in their work.
If you're going to AGU, please stop by one of our posters or talks, or, send us a note on Twitter to schedule a meet-up.
On Monday, December 9th (8:00am-12:20pm | ED11B-0745), we'll be presenting 'Academic and Non-Profit Accessibility to Commercial Remote Sensing Software', which gives great background on Exelis' support of academic programs like NASA DEVELOP. Exelis is committed to supporting the academic and NGO communities, so if you need access to remote sensing and geospatial software tools, please stop by to speak with us.
Also on Monday (1:40pm-6:00pm | EP13A-0836), David Hulslander will be presenting some of his work comparing relative bathymetry derived across Landsats 5, 7, and 8, showing how improvements in the Landsat imaging sensors are leading to better analytical results.
On Tuesday (1:40pm-6:00pm | IN13A-1408), find me at my poster, where I'll be presenting some work spearheaded by Rahul Ramachandran and Manil Maskey at the University of Alabama in Huntsville on cloud-based collaborative scientific programming environments. This work is exciting because it empowers scientists to collaborate virtually on big-data processing jobs (and it uses cloud-based IDL and ENVI).
Finally, on Friday (11:05am-11:20am | IN52A-04), I'll be presenting a talk on exciting technology we've been applying to solve tough computational planetary science problems, 'Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction, and Transformations for Big Data'.
Tags: IDL, ENVI, EDU, AGU, Orthorectification, environmental monitoring
Author: David Hulslander
The AGU Fall Meeting is The Conference That Ate The Geosciences. All the big discoveries, results, and missions get rolled out here. It is a hub for human geospatial progress, from the sun, through the solar system and atmosphere, all the way to the core of the earth. It's a great boost to your science and your career, but it can be hard to navigate, especially for first time attendees. Here are some tips for how to get the most out of Fall Meeting.
With a conference this big, it’s easy to get overwhelmed. While everyone develops their own approach to managing their time there, here are some great ways to make the most of AGU Fall Meeting 2013 (#AGU13):
I’m excited to be going to Fall Meeting again, and I hope to see you there! I’ll be sure to be at the Landsat, GPM, coastal, polar, and Mars sessions, for a start. Follow my coverage of the conference on Twitter (@DavidHulslander), or my friend and coworker Thomas Harris (@t_harris). Or stop by and see me at my poster, EP13A-0836, “A Quantitative Comparison of Traditional and Image-Derived Bathymetry From Landsats 5, 7, and 8” from 1:40 to 6:00 PM on Monday! We have several other talks and posters next week, too.
Tags: AGU, Landsat, geospatial, geosciences
Author: Patrick Collins
I recently put ENVI LiDAR to the test by using it to extract a series of features from a LiDAR dataset and matching it up with some satellite imagery to see just how well it performed. The goal was to see just how well the polygons from the automatically extracted building footprints and trees would line up with what could be seen in the imagery. Below we can see a LiDAR collect over a portion of Longview, WA.
After running the automatic Feature Extraction process in ENVI LiDAR, we are presented with the features in QA mode. This mode allows the user to interactively correct anomalies in the extracted features. QA mode allows you to fix roof vectors, tree size, and elevation, as well as reclassify points, and place buildings, trees, or power poles where you want to in the scene.
Once the features have been corrected, it's a simple click to push all of this derived data over to an ArcGIS® instance for further analysis and to build out your geodatabase. Here we see the building footprints, tree locations, and elevation model displayed in ArcGIS.
The next step was to pull in some satellite imagery from the DigitalGlobe™ Global Basemap. The imagery depicted below provided a nice backdrop for visually assessing the accuracy of the ENVI LiDAR feature extraction functionality. Once the data was brought in, I got a rough measurement of one of the trees in relation to the point representing the tree base, and created a buffer around the trees to depict the extent of crown coverage in the area. As you can see, ENVI did a pretty good job of capturing the building footprints and the locations of the trees. The entire extraction process took a bit under 30 minutes, and while there were some discrepancies between the extracted features and the high-resolution imagery, the speed of the algorithm, combined with the ability to manually fix small issues that may arise with the data, represents a significant reduction in time compared with manually classifying and extracting features from LiDAR.
Finally, I was able to export all of my features to an ArcGIS geodatabase for later use, hosting on an ArcGIS for Server instance, or hosting on ArcGIS Online. What do you think? Are you involved in updating a city database with tree locations or building vectors? What other features would be useful to extract from a LiDAR dataset?
Tags: ENVI, Esri, ArcGIS, LiDAR, urban planning
Author: Joe Peters
Vegetation interacts with solar radiation differently from other natural materials. The vegetation spectrum typically absorbs in the red and blue wavelengths, reflects in the green wavelength, strongly reflects in the near-infrared (NIR) wavelengths, and displays strong absorption in wavelengths where atmospheric moisture is present. These unique properties have allowed spectral scientists to develop a number of vegetation indices (VIs) to aid in monitoring the health of vegetation. VIs are combinations of surface reflectance at two or more wavelengths designed to extract useful information about vegetation. More than 150 unique VIs have been developed and published in the scientific literature over the past several decades, yet many remain little known or under-used in the commercial, government, and scientific communities.
Perhaps the most common and most widely used VI is the Normalized Difference Vegetation Index (NDVI). The NDVI is a simple but effective index for quantifying vegetation, contrasting green-leaf scattering in the near-infrared wavelength against chlorophyll absorption in the red wavelength. The NDVI is defined by the following equation:
NDVI = (NIR – RED) / (NIR + RED)
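As a quick sketch, the formula maps directly to per-pixel array code; the reflectance values below are made-up examples, not measurements:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, per pixel.

    nir, red: reflectance values or arrays for the near-infrared and
    red bands. A small epsilon guards against division by zero over
    dark pixels where both bands approach zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)

# Healthy vegetation: strong NIR reflectance, strong red absorption.
ndvi(0.5, 0.08)   # roughly 0.72
```

Because the function accepts arrays, it applies unchanged to whole image bands, yielding an NDVI map in one call.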
NDVI values range from -1 to 1, with the common range for green vegetation falling between 0.2 and 0.8. While the NDVI is likely the most common VI, there are a number of others worth exploring when using satellite imagery to monitor vegetation. When choosing the appropriate VI, it is important to consider what you want to get out of your data. For instance, if you are interested in performing a fire fuel analysis, there are a number of VIs specifically designed to estimate the amount of carbon in dry states of lignin and cellulose. Dry carbon molecules are present in large amounts in woody materials and in senescent, dead, or dormant vegetation, and these materials are highly flammable when dry. Dry or senescent carbon VIs use reflectance measurements in the shortwave infrared range to take advantage of known absorption features of cellulose and lignin. One example is the Cellulose Absorption Index (CAI), which is useful for identifying exposed surfaces containing dry plant material. CAI is defined by the following equation:
CAI = 0.5(R2000 + R2200) - R2100, where Rλ is the reflectance at wavelength λ in nanometers
Values for this index range from -3 to more than 4, with the common range for green vegetation falling between -2 and 4. If you are interested in learning more about VIs and how you can use them to get the information you need from your data, you just might be in luck. I have been working with a colleague to put together a whitepaper that outlines 27 of the most commonly used VIs, and I will share it soon!
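For illustration, the CAI equation translates to a one-line function; the reflectance values below are invented examples, not measured spectra:

```python
def cai(r2000, r2100, r2200):
    """Cellulose Absorption Index from reflectance at 2000, 2100, 2200 nm.

    Implements CAI = 0.5*(R2000 + R2200) - R2100 as given in the post.
    Dry plant litter shows a cellulose absorption dip near 2100 nm, so
    the average of the two shoulder bands exceeds the band center and
    CAI comes out positive.
    """
    return 0.5 * (r2000 + r2200) - r2100

# Illustrative dry-litter spectrum: a dip at 2100 nm between two shoulders.
cai(0.32, 0.22, 0.30)   # 0.5*(0.32 + 0.30) - 0.22 = 0.09
```

A surface without the cellulose dip (flat across the three bands) would score near zero, which is what makes the index a useful discriminator for dry plant material.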
Tags: NDVI, vegetation analysis, environmental monitoring, vegetation