Author: Kate Heightley
Activity Based Intelligence (ABI) has been a hot topic in the Defense and Intelligence community for some time. The cloud is helping to make the vision for ABI a reality. ABI takes a number of pieces of information and weaves them together to create a consolidated, multi-dimensional picture of what has happened or is happening in an area of interest. For example, social media feeds, base maps, imagery, material detections derived from imagery, and tracks from Moving Target Indicator data all contain valuable information for situational awareness. When they can be analyzed and correlated together, the information provides value beyond that of each piece taken alone, as the pieces provide context and cues that can assist users in finding meaning across the data. Traditionally, much of this data has been held in discrete, isolated enclaves and analyzed by a single intelligence discipline to produce a singularly focused product. ABI promises to change this and break down the stovepipe effect with its multi-source, multi-discipline approach.
ABI requires collection of large amounts of data over time. As you can imagine, there is a significant amount of data processing and analysis required to build ABI products from all of this data. ABI analysts leverage big data tools and the cloud to identify trends and patterns of life across the data sets, creating a more complete operational picture of an area. This analysis provides immediate utility in the generation of ABI products, and it also builds up a backdrop of what is normal so that spotting anomalous activity is easier. Anomalies in the activities or patterns trigger additional analysis activities. For example, if an area that is normally heavily trafficked only between noon and 4 PM is suddenly showing significant activity between 7 and 10 PM, the change probably warrants additional investigation, which may include imagery collection, social media monitoring, and more.
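The pattern-of-life idea can be sketched in a few lines: accumulate per-hour baselines of activity counts, then flag hours whose current activity deviates sharply from the historical norm. The sketch below is an illustrative Python toy, not any particular ABI system's logic; the hours, counts, and threshold are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalous_hours(baseline, observed, threshold=3.0):
    """Flag hours whose observed activity deviates from the historical norm.

    baseline: dict mapping hour -> list of historical activity counts
    observed: dict mapping hour -> the current period's activity count
    Returns the hours whose count lies more than `threshold` standard
    deviations from that hour's historical mean.
    """
    anomalies = []
    for hour, count in observed.items():
        history = baseline.get(hour, [])
        if len(history) < 2:
            continue  # not enough history to judge this hour
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            if count != mu:
                anomalies.append(hour)
        elif abs(count - mu) / sigma > threshold:
            anomalies.append(hour)
    return anomalies

# An area normally quiet in the evening suddenly shows heavy activity:
baseline = {19: [2, 3, 1, 2], 13: [40, 45, 42, 38]}
observed = {19: 35, 13: 41}
print(flag_anomalous_hours(baseline, observed))  # [19]
```

Real ABI systems of course correlate far richer, multi-source data than simple counts, but the same build-a-baseline, flag-the-deviation logic is at the core.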
When the data and the analysis tools are hosted in the cloud, the data can be made more widely available and can be more easily correlated, while the analysis tools can leverage processing power and throughput available in the cloud. As ABI matures, it is expected that it will provide better intelligence and also lead to the evolution of additional disciplines helping to predict threats earlier.
Categories: Imagery Speaks
Tags: D&I, Defense & Intelligence, ABI
Author: Mark Bowersox
I've been reading a summary paper, "Crowdsourced Geospatial Data," that I found on the Defense Technical Information Center website. The paper discusses a variety of crowdsourcing projects and common methods for quality assessment of the collected data. There are many great takeaways in the paper, but it got me thinking about how robust imagery analytics might be applied to improve the collection of this volunteered geographic information.
The authors point out that people participating in these projects are not always experts. They do have a passion for contributing, whether it be adding roads and other primary features in Open Street Map, or using Tomnod to delineate storm damage areas or locate shelters used by internally displaced persons (IDPs). In any volunteer effort, it is essential to keep the passion for participation high while maximizing the quality of the results. For crowdsourcing tasks that rely on information extraction from imagery, robust analytics can assist non-expert volunteers in a couple of ways.
First, in the case of a natural disaster, the task is often to delineate the boundary of the event. You might think that damage areas are easy to spot in imagery, and often, they are. However, when asking a person to spend hours or days doing this work, we should make it easy (keep passion high!). Automated change detection techniques applied to pre- and post-disaster imagery can highlight the damage areas. These techniques can provide overlays on the imagery that make it easier to locate and subsequently trace damage areas. And for neighborhood scale objects like buildings, robust change detection may find damaged objects that a volunteer could miss.
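A minimal sketch of that change-detection idea, assuming co-registered, radiometrically comparable pre- and post-event images: simple per-pixel differencing with a threshold. Production tools use far more robust techniques, but the resulting overlay concept is the same. The arrays and threshold here are invented for illustration.

```python
import numpy as np

def change_mask(pre, post, threshold=0.2):
    """Highlight areas of change between pre- and post-event images.

    pre, post: co-registered single-band images as 2-D arrays scaled
    to [0, 1]. Returns a boolean mask that is True where the absolute
    per-pixel difference exceeds `threshold` -- a candidate overlay to
    guide volunteers toward likely damage areas.
    """
    diff = np.abs(post.astype(float) - pre.astype(float))
    return diff > threshold

# Toy 3x3 scene: one "building" pixel darkens sharply after the event.
pre = np.array([[0.8, 0.8, 0.2],
                [0.8, 0.9, 0.2],
                [0.2, 0.2, 0.2]])
post = pre.copy()
post[1, 1] = 0.3   # damaged structure
mask = change_mask(pre, post)
print(mask.sum())  # 1 changed pixel
```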
Second, the search for shelters or other indicators of displaced persons may require viewing thousands of satellite images. This is time consuming and taxes the volunteer's eyes. There is a category of robust remote sensing analytics referred to as broad area search methods. These methods utilize the spectral content common in today's commercial satellite imagery to home in on objects of interest. Algorithms for anomaly detection, spectral complexity, and material identification are examples. Running these analytics on satellite imagery produces a pixel map of where to look first. Again, this is an overlay that directs the volunteer to the parts of the image most likely to contain shelters (or other objects of interest) and often uncovers things that are invisible to strained eyes.
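One classic anomaly-detection technique in this category is the Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the scene's background statistics. The NumPy sketch below is a bare-bones illustration of the idea, not the implementation in any commercial package; the synthetic cube and "shelter" pixel are invented for the example.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global Reed-Xiaoli (RX) anomaly scores for a multispectral image.

    cube: array of shape (rows, cols, bands). Each pixel is scored by
    its Mahalanobis distance from the scene-wide mean and covariance;
    high scores mark spectrally unusual pixels worth a first look.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))  # regularized inverse
    centered = pixels - mu
    # Quadratic form (x - mu)^T C^-1 (x - mu) for every pixel at once
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Synthetic 4-band scene with one spectrally odd pixel at (5, 2):
rng = np.random.default_rng(0)
cube = rng.normal(0.3, 0.02, size=(8, 8, 4))
cube[5, 2] = [0.9, 0.1, 0.9, 0.1]   # a spectrally anomalous "shelter" pixel
scores = rx_anomaly_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # (5, 2)
```

Thresholding the score map yields exactly the kind of look-here overlay described above.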
Tags: tsunami, Anomaly Detection, Change Detection, disaster response, data analysis, geospatial data, multispectral, crowdsource
Author: Thomas Harris
Every December I look forward to the AGU Fall Meeting in San Francisco. It's always an amazing time, visiting the great city of San Francisco with more than 22,000 like-minded geo-science geeks.
This year, I'll be attending the 2013 AGU Fall Meeting with David Hulslander. Dave and I will both be presenting some of our recent work at Exelis and seeking opportunities to interact with all the geo-scientists who use IDL and ENVI in their work.
If you're going to AGU, please stop by one of our posters or talks, or send us a note on Twitter to schedule a meet-up.
On Monday (8:00am-12:20pm | ED11B-0745), December 9th, we'll be presenting 'Academic and Non-Profit Accessibility to Commercial Remote Sensing Software', which gives great background on Exelis' support of academic programs like NASA DEVELOP. Exelis is committed to supporting the academic and NGO communities, so if you need access to remote sensing and geospatial software tools, please stop by to speak with us.
Also on Monday (1:40pm-6:00pm | EP13A-0836), David Hulslander will be presenting some of his work comparing relative bathymetry derived across Landsats 5, 7, and 8, showing how improvements in the Landsat imaging sensors are leading to better analytical results.
On Tuesday (1:40pm-6:00pm | IN13A-1408), find me at my poster, where I'll be presenting some work spearheaded by Rahul Ramachandran and Manil Maskey at the University of Alabama in Huntsville on cloud-based collaborative scientific programming environments. This work is exciting because it empowers scientists to collaborate virtually on big-data processing jobs (and it uses cloud-based IDL and ENVI).
Finally, on Friday (11:05am-11:20am | IN52A-04), I'll be presenting a talk on exciting technology we've been applying to solve tough computational planetary science problems, 'Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction, and Transformations for Big Data'.
Tags: IDL, ENVI, EDU, AGU, Orthorectification, environmental monitoring
Author: David Hulslander
The AGU Fall Meeting is The Conference That Ate The Geosciences. All the big discoveries, results, and missions get rolled out here. It is a hub for human geospatial progress, from the sun, through the solar system and atmosphere, all the way to the core of the earth. It's a great boost to your science and your career, but it can be hard to navigate, especially for first time attendees. Here are some tips for how to get the most out of Fall Meeting.
With a conference this big, it’s easy to get overwhelmed. While everyone develops their own approach to managing their time there, here are some great ways to make the most of AGU Fall Meeting 2013 (#AGU13):
I’m excited to be going to Fall Meeting again, and I hope to see you there! I’ll be sure to be at the Landsat, GPM, coastal, polar, and Mars sessions, for a start. Follow my coverage of the conference on Twitter (@DavidHulslander), or my friend and coworker Thomas Harris (@t_harris). Or stop by and see me at my poster, EP13A-0836, “A Quantitative Comparison of Traditional and Image-Derived Bathymetry From Landsats 5, 7, and 8” from 1:40 to 6:00 PM on Monday! We have several other talks and posters next week, too.
Tags: AGU, Landsat, geospatial, geosciences
Author: Patrick Collins
I recently put ENVI LiDAR to the test by using it to extract a series of features from a LiDAR dataset and matching it up with some satellite imagery to see just how well it performed. The goal was to see just how well the polygons from the automatically extracted building footprints and trees would line up with what could be seen in the imagery. Below we can see a LiDAR collect over a portion of Longview, WA.
After running the automatic Feature Extraction process in ENVI LiDAR, we are presented with the features in QA mode, which allows the user to interactively correct anomalies in the extracted features: you can fix roof vectors, tree size, and elevation, as well as reclassify points and place buildings, trees, or power poles where you want them in the scene.
Once the features have been corrected, it's a simple click to push all of this derived data over to an ArcGIS® instance for further analysis, and to build out your geodatabase. Here we see the building footprints, tree locations, and elevation model displayed in ArcGIS.
The next step was to pull in some satellite imagery from the DigitalGlobe™ Global Basemap. The imagery depicted below provided a nice backdrop to visually assess the accuracy of the ENVI LiDAR feature extraction functionality. Once the data was brought in, I got a rough measurement of one of the trees in relation to the point representing the tree base, and created a buffer around the trees to depict the extent of crown coverage in the area. As you can see, ENVI did a pretty good job of capturing the building footprints and the location of the trees. The entire extraction process took a bit under 30 minutes, and while there were some discrepancies between the extracted features and the high resolution imagery, the speed of the algorithm, combined with the ability to manually fix small issues in the data, translates into a significant time savings over manually classifying and extracting features from LiDAR.
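For readers curious about the buffer step: a crown-extent buffer is essentially a circular polygon around each tree-base point, and its area can be summed across trees to estimate crown coverage. Here's a hypothetical pure-Python sketch of that geometry (ArcGIS's buffer tools do this, and much more, natively); the coordinates and radius are invented for the example.

```python
import math

def crown_buffer(x, y, radius, n_points=32):
    """Approximate a circular crown-extent buffer around a tree base point.

    Returns a list of (x, y) vertices forming a regular polygon of the
    given radius, centered on the tree base.
    """
    return [(x + radius * math.cos(2 * math.pi * i / n_points),
             y + radius * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

def polygon_area(vertices):
    """Shoelace formula -- lets you sum crown coverage across trees."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A tree with a 5 m crown radius:
ring = crown_buffer(0.0, 0.0, 5.0)
print(round(polygon_area(ring), 1))  # ~78.0, a regular-polygon approximation of pi*r^2 ≈ 78.5
```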
Finally, I was able to export all of my features to an ArcGIS geodatabase for later use, hosting on an ArcGIS for Server instance, or hosting on ArcGIS Online. What do you think? Are you involved in updating a city database with tree locations or building vectors? What other features would be useful to extract from a LiDAR dataset?
Tags: ENVI, Esri, ArcGIS, LiDAR, urban planning