27 Aug 2015

A New Age of Global Securities

Author: Matt Hallas

These past few days I attended the ENVI Analytics Symposium and had the distinct pleasure of listening to some thought-provoking and inventive presentations from people in the geospatial industry trying to solve the big problems facing us today. These problems range from data bottlenecks that will only worsen with the continued deployment of new sensors, to locating and attempting to quantify human rights violations using satellite imagery. We have so much information at our fingertips at this point, but people are struggling to distinguish the data that can help solve a complex issue from the data that is simply taking up space on our storage devices.

One issue that has me pondering the future of our society is that of global security. The National Security Strategy for 2015 was published in February of this year, and the foreword by President Barack Obama highlights a major shift in the ideologies of global governments: "Moreover, we must recognize that a smart national security strategy does not rely solely on military power."

When people think about national security, they probably picture F-16s, Kilo-class submarines, and quantifiable military strength. We are shifting that paradigm to recognize that the climate, education, healthcare, and diplomatic strength of our country are an integral part of what makes up our total national security. This point was brought up by a man who knows a thing or two about our national strategy: former director of the National Geospatial-Intelligence Agency, Vice Admiral Robert Murrett.

Vice Admiral Robert Murrett (Ret.) moderated the "The Role of Analytics in Global Security Issues" panel at the 2015 ENVI Analytics Symposium.

After delivering the keynote address yesterday morning, Vice Admiral Murrett led a series of panel discussions that helped draw out the big issues facing global security. The panelists, Dr. Andrew Marx of Claremont Graduate University, Dr. John Irvine of the Charles Stark Draper Laboratory, and Dr. Alex Philp of Adelos, Inc., all work in the realm of global security and had some fascinating insight.

Dr. Marx's work focuses on monitoring human rights violations throughout the world using medium-resolution imagery from sensors such as Landsat. By developing a baseline average of what a pixel "looks like" over a number of years, his research team has been able to predict the location of SCUD missile attacks in Syria with 90% accuracy. Identifying the location of human rights violations as soon as possible can help with convictions for war crimes as well as the distribution of aid and support to the affected regions.
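
The core idea of that baseline approach, characterizing each pixel's typical behavior over a multi-year stack and flagging departures from it, can be sketched roughly as follows. This is only a minimal NumPy illustration under assumed inputs (a co-registered time stack and a single new image), not Dr. Marx's actual methodology:

    import numpy as np

    def flag_anomalies(time_stack, new_image, k=3.0):
        """Flag pixels whose new value departs from a multi-year baseline.

        time_stack : 3-D array (time, rows, cols) of co-registered band values
        new_image  : 2-D array (rows, cols) for the date being screened
        k          : number of standard deviations treated as anomalous
        """
        baseline_mean = time_stack.mean(axis=0)        # what each pixel "looks like"
        baseline_std = time_stack.std(axis=0) + 1e-6   # avoid division by zero
        z_scores = (new_image - baseline_mean) / baseline_std
        return np.abs(z_scores) > k                    # boolean anomaly mask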

Dr. Philp delivered a fascinating presentation titled "The Internet of Things," focused mainly on the massive increase in device connectivity expected in the coming 5-10 years. One of the biggest points Dr. Philp brought up in the panel, and the one that resonated most with me, is that "we don't need everything forever" and that eventually "we will run out of time". This connects directly with the concept of global security: if analysts are over-burdened with an overwhelming amount of information, they will be less effective at accomplishing their main task. Some sort of "probabilistic interpretation" of our data will be required in order to maintain the flow of information into products. The sheer amount of data we will be dealing with in the coming years is truly overwhelming, and it will be a necessity to filter out unhelpful data as early in the workflow as possible.

Dr. John Irvine then built on this concept to discuss how there needs to be much better coordination across analyses, so that when something of value has been discovered, the work is neither duplicated nor ignored. Overall, these four gentlemen helped shed light on the many issues that comprise global security and the work that will need to be done to assure global food and water security, among other factors.




25 Aug 2015

When Should I Correct My Imagery for Atmospheric Effects?

Author: Jason Wolfe

This is one of the most frequently asked questions that we receive, especially from those who are getting started in remote sensing. The answer depends on the particular application and sensor, but in general you should correct for atmospheric effects before doing any spectral analysis with optical imagery. In this article I will give some guidelines on when to apply atmospheric correction and the best methods for each application.

Atmospheric correction of optical imagery typically means removing the scattering and absorption effects of atmospheric gases and aerosols from a radiance image. The result is an apparent surface reflectance image, which can be used to extract accurate spectral information from features on the Earth's surface.

You can also calibrate imagery from some sensors to top-of-atmosphere (TOA) reflectance if sufficient metadata are available. See the article Digital Number, Radiance, and Reflectance for more information. Is it sufficient to calibrate an image to TOA reflectance without going through the process of creating a surface reflectance image? Look at the following WorldView-3 images, courtesy of DigitalGlobe. Both images were scaled from 0 to 1 in units of reflectance, and both are displayed with a 1% linear stretch: 

They are visually identical because only the RGB bands are displayed. However, a spectral plot of a single pixel reveals differences between the two images:

This difference illustrates the importance of removing atmospheric effects from a calibrated image. Here, the surface reflectance curve more accurately represents the characteristics of healthy vegetation with a steeper red edge curve near 700 nanometers, compared to the TOA reflectance curve.
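
For context, calibrating to TOA reflectance is a straightforward conversion from at-sensor spectral radiance using the band's solar irradiance, the Earth-Sun distance, and the solar zenith angle. Here is a minimal sketch of that standard conversion (variable names are illustrative, and the per-band metadata must come from the sensor provider):

    import numpy as np

    def toa_reflectance(radiance, esun, earth_sun_dist_au, sun_elevation_deg):
        """Convert at-sensor spectral radiance to top-of-atmosphere reflectance.

        radiance          : band radiance, W / (m^2 * sr * um)
        esun              : mean exoatmospheric solar irradiance for the band
        earth_sun_dist_au : Earth-Sun distance in astronomical units
        sun_elevation_deg : solar elevation angle from the scene metadata
        """
        solar_zenith = np.deg2rad(90.0 - sun_elevation_deg)
        return (np.pi * radiance * earth_sun_dist_au ** 2) / (esun * np.cos(solar_zenith))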

Next, we will look at more specific applications.

Classification and Change Detection

In general, atmospheric correction is unnecessary prior to unsupervised image classification or change detection. Chinsu et al. (2015) suggest that atmospheric correction will not improve the accuracy of results in land use and land cover (LULC) classification.

An article by Song et al. (2001) provides more detailed guidelines, suggesting that correction is unnecessary for classification and change detection except when training data from one time or place are applied in a different time or place. Even then, a dark subtraction method is often sufficient.
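
Dark subtraction is about as simple as atmospheric compensation gets: estimate the path radiance in each band from the darkest pixels and subtract it. A rough sketch of dark object subtraction follows (the percentile used to define "dark" is an assumption and is scene dependent):

    import numpy as np

    def dark_subtraction(image, percentile=0.1):
        """Subtract an estimate of atmospheric path radiance from each band.

        image : 3-D array (bands, rows, cols) of radiance or DN values
        """
        corrected = np.empty_like(image, dtype=np.float32)
        for b in range(image.shape[0]):
            dark_value = np.percentile(image[b], percentile)  # darkest pixels approximate haze
            corrected[b] = np.clip(image[b] - dark_value, 0, None)
        return corrected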

For supervised image classification, if you use spectral signatures from a spectral library as endmembers or training samples, atmospheric correction is usually required. This is because spectral libraries collected in the field contain surface reflectance measurements.

Image difference change detection between a Landsat TM image (1984) and Landsat-8 image (2013) of the Amazon rain forest. This process required radiometric normalization but no atmospheric correction.

Spectral Indices

Calibrating imagery to apparent surface reflectance yields the most accurate results with spectral indices. This is especially important for hyperspectral sensors. It also ensures consistency when comparing indices over time and from different sensors.

Some vegetation indices such as NDVI are more sensitive to atmospheric effects than others. For example, the Atmospherically Resistant Vegetation Index (ARVI) and related indices such as GARI and VARI were designed to minimize the effects of atmospheric scattering in the blue wavelengths. Atmospheric correction is unnecessary with these indices.
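
For comparison, NDVI uses only the red and NIR bands, while ARVI substitutes a red-blue combination for the red band so the index partially self-corrects for aerosol scattering. A small sketch of both (using the usual default of gamma = 1 for ARVI; the band arrays are assumed to hold reflectance values):

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def arvi(nir, red, blue, gamma=1.0):
        """Atmospherically Resistant Vegetation Index."""
        rb = red - gamma * (blue - red)   # red-blue term resists aerosol scattering
        return (nir - rb) / (nir + rb)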

When using multispectral imagery (e.g., Landsat TM) to compute spectral indices, a simple method such as QUAC® or dark subtraction may be sufficient to account for atmospheric effects (Hadjimitsis et al., 2010). Some may prefer a more rigorous, model-based method such as FLAASH®. These tools are part of the ENVI Atmospheric Correction Module and should be used with super-spectral imagery such as WorldView-3, and with hyperspectral imagery.

Green Vegetation Index (GVI) image created from a Landsat-8 scene of the Central Valley, California, 21 May 2015. The original scene was calibrated to spectral radiance, then QUAC atmospheric correction was applied before computing GVI.

Material Identification

Hyperspectral and super-spectral sensors are used for applications such as material identification. When analyzing imagery from multispectral sensors such as Landsat TM or GeoEye, atmospheric effects are of lesser concern because the channels are designed to avoid atmospheric gas absorption features. However, hyperspectral and super-spectral sensors cover the entire visible and near-infrared spectrum, including those absorption features.

You can use QUAC or FLAASH to remove the effects of atmospheric scattering and gas absorption to produce surface reflectance data. See the Preprocessing AVIRIS Data Tutorial for an example.

QUAC is simple to use and produces accurate results in the following conditions:

  • The scene must have several diverse materials; QUAC will not work well in a homogeneous scene.
  • Pixels of ocean or large water bodies should be masked out first (a simple masking sketch follows this list).
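
Any simple water index can serve for that masking step before QUAC is run. For example, here is a hedged sketch using NDWI (the threshold is a scene-dependent assumption, and the band arrays are assumed to hold reflectance values):

    import numpy as np

    def water_mask(green, nir, threshold=0.0):
        """Return True where pixels look like open water (to exclude before QUAC)."""
        ndwi = (green - nir) / (green + nir + 1e-6)
        return ndwi > threshold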

The following plot compares reflectance values from different correction methods for a single pixel of healthy vegetation in an EO-1 Hyperion image. The two most rigorous methods, QUAC and FLAASH, create reflectance profiles that more accurately represent healthy vegetation. The absorption features in the FLAASH and QUAC results are closer to the reference spectrum than those from dark subtraction and TOA reflectance.

Many good resources are available that explain atmospheric correction in more detail. This article only touched on the subject, but hopefully you have some tips to get you started.

References

Chinsu, L., C.C. Wu, K. Tsogt, Y.C. Ouyang, and C.I. Chang. “Effects of Atmospheric Correction and Pansharpening on LULC Classification Accuracy Using WorldView-2 Imagery.” Information Processing in Agriculture 2 (2015): 25-36.

Hadjimitsis, D.G., G. Papadavid, A. Agapiou, K. Themistocleous, M.G. Hadjimitsis, A. Retalis, S. Michaelides, N. Chrysoulakis, L. Toulios, and C.R.I. Clayton. “Atmospheric Correction for Satellite Remotely Sensed Data Intended for Agricultural Applications: Impact on Vegetation Indices.” Natural Hazards and Earth System Sciences 10 (2010): 89-95.

Song, C., C. Woodcock, K. Seto, M.P. Lenney, and S. Macomber. “Classification and Change Detection Using Landsat TM Data: When and How to Correct Atmospheric Effects?” Remote Sensing of Environment 75 (2001): 230-244.


20 Aug 2015

Taking a deep dive into SAR data

Author: Rebecca Lasica

I feel lucky this week to be immersed in a SARscape class where we’ve been diving into the weeds of Synthetic Aperture Radar (SAR) analysis. Alessio Cantone is here, all the way from Italy, just to teach this class, and he has brought with him a level of knowledge and experience with SAR that we don’t get to see very often. So today I’d like to share with you just a couple of the things I’ve learned so far this week.

First, I have often wanted to take a deep dive into the question “What is coherence as it relates to SAR data?”, and yesterday we spent several hours on that very topic. It turns out that coherence is a measure of stability over time in both the amplitude and phase components of the signal. In other words, a coherence image shows us what has changed, and one of the greatest things about the “how” behind it is that we can see change in coherence images that we could never distinguish with our eyes in an optical image.
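
For readers who want the underlying math: coherence is typically estimated as the normalized complex cross-correlation of two co-registered single-look complex (SLC) images over a small moving window, giving values between 0 (fully decorrelated) and 1 (perfectly stable). A minimal NumPy/SciPy sketch of that estimator (the window size is an assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherence(slc1, slc2, window=5):
        """Estimate interferometric coherence between two co-registered SLC images."""
        cross = slc1 * np.conj(slc2)
        # Average the complex cross product over a local window (real and imaginary parts separately)
        cross_avg = uniform_filter(cross.real, window) + 1j * uniform_filter(cross.imag, window)
        power1 = uniform_filter(np.abs(slc1) ** 2, window)
        power2 = uniform_filter(np.abs(slc2) ** 2, window)
        return np.abs(cross_avg) / np.sqrt(power1 * power2 + 1e-12)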

 

Courtesy of Sarmap

Here’s how it works. First, let’s think about phase. Say you have a pulse with a 3 cm wavelength and you are imaging a forest. The wavelength is very similar in size to the leaves, so the signal will interact with the canopy. Due to wind and motion of the canopy, it is unlikely that the signal from time 1 to time 2 will be similar, and therefore coherence is low. Conversely, think about a manmade structure that does not move. The signal will likely be very similar from time 1 to time 2, so coherence will be high.

So far we have considered the phase component of the signal. The amplitude portion of the signal is a measure of how much energy is returned. This will be very high for manmade objects and very low for water, while vegetation gives a moderate but inconsistent return because the signal bounces around in the canopy.

A popular way to visualize the phase and amplitude components together is to load the coherence and amplitude products into different channels of the display. For example, by loading the coherence image into the red channel, the amplitude average into the green channel, and the amplitude variation into the blue channel, we come up with a false-color image like the one above.
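
A rough NumPy sketch of how such a false-color composite could be assembled from a coherence image and a stack of amplitude images is shown below. This is an illustration only, not the SARscape workflow itself, and the percentile stretch applied to each band is an assumption:

    import numpy as np

    def sar_false_color(coherence, amplitude_stack):
        """Stack coherence (red), mean amplitude (green), amplitude variation (blue)."""
        def stretch(band):
            lo, hi = np.percentile(band, (2, 98))            # simple 2% linear stretch
            return np.clip((band - lo) / (hi - lo + 1e-12), 0, 1)

        red = stretch(coherence)
        green = stretch(amplitude_stack.mean(axis=0))        # average amplitude over time
        blue = stretch(amplitude_stack.std(axis=0))          # amplitude variation over time
        return np.dstack([red, green, blue])                 # (rows, cols, 3) RGB array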

High coherence combined with high amplitude represents urban features, so urban areas appear yellow. Vegetation generally has low coherence and a consistent average amplitude without much amplitude variation, so it appears green. Features with low coherence, very low average amplitude, and perhaps some amplitude variation represent water, which appears very dark or even blue.

Overall, SAR data are highly complex but fascinating to work with and have relevant applications across industries including vegetation analysis, change detection, feature extraction, highly accurate terrain calculation, and many other use-cases. I look forward to learning more and now I better get back to class!


18 Aug 2015

Advanced LiDAR Analysis Improves City Infrastructure Management

Author: Patrick Collins

Cities all over the world share an interest in infrastructure management. The rising cost of maintaining ever-growing networks of roads, bridges, utility lines, and other infrastructure often leaves pieces of that network in disrepair due to lack of funding. In the future, municipalities will look to advanced geospatial analytics to reduce the financial and resource costs associated with monitoring and maintaining such a large infrastructure network.

Geographic Information Systems have historically been used to map the extent of city infrastructure; however, this practice can be static in nature. While it may capture an accurate representation of the current state of things, it still requires people to go out, check assets, and manually identify infrastructure in need of repair. Many current GIS practices don’t go far enough to reduce the monetary impact of monitoring and maintenance activities.

Improved data accuracy and analysis techniques, combined with a reduction in the cost of collection, have made it possible to conduct infrastructure management more efficiently, reducing the time and resources needed to identify pieces of infrastructure that need repair. As an example, ground-based LiDAR can now be used to identify power poles and to calculate a number of important attributes that are helpful for municipalities trying to track their poles.

Automated routines are used to identify power pole features from the point cloud by recognizing certain characteristics that are representative of that feature. A viewer then displays the subsetted point cloud, along with the ability to rotate the pole, zoom into it, or to show only the feature itself and not the ground points. The graph on the right shows the height and width of the feature. The viewer also displays relevant attribute information for the feature, including the pole’s height and the overall tilt of the pole.

This information is automatically generated by the routine, and it provides valuable information to city utility crews that need to assess which poles require repair because they are leaning too far to one side. The user can then click through the identified features and see the resulting metadata. This information can also be used to generate a shapefile of the pole that can be displayed in three dimensions. These shapes can then be plotted to create a heat map of an area, with poles identified that might need repair.
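
To give a sense of how a tilt attribute like this can be derived, one common approach is to fit the dominant axis of the pole's classified points and measure its angle from vertical. A minimal NumPy sketch of that idea (not the actual routine described above, and the input format is an assumption):

    import numpy as np

    def pole_tilt_degrees(points):
        """Estimate a pole's tilt from vertical, given its classified point cloud.

        points : (N, 3) array of x, y, z coordinates for a single pole
        """
        centered = points - points.mean(axis=0)
        # Dominant axis of the point cloud via SVD (first right-singular vector)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axis = vt[0] / np.linalg.norm(vt[0])
        # Angle between the pole's axis and the vertical (z) direction
        return np.degrees(np.arccos(abs(axis[2])))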

This is just one example of how advances in geospatial analysis can decrease the cost of monitoring city infrastructure. Geospatial analytics can also be used to track utility assets, identify potholes, determine bridges that need repair, and more. Contact us for more information on how we can design custom solutions for you or your municipality to better track your city’s infrastructure.

 


13 Aug 2015

A Look at Crowdsource Geospatial Data

Author: Tracy Erwin

Crisis Mapping

We are living in a changing information landscape associated with social media, high-speed networks and distributed information sharing from people all around the world.

There is a movement in open source software, with communities of thousands of people voluntarily contributing data. The contribution of geospatial data, what some call Volunteered Geographic Information, has raised concerns about data quality. However, this type of data benefits from being produced by end users with significant local expertise, rather than by a central authority (i.e., governments and businesses) that may not be aware of, or capable of detecting, changes in local environments.

OpenStreetMap (OSM), founded in 2004, is an example of people from across the globe working together to collect and contribute data to a free, editable map of the world. Its volunteers were instrumental in aiding the near-real-time crisis mapping of the 2010 Haitian earthquake, an effort that established a model for non-governmental organizations (NGOs) to collaborate with international organizations. Volunteers from OSM and Crisis Commons used pre-existing satellite imagery to map the roads, buildings, and refugee camps of Port-au-Prince in just two days, building a digital map of Haiti's roads. This became the backbone for software that helped organize aid and manage search-and-rescue operations.

Figure 1: A zoomed-in area of the former Haiti rendering on openstreetmap.nl. This custom rendering, set up by User:Ldp, shows damaged buildings and refugee camps mapped within OpenStreetMap using special GeoEye/DigitalGlobe imagery.

OSM also played a significant role in the Ebola virus epidemic in West Africa, where the locations of roads, towns, and buildings were unknown and there was an immediate need for geospatial data and maps.

Figure 2: Pascal Neis map showing OpenStreetMap activities during the West Africa Ebola outbreak.

We are witnessing a shift in how geographic information is created and shared, with contributions from passionate communities. Transparency and collaboration are expanding, as shown by the National Geospatial-Intelligence Agency (NGA) teaming with DigitalGlobe, ESRI, and OSM to support disaster relief efforts in response to the Ebola epidemic in West Africa (2014) and the Nepal earthquake (2015).

For the Ebola epidemic, NGA used ESRI’s ArcGIS Online, OpenStreetMap foundational data, and DigitalGlobe commercial imagery and human geography data sets to provide a public website with 500 data layers, more than 200 products, and about 70 applications. The website was viewed more than one million times between October 2014 and February 2015.

Figure 3: A DigitalGlobe WorldView-2 satellite image of Monrovia, Liberia, taken April 8, 2014, overlaid with DigitalGlobe Landscape + Human features. Photo credit: DigitalGlobe.

NGA followed its Ebola response model by launching a public website to assist with relief efforts the day after Nepal was struck by a devastating earthquake on April 25, 2015. The Nepal site hosts unclassified GEOINT data, products, and services. DigitalGlobe made its high-resolution satellite imagery available online, and volunteers tagged damaged buildings, roads, and other areas of major destruction using DigitalGlobe’s Tomnod crowdsourcing platform. In addition, DigitalGlobe and Exelis Visual Information Solutions (a subsidiary of Harris) put their new partnership to good use with Amazon instances that provided free access to the data and ENVI image analysis software for anyone who wanted to lend their image analysis skills to generate useful products for response and recovery efforts.

Figure 4: Photo courtesy of DigitalGlobe. The Tomnod team has released a Nepal earthquake data portal with a dynamic map of the latest crowdsourcing results.

Big Data

DigitalGlobe’s partnerships and its recent effort to open source MrGeo are making it easier for data scientists and engineers to apply their expertise to spatial data. Crowdsourcing is moving to the next level, beyond crisis mapping to big data analytics. In June, DigitalGlobe partnered with the United States Geospatial Intelligence Foundation (USGIF) to co-sponsor the first GEOINT-focused hackathon.

Participants were asked to apply their expertise to DigitalGlobe Geospatial Big Data to create an open-source solution. They were tasked with two goals: (1) expose their team’s thinking and build in hooks so that another team working with a different geography or outbreak could adapt the solution to a new set of conditions, and (2) determine why certain areas of West Africa were unaffected by the Ebola outbreak and predict where additional outbreaks might occur.

DigitalGlobe made its imagery, human geography, elevation data, geospatial social media, and OpenStreetMap features available via a set of open APIs. The first-place team’s solution focused on travel and revealed an “Ebola superhighway” along the coast of West Africa. See more details of the results here.

On the horizon

A crowdsource community collaborating to analyze massive amounts of distributed data and draw insights about a situation can deliver increased productivity and innovation. Tapping into this collective intelligence brings diverse perspectives, which are critical to moving innovation further, faster.

I suspect we will see more crowdsource problem solving as the amount of globally distributed data continues to grow at a rapid rate, and I look forward to this kind of collaboration solving tough, real-world problems.



© 2015 Exelis Visual Information Solutions