30 Oct 2014

Vacation Planning, Scientifically

Author: Amanda O'Connor

OK, so if it hasn't been made abundantly clear, I'm a nerd. I like order, predictability, and tools to help me create that. My husband and I were lucky enough recently to take a vacation to Norway and Iceland. Sitting on the beach sounds nice, but in practice most of our vacations end up being off-season trips to cold places with poor lodging and dining options and limited information about the weather.

Well, this trip was no different, except for the limited weather info bit. Like any proper space geek, I like auroras. There is an otherworldliness to them that never ceases to amaze. Seeing them is a bit of a challenge in the continental US. Since we had vacation hours to burn, we decided to head north and try our luck. But seeing an aurora on vacation, versus seeing one when you live in a country that regularly experiences them, requires planning and flexibility. There are four things you need to see an aurora:

1.    Extreme Northern or Southern latitude (though strong solar activity means auroras can be seen much further south or north)

2.    Darkness

3.    Solar Activity (even low level)

4.    Clear Skies

Darkness is easy to plan for -- use a sunrise/sunset calendar and a moonrise/moonset calendar (less moon is better), like the ones you can generate at http://www.sunrisesunset.com/predefined.asp. It's also available as an app in the Apple App Store, so you can have it on your iPad/iPhone for those times when you're in the middle of nowhere.

NOAA's Space Weather Prediction Center (SWPC) here in Boulder is my go-to for solar activity (and uses IDL for a lot of their visualization products): http://www.swpc.noaa.gov/SWN/index.html. The effects of a solar flare or coronal mass ejection (CME) take about three days to reach the Earth. The display of an aurora is also affected by the solar wind, the Earth's geomagnetic field, and other factors. SWPC has current space weather and predictions. It also provides information on the strength of the solar activity, where the auroral oval can be seen, and a lot of additional information you can totally geek out over. Stronger CMEs can result in a wider range of colors; the most typical aurora color is green.

OK, great: we have darkness and we know what's happening with the sun, but all of that is worthless if it's cloudy. Again, if you live in Reykjavik you can just pop out your door every night and take a look, but if you're in Iceland on vacation and want to see the Northern Lights, you have to check the weather and keep your itinerary flexible. While in Norway we just flat-out lucked out -- a great Northern Lights display despite the "cloudy" symbol at http://www.yr.no/ for our location. YR.NO is a GREAT website for international weather radar and precipitation prediction, but I was never able to get a cloud cover prediction from it. Iceland's Met Office (http://en.vedur.is/weather/forecasts/aurora/), knowing that aurora viewing drives tourism, has a fantastic cloud forecasting tool, which we used to get to clear skies and an amazing aurora experience. The graphic below shows the forecast for the various cloud layers, and that the southeastern part of Iceland has an excellent chance of aurora viewing this evening.

The Icelandic Met Office also has volcano activity, gas forecasts, seismicity, and pretty much anything else you want to know about current meteorological, geological, and hydrological conditions in Iceland. Finding this information was key to my husband, Adam O'Connor, getting pictures like this:

What excites me about information like this is pulling together numerous remote sensing data sources (STEREO, GOES, POES, NPP, SOHO, models, etc.) for something fun. While most of this information is made available in the name of public safety and risk mitigation, its second job gives people a chance to witness one of the great marvels of being a citizen of planet Earth.

Follow me @asoconnor


Categories: ENVI Blog | Imagery Speaks


28 Oct 2014

Spatial/Spectral Browsing and Endmembers

Author: Matt Hallas

The Spectral Hourglass Series: Part 2

Before jumping down the rabbit hole of the Spectral Hourglass Workflow, we must define the concept of an endmember, for endmembers are at the core of hyperspectral data analysis. Along with endmembers, we will discuss atmospheric correction and the need to convert our data to apparent reflectance in order to pursue quantitative analysis down the road.

Endmembers are defined as materials that are spectrally unique in the wavelength bands used to collect the image — that is, endmember spectra cannot be reconstructed as a linear combination of other image spectra. It is often desirable to find the pixels in an image that are the purest examples of the endmember materials in the scene. These pixel spectra can then be used to map the endmember materials in various ways. Throughout this blog series we explore two ways to determine the pixels representing the purest examples of each endmember.
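
To make the "linear combination" idea concrete, here is a tiny illustrative IDL sketch (the spectra and abundances below are invented for illustration, not taken from any real scene): a mixed pixel can be modeled as a weighted sum of endmember spectra, while a pure pixel's spectrum cannot be rebuilt this way from the other spectra in the image.

  ; Illustrative linear mixing model with made-up 5-band spectra.
  ; A mixed pixel is a weighted sum of endmember spectra, with the
  ; abundance fractions summing to 1.
  vegetation = [0.05, 0.08, 0.06, 0.45, 0.50]   ; endmember spectrum 1
  soil       = [0.15, 0.20, 0.25, 0.30, 0.35]   ; endmember spectrum 2
  water      = [0.02, 0.03, 0.02, 0.01, 0.01]   ; endmember spectrum 3

  abundances  = [0.6, 0.3, 0.1]                 ; fractional cover in one pixel
  mixed_pixel = abundances[0]*vegetation + $
                abundances[1]*soil + $
                abundances[2]*water

  PRINT, 'Modeled mixed-pixel spectrum: ', mixed_pixel
  ; An endmember is a spectrum that cannot be reproduced by any such
  ; combination of the other image spectra.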

An alternative to extracting endmember spectra directly from the image is to use laboratory or field spectra of the materials of interest to define the target for mapping or classification. One disadvantage of this approach is that it requires comparing lab/field spectra with image spectra. Image spectra — even after calibration and atmospheric correction — often have remnants and artifacts caused by the sensor, solar curve, and/or atmosphere specific to the image. Moreover, lab/field spectra are typically collected from much smaller samples than the pixel size of the image. Image-derived endmember spectra will therefore be more comparable with other pixel spectra in the image. Consequently, using image-derived endmember spectra to define the materials of interest can often lead to better results when looking for those materials of interest throughout the image.

This brings us to the first step in the Spectral Hourglass Workflow: Spatial/Spectral Browsing. Preprocessing for nearly all raster images includes the same two initial steps: Radiometric Calibration and Atmospheric Correction. Beyond these preprocessing steps you may want to orthorectify the image, or perhaps mosaic it, but these first two steps are required if you wish to perform quantitative analysis later on.

ENVI offers several tools to convert your data to apparent reflectance: Dark Object Subtraction, Quick Atmospheric Correction (QUAC), and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH). As the name suggests, FLAASH was developed with hyperspectral data in mind; it is a sophisticated radiative transfer program that converts both multispectral and hyperspectral data to reflectance. It incorporates the MODTRAN radiative transfer code, modeling atmospheric properties and the solar irradiance curve. Water vapor amounts are calculated on a pixel-by-pixel basis using the 1135, 940, or 820 nm water absorption feature.
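
As a rough sketch of what these two preprocessing steps can look like when scripted against the ENVI API (the task names 'RadiometricCalibration' and 'QUAC', and the file path, are assumptions here; check the task catalog for your ENVI version and sensor):

  ; Sketch only: calibrate to radiance, then run QUAC to get apparent
  ; reflectance. Task names and the input path are assumptions.
  e = ENVI()
  raster = e.OpenRaster('C:\data\hyperspectral_scene.dat')   ; hypothetical file

  ; Step 1: radiometric calibration (DN -> radiance)
  calTask = ENVITask('RadiometricCalibration')
  calTask.INPUT_RASTER = raster
  calTask.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  calTask.Execute
  radiance = calTask.OUTPUT_RASTER

  ; Step 2: atmospheric correction (radiance -> apparent reflectance)
  quacTask = ENVITask('QUAC')
  quacTask.INPUT_RASTER = radiance
  quacTask.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  quacTask.Execute
  reflectance = quacTask.OUTPUT_RASTER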


Spectral Profile of Radiance Data

Spectral Profile of Reflectance Data

We convert to apparent reflectance in an effort to remove atmospheric absorption, scattering effects, and the solar irradiance curve. In theory, apparent reflectance means we are looking at the reflectance of materials at the Earth's surface, rather than through layers of atmosphere all the way up to the sensor. In essence, each pixel's spectrum will more accurately represent the patch of the Earth's surface that the pixel covers. When we extract our endmembers later in the workflow, we can compare them to library spectra and identify materials with similar spectral angles, allowing us to identify materials in the scene and estimate their total abundance.

Note that we will need to apply a scale factor when comparing our in-scene spectra to our library spectra, due to the controlled nature of laboratory measurements. Unlike our in-scene spectra, which are collected using the sun as a passive light source, laboratory spectra are collected with an active light source that does not vary in intensity. Our in-scene data will fluctuate much more by comparison.
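
To illustrate both points, here is a minimal IDL sketch (the spectra and the 10,000 scale factor are illustrative assumptions; many reflectance products are stored as scaled integers while library spectra sit in the 0 to 1 range) that rescales an in-scene spectrum and computes its spectral angle against a library spectrum:

  ; Sketch: rescale an in-scene spectrum, then compute the spectral angle
  ; against a library spectrum. All numbers below are illustrative.
  pixel_spectrum   = [1200., 1500., 1800., 4200., 4600.]  ; scaled reflectance (0-10000)
  library_spectrum = [0.11, 0.14, 0.19, 0.43, 0.47]       ; lab reflectance (0-1)

  pixel = pixel_spectrum * 1.0e-4   ; apply the scale factor so both are 0-1

  ; Spectral angle: the angle between the two spectra treated as vectors
  cos_angle = TOTAL(pixel * library_spectrum) / $
              (SQRT(TOTAL(pixel^2)) * SQRT(TOTAL(library_spectrum^2)))
  PRINT, 'Spectral angle (radians): ', ACOS(cos_angle)
  ; Smaller angles indicate more similar spectral shapes, independent of
  ; overall brightness differences.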

Once you have completed this first step within the Spectral Hourglass Workflow, it is wise to explore the image and the associated spectra within each pixel to detect the presence of man-made materials, minerals, vegetation, etc. found throughout your scene.

The next part of the blog series will focus on reducing the dimensionality of the dataset and separating signal from noise in our data.

If you have any questions please feel free to email me at matt.hallas@exelisinc.com.


21 Oct 2014

Creating a Custom Three-Dimensional Visualization with ENVI + IDL

Author: Joe Peters

This past week I decided to take some time to familiarize myself with some of the three-dimensional visualization tools available in ENVI + IDL. Within the ENVI user interface, users can very quickly build a three-dimensional visualization of a scene using the 3D SurfaceView tool, which is available in the ENVI Toolbox. This is a great tool and offers a number of handy surface and motion controls for customizing a three-dimensional visualization. I have worked with it quite a bit over the years, but had always been curious about what could be done with a three-dimensional visualization if I leveraged the power of IDL. IDL has a couple of particularly powerful three-dimensional functions that give users a tremendous amount of flexibility to create fully customized three-dimensional visualizations. In the example I discuss below, I used the CONTOUR function; the SURFACE function also offers some good options for creating three-dimensional surfaces in ENVI + IDL.

For this example, I downloaded a couple of 1/3 arc-second USGS DEMs from the National Map Viewer. I first mosaicked the two DEMs and then resampled them slightly to decrease the file size so that they would be more performant in IDL. I then wrote the code that produces the three-dimensional representation. The code opens the DEM file, extracts information about the lat/lon extent and pixel size, then draws a grid upon which elevation values can be plotted. Once I figured out how to get this to work, I chose an IDL color table to display elevation ranges, added contour lines, and made some fine-tuning adjustments to the axis labels. There are tons of different adjustments that can be made, so building a custom three-dimensional visualization really does let users use their imagination. For reference, my code is shown below:
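
(The original code is not reproduced in this extract, so here is a minimal sketch of the approach described above; the file name, spatial-reference handling, color table number, and contour levels are placeholders rather than the original values, and a standard north-up spatial reference is assumed.)

  ; Sketch: read a DEM with the ENVI API, build lon/lat vectors from its
  ; spatial reference, and draw a 3-D contour of elevation in IDL.
  ; File name and display settings are placeholders.
  e = ENVI(/HEADLESS)
  dem  = e.OpenRaster('C:\data\usgs_dem_mosaic.dat')   ; hypothetical mosaicked DEM
  elev = dem.GetData(BANDS=0)

  sr  = dem.SPATIALREF
  lon = sr.TIE_POINT_MAP[0] + FINDGEN(dem.NCOLUMNS) * sr.PIXEL_SIZE[0]
  lat = sr.TIE_POINT_MAP[1] - FINDGEN(dem.NROWS)    * sr.PIXEL_SIZE[1]  ; latitude decreases with row

  ; PLANAR=0 draws the contours in 3-D space instead of projecting them
  ; onto a plane; RGB_TABLE selects an IDL color table for the levels.
  c = CONTOUR(elev, lon, lat, PLANAR=0, /FILL, N_LEVELS=25, RGB_TABLE=34, $
              XTITLE='Longitude', YTITLE='Latitude', TITLE='Elevation (m)')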

The nice thing about this code is that replicating it, or something similar, with other DEMs should be fairly easy. All it would take is changing a couple of lines of code to match the extent of the new image and making some adjustments to the axes. The result of running this code on my DEM is shown below.

There are a lot of pretty cool things that can be done with the three-dimensional functions available in ENVI + IDL. For instance, in this example I plotted elevation values from a USGS DEM, but values that represent something completely different from elevation could just as easily be plotted along the z-axis. This could make for some pretty cool and informative visualizations.

I also decided to make a map of my area using ENVI and ArcMap interoperability. I then inserted an image of my three-dimensional plot into the map. I think it gives a unique view of the scene. You can check out my map below. If you would like to see more examples of what can be done with the SURFACE and CONTOUR functions, check out our Documentation Center for more Graphics Examples.


17 Oct 2014

Scalable Image Analysis for Tomorrow and Beyond

Author: Rebecca Lasica

Live in the now or plan for tomorrow? Aren't we often told to do both? I've been thinking quite a bit about the present and the future as they relate to several topics on my front burner lately. With ENVI 5.2 released this week, these thoughts are especially relevant. New technology to offer a migration path from the desktop to the cloud is here, as are tools for spatio-temporal analysis and full motion video. It seems as though many aspects of image analytics are changing at once, so it seems appropriate to focus my blog this week on the migration path itself and how some of these technologies are positioning businesses to leap into the future. Here are some related questions I have entertained recently:


  • What can I do in the cloud that I can't do at the desktop? Or alternatively, can I do everything in the cloud that I can do at the desktop? This is probably the single most-asked question lately. The answer is largely: it depends. For the most part, yes -- the ENVI analytics you enjoy today can be accessed via an API that enables cloud processing. But digging a bit deeper, one should most definitely look at the new ENVI Tasks (see the short sketch after this list). These tasks take powerful analytics that were already available and expose them in a new -- and, in my opinion, much easier -- paradigm to implement. What does that mean? Prototypes that used to take me an hour or two to write now take me tens of minutes. I'm sure you will notice the same. If you don't, please give me a call.

  • What exactly is time-enabled data, and what can I do with it that is new? I love this question because I am so excited about spatio-temporal analysis. The time-enabled data I have seen come across my desk lately tend to come in three different forms. The first is the obvious: data that have time metadata, enabling one to sort through an image collection chronologically. Visualizing information over a period of time is a powerful tool; think about watching a field go from planting to maturity, or about the speed at which a flood can wreak havoc over a mountainside. Another, less obvious capability of the spatio-temporal analysis tools is the ability to sort information in any order you wish. Animating through a data collection in a certain order can shed light on information that might otherwise be missed. For example, imagine the ability to take several non-chronological frames over an area of interest and animate through them in sequence, perhaps skipping irrelevant information in between. With this capability, situational awareness takes on a new meaning.

  • And finally, one of the most popular uses of animating through a stack of data is to look at analysis products. For example, periodic MODIS temperature data can be analyzed to derive drought conditions over a particular area. This can be done with every platform revisit -- in this case, every 8 days. Viewing information such as drought conditions, vegetation health, water indices, or burn inform…
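
To give a feel for the ENVI Task pattern mentioned in the first bullet above, here is a short sketch (the task name, its parameters, and the file path are placeholders; the set of available tasks depends on your ENVI 5.2 installation):

  ; Sketch of the ENVITask pattern: look up a task, set its parameters,
  ; execute it, and use the result. Task name and path are placeholders.
  e = ENVI()
  raster = e.OpenRaster('C:\data\scene.dat')

  task = ENVITask('ISODATAClassification')   ; any installed task name
  task.INPUT_RASTER = raster
  task.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  task.Execute

  result = task.OUTPUT_RASTER
  view   = e.GetView()
  layer  = view.CreateLayer(result)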


Categories: ENVI Blog | Imagery Speaks


14 Oct 2014

Leveraging the Spatial, Spectral, and Temporal Value of a New WorldView

Author: Patrick Collins

A lot of people may not know it, but Exelis Inc. has designed the optical components for every single satellite that DigitalGlobe™ Inc. has in its constellation today. This includes QuickBird, WorldView 1, 2, 3, and 4, as well as IKONOS and GeoEye-1. This gives us a unique ability to incorporate sensor-specific camera models into our software to more accurately extract information from DigitalGlobe data.

ENVI takes advantage of unique characteristics of DigitalGlobe data in order to answer geospatial questions and to solve problems. Three of these characteristics that I'd like to cover in this blog are spatial, spectral, and temporal.

All of DigitalGlobe's satellites capture imagery at better than 1 meter resolution, with many of them capturing data at better than 50 cm resolution. A recent relaxation of operating restrictions on DigitalGlobe by the National Oceanic and Atmospheric Administration (NOAA) means that DigitalGlobe will soon be selling imagery at better than 50 cm resolution, which enables ENVI to extract more precise information from the data. The example below shows a three-dimensional depiction of a WorldView-2 derived digital elevation model (DEM) with a pan-sharpened false-color image overlaid on it.

The high spatial accuracy of DigitalGlobe data allows for the extraction of high resolution elevation models for a better understanding of on-the-ground conditions and terrain.

Another quality of the WorldView constellation is the unique set of spectral bands captured by the sensors. WorldView-2 was the first high resolution satellite to capture data across 8 different imaging bands, and WorldView-3 boasts an impressive 27 bands, earning it the title of the world's first super-spectral satellite. ENVI takes advantage of these bands by incorporating sensor-specific spectral indices that can be calculated easily from within the user interface. The latest release of ENVI includes 64 common spectral indices, 44 of which can be run against WorldView data. These indices make it easy to analyze things like soil moisture, water content in a scene, vegetative health, and more. Below we can see a WorldView Improved Vegetative Index overlaid on top of a pan-sharpened WorldView-3 image.
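
As a rough sketch of how one of these indices can be scripted through the ENVI API (the file path and index name below are placeholders; the valid index names depend on the bands present in the input raster):

  ; Sketch: compute a spectral index on a WorldView scene via ENVITask.
  ; File path and index name are placeholders.
  e = ENVI()
  raster = e.OpenRaster('C:\data\worldview2_scene.dat')

  task = ENVITask('SpectralIndex')
  task.INPUT_RASTER = raster
  task.INDEX = 'Normalized Difference Vegetation Index'
  task.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
  task.Execute

  ndvi  = task.OUTPUT_RASTER
  view  = e.GetView()
  layer = view.CreateLayer(ndvi)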

This index takes advantage of the spatial and spectral resolution of the satellite to help us visualize and extract fields or other vegetated areas that are healthy versus those that may need some extra love and attention. Also, full support for spectral libraries means that ENVI can use DigitalGlobe data to accurately target and identify materials such as crop type, mineral outcroppings, and more.

The final characteristic of DigitalGlobe data I wanted to highlight is the amazing temporal coverage they have over the entire world. The temporal completeness of the DigitalGlobe catalog means they have the data needed to see and quantify changes that occur in specific areas of the Earth. In the latest release of ENVI, we've created a Spatiotemporal toolbox that allows you to quickly and easily create raster time series from multiple images and to display those images as a function of time. Derived products can also be fed into the time series to show a specific analysis over time, or multiple time series can be run and linked together to show how two different image series interact with each other over time. We're really excited about the introduction of this capability into ENVI, and I look forward to seeing how we expand our understanding of temporal analytics in an effort to provide more robust solutions for the geospatial analyst.
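
Here is a hedged sketch of what building a raster time series might look like through the API (the task name 'BuildRasterSeries', its parameter names, and the file paths are assumptions; consult the task catalog shipped with ENVI 5.2):

  ; Sketch only: build a raster series from several time-tagged rasters.
  ; Task name, parameter names, and paths are assumptions.
  e = ENVI()
  rasters = [e.OpenRaster('C:\data\field_2014_05.dat'), $
             e.OpenRaster('C:\data\field_2014_06.dat'), $
             e.OpenRaster('C:\data\field_2014_07.dat')]

  task = ENVITask('BuildRasterSeries')
  task.INPUT_RASTERS = rasters
  task.OUTPUT_RASTERSERIES_URI = e.GetTemporaryFilename('series')
  task.Execute

  series = task.OUTPUT_RASTERSERIES
  PRINT, 'Rasters in series: ', series.COUNT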

As DigitalGlobe and Exelis Inc. work together to create the highest resolution, most spectrally unique satellite constellation in existence, our goal is to ensure that ENVI has all of the tools necessary to fully exploit these unique datasets and solve some of the world's toughest problems, geospatial and otherwise.

What do you think? What advantages do you see in the increased spatial, spectral, and temporal content being produced by DigitalGlobe today?


***This blog is based on a Webinar given October 14, 2014 in conjunction with DigitalGlobe Inc. To view the webinar, please feel free to visit http://digitalglobe.adobeconnect.com/p3oz5b2h67f/



© 2014 Exelis Visual Information Solutions