28 Jul 2015

Using ENVI and MODIS Imagery to Assess Drought Conditions

Author: Jason Wolfe

Satellite remote sensing can help us monitor drought over large areas. In this article, I will show how I used ENVI to look at drought-related spectral indices for California in 2011 (a normal precipitation year) and 2014 (a drought year).

From late 2013 to the present, California has faced a severe water shortage resulting from scarce precipitation and above-average temperatures. In the spring of 2014, the U.S. Drought Monitor showed all of California in the "Severe Drought" or higher category. Parts of California are still experiencing severe drought conditions today.

California drought severity maps, courtesy of the U.S. Drought Monitor (http://droughtmonitor.unl.edu)

We often think of drought as a period of abnormally low rainfall; however, it is more complex than that. Several environmental factors can lead to drought. When soil and vegetation give up water to the atmosphere (a process called evapotranspiration) while precipitation decreases over time, less moisture is available for vegetation uptake. In agricultural regions, this severely affects the livestock and people who depend on crops.

Because drought is associated with vegetation health, vegetation indices are often used to assess drought conditions. A commonly used index is the Normalized Difference Vegetation Index (NDVI). NDVI is not a direct indicator of drought, but it can help reveal the spectral response of vegetation stressed by low water intake.
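For reference, NDVI is computed from the red and near-infrared (NIR) reflectance bands:

NDVI = (NIR - Red) / (NIR + Red)

Values range from -1 to +1; healthy green vegetation typically falls between roughly 0.3 and 0.8, so a drop in NDVI over time can signal vegetation stress.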

NDVI remote sensing images are available on a regional to global scale. MODIS/Terra images are ideal because they provide a view of surface conditions over a large geographic area. At 500-meter spatial resolution, MODIS NDVI data can reveal patterns of vegetation health over county- or watershed-level extents.

I used the “Vegetation Indices 16-Day L3 Global 500” product (MOD13A1), which includes both NDVI and Enhanced Vegetation Index (EVI) images, averaged over 16-day periods. I downloaded a series of MOD13A1 image tiles covering most of California from the NASA Reverb/ECHO site, for April through June of 2011 (normal precipitation year) and 2014 (drought year).

I wrote a short batch script with the ENVI API that performed the following steps for each season of images (a simplified sketch follows the list):

  • Extracted the NDVI band
  • Reprojected the individual tiles from a sinusoidal projection to a Geographic WGS-84 projection
  • Created a mosaic from the tiles
  • Defined a spatial subset that included only the state of California and western Nevada
  • Constructed a time series of these mosaics
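
Here is that simplified sketch for one season of tiles. The file paths are hypothetical, the task names ('ReprojectRaster', 'BuildMosaicRaster') come from a recent ENVI release and may differ by version, and the subset bounds are placeholders:

  ; Sketch: process one season of MOD13A1 tiles (paths are hypothetical)
  PRO process_ndvi_season
    COMPILE_OPT IDL2
    e = ENVI(/HEADLESS)
    files = FILE_SEARCH('C:\data\MOD13A1\2014', '*.hdf')
    tiles = OBJARR(N_ELEMENTS(files))
    FOR i = 0, N_ELEMENTS(files)-1 DO BEGIN
      ; HDF files can open as multiple datasets; take the first (vegetation indices)
      raster = (e.OpenRaster(files[i]))[0]
      ; Keep only the NDVI band
      ndvi = ENVISubsetRaster(raster, BANDS=[0])
      ; Reproject from sinusoidal to Geographic WGS-84 (EPSG 4326)
      reproj = ENVITask('ReprojectRaster')
      reproj.INPUT_RASTER = ndvi
      reproj.COORD_SYS = ENVICoordSys(COORD_SYS_CODE=4326)
      reproj.Execute
      tiles[i] = reproj.OUTPUT_RASTER
    ENDFOR
    ; Mosaic the reprojected tiles
    mosaic = ENVITask('BuildMosaicRaster')
    mosaic.INPUT_RASTERS = tiles
    mosaic.Execute
    ; Spatial subset covering California and western Nevada (placeholder bounds)
    subset = ENVISubsetRaster(mosaic.OUTPUT_RASTER, SUB_RECT=[0, 0, 2999, 2999])
    subset.Export, 'C:\data\ndvi_2014_mosaic.dat', 'ENVI'
  END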

I displayed the NDVI images in ENVI and applied a color table to them. Here are some thumbnail images that show the seasonal time series for 2011 and 2014:

One of the most dramatic differences between 2011 and 2014 was in the southern part of the Central Valley in early spring: 

I also read a journal article by Zhang et al. (2013) that compared drought-related spectral indices derived from MODIS surface reflectance data. One of these is the Surface Water Capacity Index (SWCI; Du et al., 2007), which highlights surface soil moisture. Using MODIS reflectance bands, the SWCI equation looks like this:

SWCI = (Band 6 - Band 7) / (Band 6 + Band 7)

I was curious to see how this would compare with the NDVI images. I wrote another batch script with the ENVI API that used band math with the MODIS reflectance data (MOD09A1) to derive a time series of SWCI images. After displaying the images and applying a color table in ENVI, I could see some differences in soil moisture between 2011 and 2014, including this example:
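As an illustration, the core of that band-math step might look like the sketch below. It assumes MODIS bands 6 and 7 sit at zero-based indices 5 and 6 in the opened raster, and the file paths are hypothetical:

  ; Sketch: compute SWCI = (Band 6 - Band 7) / (Band 6 + Band 7)
  PRO compute_swci
    COMPILE_OPT IDL2
    e = ENVI(/HEADLESS)
    raster = e.OpenRaster('C:\data\mod09a1_2014_mosaic.dat')
    b6 = FLOAT(raster.GetData(BANDS=[5]))   ; MODIS band 6 (1628-1652 nm)
    b7 = FLOAT(raster.GetData(BANDS=[6]))   ; MODIS band 7 (2105-2155 nm)
    swci = (b6 - b7) / ((b6 + b7) > 1e-6)   ; guard against divide-by-zero
    ; Save the result as a new single-band raster with the same spatial reference
    out = ENVIRaster(swci, URI='C:\data\swci_2014.dat', SPATIALREF=raster.SPATIALREF)
    out.Save
  END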

Studying drought with remote sensing is a complex endeavor, and this article only touched on the subject using spectral indices. We could take this a step further by constructing a vegetation condition index (VCI) that normalizes NDVI on a pixel-by-pixel basis over time. Another option is to construct a temperature condition index (TCI) that normalizes MODIS land surface temperature measurements over time. These are all simple tasks when using ENVI's API and image-analysis tools.
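For reference, the VCI is typically computed per pixel from a multi-year NDVI record as

VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min)

where NDVI_min and NDVI_max are the lowest and highest NDVI values observed at that pixel over the record. The TCI follows the same pattern with the scale inverted, since hotter-than-usual surfaces indicate stress:

TCI = 100 * (T_max - T) / (T_max - T_min)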

References:

Du, X., S. Wang, Y. Zhou, and H. Wei. “Construction and Validation of a New Model for Unified Surface Water Capacity Based on MODIS Data.” Geomatics and Information Science of Wuhan University 32, No. 3 (2007): 205-207.

Karnieli, A., N. Agam, R. Pinker, M. Anderson, M. Imhoff, G. Gutman, N. Panov, and A. Goldberg. “Use of NDVI and Land Surface Temperature for Drought Assessment: Merits and Limitations.” Journal of Climate 23 (2010): 618-632.

Mu, Q., F. Heinsch, M. Zhao, and S. Running. “Development of a Global Evapotranspiration Algorithm Based on MODIS and Global Meteorology Data.” Remote Sensing of Environment 111 (2007): 519-536.

Zhang, N., H. Hong, Q. Qin, and L. Zhu. “Evaluation of the Visible and Shortwave Infrared Drought Index in China.” International Journal of Disaster Risk Science 4, No. 2 (2013): 68-76.

MODIS data are distributed by the Land Processes Distributed Active Archive Center (LPDAAC), located at USGS/EROS, Sioux Falls, SD. http://lpdaac.usgs.gov.


24 Jul 2015

Saving and Restoring ENVI Sessions

Author: Barrett Sather

I've been using the latest release of ENVI for a while now and have gotten used to the new bells and whistles. My favorite, though, is the ability to save your work! Now, if I can't finish a project in one sitting, I can save the current session and restore it later.

The mechanics of the save are quite simple: ENVI stores all of the open layers, files, ROIs, vectors, and so on in a text file in JavaScript Object Notation (JSON) format. Layer properties such as the loaded bands, brightness, and transparency are saved as well. This way, when you restore a previous session, ENVI knows the steps to take to get back to the state you were in at the time of the save. I like it. Elegant and simple. For more information on how to use Save/Restore Session, head to the page in our documentation center.
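To give a feel for the format, a saved session file might contain entries along these lines. This is an illustrative sketch only, not the exact schema ENVI writes:

  {
    "layers": [
      {
        "file": "C:\\data\\qb_boulder_msi.dat",
        "bands": [3, 2, 1],
        "brightness": 50,
        "transparency": 0
      }
    ],
    "annotations": ["SAVE ALL"],
    "view": {"zoom": 1.0, "rotation": 0.0}
  }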

A couple of things to remember when using this save mechanism:

  • ENVI restores only the files and properties like stretch and band selection. If you make changes to a shapefile or ROI, it is best to save those files as well as ENVI's state in order to get back the expected layers and files. In other words, Save Everything!
  • This method keeps your save files very small, since they contain only text. Because of this, ENVI has to restore all of the file connections and reload them into the display. It's a trade-off: a smaller save file means a longer restore time.
  • Not just raster layers are saved; even display tools like annotations and portals can be restored from the JSON save file.

Here's an example of an ENVI session that will be fully restored by saving, quitting the application, then restoring the session:


So what gets restored for this particular example?

  1. The two raster files in the display, with the same band combinations and properties.
  2. The region of interest over the building.
  3. The text annotation "SAVE ALL".
  4. The portal and its location.
  5. The positioning: the zoom level, center of the screen, and rotation.


This is a simple example of what this tool can do, as I set this view up in just a few minutes. If you've been working for an hour, though, and want to save your work for after lunch, or even until Monday, this is a safe way to do it without taking up much disk space.


23 Jul 2015

ENVI Tasks – So easy even I get it

Author: Joey Griebel

I am not a coder. I try to understand, but I often find myself scratching my head. Still, I see the value in tasks, and I understand that the sky is really the limit for what I can do with analysis on the web, or even more rapidly on the desktop. So how do I bridge the gap? ENVI Tasks. Below is a screenshot of the Task for Spectral Indices. We take the complication out of coding the process and define each step so there are no gray areas. It is literally copy, paste, update directories, compile, and run. Seems simple enough.
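To give a flavor of that copy-paste-and-run pattern, here is a minimal sketch using the SpectralIndex task to compute NDVI; the input file path is hypothetical, so update it for your own data:

  ; Sketch: run the SpectralIndex task to compute NDVI and load the result
  PRO run_spectral_index
    COMPILE_OPT IDL2
    e = ENVI()
    raster = e.OpenRaster('C:\data\qb_boulder_msi.dat')
    task = ENVITask('SpectralIndex')
    task.INPUT_RASTER = raster
    task.INDEX = 'Normalized Difference Vegetation Index'
    task.OUTPUT_RASTER_URI = e.GetTemporaryFilename()
    task.Execute
    e.Data.Add, task.OUTPUT_RASTER   ; add the result to the Data Manager
  END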

While there are over 100 tasks in ENVI+IDL currently, that is not the limit. From there, it only gets better. Add in someone who understands tasks and has a passion for what they do, and so much is possible. You can take an algorithm you read in a journal while flying back from a trade show and turn it into something like the Landsat Built-Up Land task.

(Image credit: Joe Peters' Built-Up Land task)

Now you can run an analysis with one click, or add it into a batch process, kick the script off, and take on huge datasets while you are away over the weekend. Take it one step further and deploy these simple tasks in a web instance, and now you can have users in the field running NDVI on a tablet while they are standing in an actual field.

(Image credit: Beau Legeer's ESE Spectral Indices web app)

ENVI Tasks really are that easy, and the possibilities for growing and deploying your analysis are truly endless.

ENVI Tasks, I get them. 


16 Jul 2015

Working with WorldView-3 SWIR Data in ENVI and New Horizons' View of Pluto

Author: Matt Hallas

Exploiting Short Wave Infrared Data

DigitalGlobe has been pushing the boundaries of commercially available satellite imagery for years, and the addition of the WorldView-3 sensor to their satellite constellation has image scientists giddy with excitement. Even though WorldView-3 has been in orbit for nearly one year at this point (August 13, 2014 launch date), there is not much information on the web regarding the use of the SWIR bands; we hope to rectify that.

SWIR extends beyond the near-infrared region of the electromagnetic spectrum and refers to non-visible light falling roughly between 1400 and 3000 nanometers. The benefits of collecting reflectance data at these wavelengths are vast, including improved atmospheric transparency, snow and ice distinction, smoke penetration, and the identification of man-made materials. For our purposes in this blog, we will highlight how SWIR data allows an analyst to easily distinguish between man-made materials; in our case, materials used for roofing.

When you look at an image in true color, you are viewing what the human eye would see from a plane or a helicopter, which is great for spatial context. However, the absorption features that define a material are often apparent only in the shortwave infrared region of the electromagnetic spectrum. You cannot see these differences in true color, but a sensor collecting data beyond the visible range will detect them.

The image below was created with a true-color composite of multispectral imagery (MSI) provided by DigitalGlobe over Fullerton, California. With the MSI it appears as though our two rooftops are made of the same white material. Even when we display a spectral profile of the two rooftops, it appears from the multispectral imagery that the pixels simply vary in brightness and contain similar absorption features. If only there were a way we could expand the extent of the x-axis to include more of the electromagnetic spectrum....


The image below was created by displaying SWIR 2 (1570 nm) as red, SWIR 1 (1210 nm) as blue, and SWIR 8 (2330 nm) as green, over the same extent as the image above. The pixel size is larger with SWIR data than with MSI data, but the added coverage of the electromagnetic spectrum in the SWIR data makes up for the pixel size.

The first apparent difference in the image below is that these two rooftops are very clearly different colors, due to their different reflectance values in the currently displayed SWIR bands. The roof on the right appears purple because those pixels have high reflectance values in the bands displayed as blue and red (SWIR 1 and SWIR 2 in our case). The roof on the left appears yellow because those pixels have high reflectance values in the bands displayed as red and green (SWIR 2 and SWIR 8 in our case). The difference in reflectance values between the SWIR 1 band and the SWIR 8 band is what allows us to see the difference in material type for these rooftops. Without a sensor that collects reflectance data beyond 1400 nanometers, we would have difficulty identifying the differences in these materials. You can easily tell from the spectral plots of these two roofs that they are made of very different materials. If we only had the MSI data, we would not have the same spectral detail and thus would have a more limited ability to distinguish between these man-made materials.
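In ENVI's API, loading such a band combination might look like the following sketch; the file name and zero-based band indices are assumptions, so check your metadata for the actual SWIR band order:

  ; Sketch: display WorldView-3 SWIR with SWIR 2, SWIR 8, SWIR 1 as R, G, B
  PRO display_swir_combo
    COMPILE_OPT IDL2
    e = ENVI()
    raster = e.OpenRaster('C:\data\wv3_fullerton_swir.dat')
    view = e.GetView()
    layer = view.CreateLayer(raster, BANDS=[1, 7, 0])
  END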



SWIR data allows you to see what the human eye cannot. Oftentimes an object or feature will appear homogeneous in multispectral imagery, but with shortwave infrared imagery the features are clearly different. The ability to augment your results with data covering more of the electromagnetic spectrum will only help you create better products down the road, and the ENVI suite of tools will help you exploit this added information.

To make this even more relevant, many of the images we will be seeing of Pluto and its moons, collected by NASA's New Horizons mission, will be collected within the SWIR range of the electromagnetic spectrum. In fact, the first detailed image released was a false-color composite created from SWIR bands, which helped to show the presence of large methane-ice deposits on the surface of Pluto. For more information on this mission and how infrared imagery will lead the way in determining the chemical composition of Pluto and its moons, go to the New Horizons page on the JHUAPL site.

The figure above is courtesy of NASA/JHUAPL/SwRI.


14 Jul 2015

A Quick Data Product Levels Primer

Author: Peter DeCurtins

Category Levels Describe the Degree of Data Processing Applied to an Image Product

In 1986, NASA defined a set of processing "levels" to classify the standard data products that were to be produced from remotely sensed data from its Earth Observing System. The idea was that the given level of any output product would indicate the type of data processing that had been applied in creating it, allowing the consumer of that product to know its appropriate uses. NASA set forth brief definitions of each level.

NASA Earth Science Division Operating Missions as of February 2, 2015 - NASA, Public Domain


One key aspect of this system is that each level is cumulative, deriving from the level below it and representing the prerequisite input to the processing needed to reach the level above it. Level 0 data is basically the raw, unprocessed instrument and sensor data. Although at that fundamental level it may be of use to someone who is interested in the calibration and sensitivity of the sensors that collected it, the main utility of level 0 data is as the raw source that is fed to the data processing chain to produce higher level output. Level 1 data can be reversed back to its level 0 state, and is the foundation for all higher level data sets that may be produced.

At level 2, data sets become directly usable for most scientific applications. These data sets may be smaller than the level 1 data they were derived from, as they may have been reduced in some aspect such as spatial extent or spectral range. Level 3 products tend to be smaller still, making them easier to handle, and the regular spatial and temporal organization of these data sets makes them appropriate for combining with data from differing sources. Basically, as you go up in processing level, the data sets grow smaller, but their value and utility for scientific applications increase.

The advantages of adopting a common set of processing levels to describe the types and degree of processing that an image has had applied to it quickly became clear. The practice seems to have grown to be universally adopted, though many variations do exist. In general, the following definitions are taken to be a standard of sorts:

Level 0: Raw instrument data, as collected by the sensor. Data in this state is not terribly useful, unless the focus of interest is the sensing instrument itself rather than the features recorded in the data.

Level 1A: This data has been corrected for variations in detectors across the sensor by applying equalization functions among the detectors, leveling the measurements made by the sensor. This radiometric correction includes absolute calibration coefficients, which can then be used to convert the digital numbers into radiance values.
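In its simplest form, that conversion is a linear function applied per band or per detector:

Radiance = Gain * DN + Offset

where Gain and Offset are the published absolute calibration coefficients.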

Level 1B: The next step is to apply measurable corrections to the image to address systematic geometric distortions inherent in the acquisition of the image by some sensors. This level is not necessary for other sensors that don't suffer from such systematic geometric error. Also, note that level 0 data cannot be recovered from 1B data.

Level 2A: These images have been systematically mapped into a standard cartographic map reference system. Such products are nominally referred to as being geo-referenced, but do not have a high level of accuracy.

Level 2B: To improve the spatial accuracy of an image, a more rigorous process involving considerable user input is required. Through the process of image rectification, an image analyst geo-registers the image by identifying specific points in the image that correspond to very well-defined geographic locations known as ground control points. With this processing completed, the image is geo-referenced accurately to the spatial resolution of the original data - in other words, limited only by the spatial resolution of the sensor - except in areas of high local topographic relief.

Level 3: In areas with a great deal of elevation relief, such as in mountainous areas, further corrections are required to obtain a more accurate spatial image. Level 3 products have gone through the process of orthorectification, which adjusts the image for distortions due to topographic relief, lens effects, and camera tilt. Level 3 data is uniform in scale and appropriate for use over large grid scales, such as in mosaics.

It is important to note that different systems are in place for different missions and data providers, for various reasons. Landsat 7, for example, designated level 1G for products that are geo-rectified with pixel values in sensor units. DigitalGlobe has an extensive system for categorizing product levels, going from level 1B 'Basic' through level 2A 'Standard' up to levels such as 3F (orthorectified imagery with a map accuracy representative fraction of 1:5000) and Stereo OR2A 'Ortho-Ready Standard', a stereo pair that is ready for the consumer to orthorectify according to their own processes and specifications.

