Author: Rebecca Lasica
Maybe I’m an optimist making a bold statement, but I truly believe that technologies that were once complex have made their way to the forefront of ease and usability. I’m talking about Easy Buttons, and there has never been a better time to build easy-button processing into your workflows! There are two different ways to implement an easy button. Let’s explore them below:
1. Add an Easy Button to your Desktop Interface: Adding a button to your ENVI Desktop is really quite simple. Just open IDL and select File > New ENVI Extension. Wrapper code is automatically generated for you, and comments direct you exactly where to place your processing code:
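For readers who have not tried this yet, here is a rough sketch of the kind of wrapper the File > New ENVI Extension template produces. The procedure names here (easy_button_extension) are placeholders, not the exact template output:

```idl
; Placeholder sketch of an auto-generated ENVI extension wrapper.
; Registers the extension in the ENVI Toolbox at startup.
pro easy_button_extension_extensions_init
  compile_opt idl2
  e = envi(/current)
  e.AddExtension, 'Easy Button', 'easy_button_extension'
end

; Entry point invoked when the user clicks the button.
pro easy_button_extension, event
  compile_opt idl2

  ; General error handler
  catch, err
  if (err ne 0) then begin
    catch, /cancel
    if obj_valid(e) then e.ReportError, 'ERROR: ' + !error_state.msg
    return
  endif

  ; Get the current ENVI session
  e = envi(/current)

  ; Insert your processing code here
end
```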
Let’s not skim over the processing code, as that’s where most of the intimidation occurs. The new ENVI Task API makes it easier than ever to string together processing tasks into a custom, reusable workflow. For example, say you want to perform radiometric calibration, dark subtraction atmospheric correction, and then ISODATA classification. You would use the RadiometricCalibration, DarkSubtractionCorrection, and ISODATAClassification tasks, respectively. Your code would look something like this:
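A sketch of chaining those three tasks with the Task API (the input filename is a placeholder; each task's output raster feeds the next task's input):

```idl
; Chain RadiometricCalibration -> DarkSubtractionCorrection ->
; ISODATAClassification using the ENVI Task API.
e = envi(/current)
raster = e.OpenRaster('mydata.dat')   ; placeholder filename

; 1) Radiometric calibration
calTask = ENVITask('RadiometricCalibration')
calTask.INPUT_RASTER = raster
calTask.Execute

; 2) Dark subtraction atmospheric correction
darkTask = ENVITask('DarkSubtractionCorrection')
darkTask.INPUT_RASTER = calTask.OUTPUT_RASTER
darkTask.Execute

; 3) ISODATA classification
classTask = ENVITask('ISODATAClassification')
classTask.INPUT_RASTER = darkTask.OUTPUT_RASTER
classTask.Execute

; Display the final classification result
view = e.GetView()
layer = view.CreateLayer(classTask.OUTPUT_RASTER)
```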
Once your code is in place, you just need to build your project, restart ENVI, and you are on your way!
2. Add an Easy Button to the Enterprise: The other way to implement easy buttons is in the enterprise. Similar to generating a New ENVI Extension, you can click a button to generate a New ENVI Task. This generates a place where you drop your processing code, as well as a JSON companion file describing your inputs, outputs, and parameters.
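To give a feel for that companion file, here is a rough sketch of what a task definition can look like. The names and descriptions below are hypothetical, and the exact schema varies by ENVI version, so treat this as illustrative rather than a template to copy verbatim:

```json
{
  "name": "ISODATAClassificationEasyButton",
  "baseClass": "ENVITaskFromProcedure",
  "routine": "isodata_easy_button",
  "displayName": "ISODATA Classification Easy Button",
  "description": "Runs calibration, dark subtraction, and ISODATA classification.",
  "version": "5.2",
  "parameters": [
    {
      "name": "INPUT_RASTER",
      "displayName": "Input Raster",
      "dataType": "ENVIRASTER",
      "direction": "input",
      "parameterType": "required",
      "description": "The raster to classify."
    },
    {
      "name": "OUTPUT_RASTER",
      "displayName": "Output Raster",
      "dataType": "ENVIRASTER",
      "direction": "output",
      "parameterType": "required",
      "description": "The classification result."
    }
  ]
}
```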
When you are finished, simply zip up your files and upload them to your ESE installation. Scroll up to see a screenshot of the ISODATA Classification Easy Button. Contact me for a live demonstration.
I’d love to hear what Easy Buttons you hope to implement this year!
Categories: ENVI Blog | Imagery Speaks
Author: Adam O'Connor
Recently I've been interested in the utilization of multispectral imagery acquired in the SWIR and LWIR wavelength regions when analyzing natural disasters such as forest fires. Obviously the thermal properties captured in the LWIR wavelengths help identify hotspots and distinguish high (colder) clouds from low (warmer) smoke. Furthermore, light in the SWIR wavelength region can penetrate haze and certain types of smoke. Consequently, SWIR-based imaging can provide the ability to "see through" smoke to better analyze the active portion of a forest fire and identify hotspots.
There's a wide variety of scenarios where remote sensing analysis can help in understanding a wildfire, ranging from during-fire disaster response support to post-fire forensic analysis. Obviously, in the during-fire scenario it is absolutely critical to acquire imagery, derive information, and get it into the hands of the personnel fighting the fire as soon as possible. Active forest fire analysis is where the airborne ISR systems and services provided by companies such as Range and Bearing excel, since imagery of a wildfire begins to lose its usefulness immediately after being acquired. Sensor platforms such as WorldView-3 also provide multispectral imagery covering the SWIR wavelengths, which can support during-fire intelligence (depending upon the data availability timeframe) as well as post-fire forensic analysis, as discussed in DigitalGlobe's recent blog post:
Revealing the Hidden World with Shortwave Infrared (SWIR) Imagery
Since I do not have access to RAB or WV-3 data of a wildfire at this time, I decided to see if I could find a Landsat 8 scene for one of the numerous wildfires that occurred in 2014. The Landsat 8 OLI/TIRS sensor platform, and the free availability of its data on USGS EarthExplorer, never ceases to amaze me: I was able to find a relatively cloud-free scene of the Chelaslie River fire in British Columbia acquired on 03 Aug 2014. Here is a screenshot of a simple RGB band combination image from this dataset:
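For anyone who wants to reproduce this kind of display programmatically, here is a minimal sketch, assuming a downloaded Landsat 8 scene (the scene filename is a placeholder, and the band indices assume the standard OLI band ordering in the multispectral raster):

```idl
; Sketch: open a Landsat 8 scene and display a SWIR-based false-color
; composite. Opening the *_MTL.txt metadata file returns an array of
; rasters; the multispectral raster is assumed to be the first element.
e = envi()
rasters = e.OpenRaster('LC80510222014215LGN00_MTL.txt')  ; placeholder
ms = rasters[0]

; SWIR2 / SWIR1 / Red composite: OLI bands 7, 6, 4
; (zero-based band indices 6, 5, 3)
swirComposite = ENVISubsetRaster(ms, BANDS=[6, 5, 3])

view = e.GetView()
layer = view.CreateLayer(swirComposite)
```

A composite like this is one common way to cut through smoke and highlight active fire fronts, since the SWIR bands are far less sensitive to smoke scattering than the visible bands.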
Author: Matt Hallas
This blog continues our discussion on the Spectral Hourglass Workflow available within ENVI.
After reducing the number of spectral bands, our next step involves reducing the data spatially so that we focus only on pixels that are pure. We want our workflow to be as efficient as possible, and ignoring pixels that do not contain pure endmembers will aid this effort. To accomplish this task, we can use ENVI to create a Pixel Purity Index (PPI) image.
In a Pixel Purity Index image, each pixel value corresponds to the number of times that pixel was recorded as extreme. The general purpose of the PPI image is to associate spatial information (pixel locations) with the probability that each pixel represents a pure image endmember.
The ‘DN’ (outlined in the red box) represents a specific pixel value, and the ‘Npts’ (blue box) values represent the total number of times that specific pixel value was found in the image. A high PPI value means the pixel was recorded as extreme in more iterations.
The goal of this step is to identify pixel values that occur with low frequency, because the endmembers (pure pixels) will most likely be represented by only a few pixels. By its nature, a typical hyperspectral scene will contain mainly mixed pixels, but there will be a small number of endmembers (pure pixels) that can be extracted to map their frequency in the image. This is a general rule of thumb, and it will vary greatly depending on the quality of the HSI data you receive and the general area where the data was collected. For instance, if you are in a mineral-rich area with large outcroppings of pure minerals, then you would most likely have a large frequency of endmembers, as there is not much mixing of minerals in that region.
Choosing a threshold lets you decide the point at which you deem a pixel to be truly spectrally unique. The smaller the threshold value you choose, the fewer pixels will be identified as pure (i.e., pixels will have to be more pure in order to project onto the tail of the histogram).
An appropriate image threshold must be determined empirically, which requires some trial and error. From the histogram above, you can see that the PPI image has a minimum value of 0 (pixels that were never identified as pure) and a maximum value of 61725 (pixels that were identified 61725 times during the PPI iterations). You can select a starting point for the threshold by choosing a value near the break in slope (maximum curvature) of the input histogram; in our case this is around 1000.
Using that value of 1000, we can apply a band threshold to the PPI image so that it includes only pixels with high pixel values and low frequency; in other words, we will be left with only spectrally pure endmembers. Determining the exact PPI threshold value is made very easy with the new ROI Tool. When you select a threshold, the ENVI display window dynamically updates to colorize the pixels that fall within the range of values stipulated in the Choose Threshold Parameters dialog (the red pixels in the attached figure). The ROI Tool dialog also updates with area information detailing how many pixels are contained within the newly created ROI. Given the nature of HSI data and pure endmembers, we know that the total number of pixels in the ROI should be no more than a few hundred. A final threshold value of 1400 leaves us with only 534 pixels, a reasonable number in our case.
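The thresholding step above can also be checked programmatically in IDL. This is a minimal sketch, assuming the PPI result has been saved to disk (the filename and variable names are placeholders):

```idl
; Sketch: count how many pixels pass a PPI threshold.
e = envi(/current)
ppiRaster = e.OpenRaster('ppi_image.dat')   ; placeholder PPI result file
ppi = ppiRaster.GetData(BANDS=0)

; Pixels at or above the threshold are our candidate pure endmembers.
threshold = 1400
pure = where(ppi ge threshold, count)
print, count   ; for the dataset in this post, 1400 yields 534 pixels
```

Sweeping `threshold` over a few values and watching `count` is a quick way to perform the trial and error described above without repeatedly opening the dialog.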
Once a threshold has been applied to the PPI image, the next step is to extract endmembers via the n-D Visualizer tool available in ENVI. Check our website again in about 8 weeks to see how this step is handled. As always, if you have any questions, please feel free to shoot me an email at Matt.Hallas@exelisinc.com.
Tags: ENVI, GIS, VIS, Exelis, News, Environmental Management, Remote Sensing, Image Analysis, Image Processing, Spectral Analysis, geospatial imagery, hyperspectral imagery, geospatial, environmental monitoring, data processing, Visualization, data analysis, Hyperspectral, earth observation, Absorption Features, 5.2
Author: Peter DeCurtins
Little noted during the holiday season was the passing of the 50th anniversary of the first flight (on December 22, 1964) of the remarkable Lockheed SR-71, the fastest air-breathing manned aircraft in history. It's almost hard to conceive of how advanced and ahead of its time the "Blackbird" was, especially considering that it has already been sixteen years since NASA retired the last one from the flight line.
Photo: U.S. Air Force photo by Tech. Sgt. Michael Haggerty/Public Domain
During the 1950s, Lockheed's famous Skunk Works had developed the high-flying but relatively slow U-2 to perform reconnaissance missions for the Central Intelligence Agency. After a U-2 piloted by Francis Gary Powers was shot down over the Soviet Union in 1960, the CIA returned to Lockheed and renowned aircraft designer Kelly Johnson with a request to come up with something that would be effectively invulnerable to the weapons of the era. After a relatively short period of time, the innovative A-12 had been developed. It would go on to provide the conceptual design basis for the SR-71.
SR-71 Assembly line at the Skunk Works Photo: CIA/Public Domain
Capable of velocities in excess of three times the speed of sound and cruising at altitudes greater than 85,000 feet, the SR-71 was operated in service between 1966 and 1998 by the United States Air Force to perform a strategic reconnaissance role, and between 1992 and 1999 by NASA as a high-altitude research platform. It was flown by a flight crew of two seated in tandem cockpits, with the pilot forward and the "Reconnaissance Systems Officer" monitoring and operating the sensor and electronic systems from the rear cockpit. The vehicle carried electronic countermeasures and implemented early attempts at stealth design to minimize its radar cross-section and evade interception, but its principal defense was simply the high speed and cruising altitude at which it operated. Many times it accelerated away from Surface-to-Air Missiles (SAMs) that had been fired at it. No SR-71 was ever shot down.
All sensors carried by the SR-71 were located either in the nose or in bays housed within the fuselage side elements known as chines. The nose section was detachable so that the vehicle could be quickly equipped with any one of several noses: an Optical Bar Camera, a nose containing either a Goodyear or Loral ground mapping radar, or an Advanced Synthetic Aperture Radar (ASARS I). Originally the chine bays housed the Operational Objective Cameras made by Hycon. These cameras had a 13-inch focal length and used 9x9 inch film. The OOCs were replaced in the early 1970s by the Technical Objective Cameras, manufactured by the Itek Corporation, with focal lengths of 36, 48, and eventually 66 inches. The chine bays also housed a number of SIGINT recorders to capture the electronic signatures of search radars and SAM systems as the aircraft flew overhead.
NASA recognized the value of the SR-71 as a testbed vehicle for high-speed, high-altitude aeronautical research. Operating from a base at NASA's Dryden Flight Research Center, the aircraft provided ideal environmental characteristics for research and experimentation in a variety of areas, including aerodynamics, thermal protection, propulsion, and atmospheric disciplines. NASA flew a series of flights using the SR-71 as a science camera platform, for example using an upward-looking ultraviolet sensor loaded into the nose bay to observe a number of celestial objects in regions of the UV spectrum unavailable to ground-based systems.
Photo: Judson Brohmer/USAF - NASA Website/Public Domain
The SR-71 was expensive to operate, and all of the remaining aircraft have long been retired and dispersed to different museums. Modern reconnaissance satellites carry much of the strategic reconnaissance load formerly shouldered by the SR-71, but the orbital characteristics of most satellites do not provide the flexibility to perform urgent reconnaissance tasks within short time windows. We now live in an age in which the use of unmanned aerial drones is exploding, especially in the area of reconnaissance. There may yet come a day when we see a true successor to this superlative airplane. Whatever form it takes, it's hard to imagine anything representing, fifty years from now, as big a leap forward as the Blackbird does today.
Author: Amanda O'Connor
Well, another year has come and gone, and hoverboards seem only marginally closer. Now is the time when people make lists and recall moments/favorites/trends of the past year, so here are my top 5 from 2014.
1) Cloud computing for remote sensing is getting closer. In the stove-piped world of remote sensing, there are so many data silos it’s staggering. Some organizations are really working to break these down, but by and large I still get data via FTP or DropBox. I’m not processing that data in a cloud architecture, but on my desktop. The ENVI task architecture makes it easier to deploy ENVI capabilities as web services and apps. There are some exciting developments coming in 2015 to make this more of a reality for ENVI and IDL customers, so stay tuned.
ENVI’s change detection and aggregation deployed as a web service on the Landsat data archive
2) Big data processing – whether it’s adding GPUs, CPUs, or Amazon Extra Large instances, processing and getting results from entire archives of data is truly possible and is happening. Not long ago the art of the possible was images of many gigabytes; now it’s thousands of terabytes, if an application smartly takes advantage of resources. And it doesn't have to be a supercomputer that fills many racks and rooms. If the disk space for the data is there, the pipes are big enough to move it around, and the processing capacity is there, this has a huge impact on high-resolution imagery projects at a global scale.
3) LiDAR feature extraction. My motto is now: if you can see it in your LiDAR data, whether ground-based or airborne, it’s extractable. 3-D feature extraction has huge potential in agriculture, urban planning, corridor mapping (electrical, transportation, etc.), non-destructive testing, and a myriad of other uses. There are out-of-the-box tools in ENVI LiDAR for trees, power lines, and buildings, but the sky is the limit once you get IDL involved.
4) UAS/UAVs – on the cusp of greatness. Integrating GPS/inertial measurement units, connecting that to the camera timing, fitting the camera/sensor, calibrating, testing, the actual flight mechanics, and being able to correct the data that is ultimately collected is a big task that can have really beneficial outcomes. Our professional services group is currently working with clients who need end-to-end pre-processing workflows for UAS. We’re finding that each system and requirement is unique, and while all the tools are there, working with the Exelis VIS team, which includes photogrammetrists and IDL/ENVI experts, can often save costs versus the trial and error of DIY.
5) More bands, more data, more pixels. DigitalGlobe’s WorldView-3 is up and running, promising 27 bands across the visible, near-infrared, and SWIR. Landsat 8 has fantastic and free world coverage. The WorldDEM from Airbus DS is the best I’ve seen and can complement any imagery collection. NASA’s Soil Moisture Active Passive (SMAP) sensor, which will study hydrology, is set to launch in January 2015. Microsats like Skybox and Planet Labs’ Flock 1 are on orbit and collecting data. There’s imagery for every price range and every purpose, and the great thing is you don’t have to be a remote sensing expert to use these datasets. ENVI can do much of the heavy lifting, and with ENVI web services, Exelis VIS can write them, or you can have your in-house expert write them; they can then be used at any technical skill level in your enterprise. Truly remote sensing for the masses.
I’m at the American Meteorological Society meeting in Phoenix this week, so if you’re there, say hello! @asoconnor