The Energetic Elephant...

Author: Rebecca Lasica

Perhaps a descriptive subtitle might read: Energetic = Energy, and Elephant = the proverbial “elephant in the room”. What I’m getting at is a question on my mind that should seem obvious but is otherwise elusive: “Why on Earth (no pun intended) is remote sensing not used more pervasively in energy sectors to provide valuable surface feature information that could help solve problems, increase margins, and improve safety?”


Granted, that is a pretty bold statement, and slightly inaccurate in the sense that remote sensing technology has been used over the years to help with mineral identification, mapping, and generalized resource and operations planning. However, it seems as though several factors have driven the industry toward traditional solutions, including sub-surface modeling with electromagnetic and gravimetric data, rather than utilizing surface information derived from remote sensing data and technologies.

For years this more traditional approach has been the de facto standard for many workflows, and with good reason. Low pixel resolution from available satellites has long been cited as a major limiting factor in extracting the scale and accuracy required for many applications. Other limiting factors have included: landcover obstruction of relevant surface features, low spectral resolution limiting the ability to identify distinguishing surface features, infrequent revisits impairing temporal analyses, and high cost for data with the spatial and spectral resolutions necessary for meaningful analysis.

Better data is becoming available to address these shortcomings. For example, the recent launch of Sentinel-1 and access to free SAR data will be a game-changer in the ability to measure and map land subsidence. The launch of micro-satellites by companies like Skybox Imaging will deliver very high revisit rates to enable temporal analyses, and the higher spectral resolution of DigitalGlobe's WorldView-3 imagery will provide more spectral insight. These are only a few of the new data sources out there. I have not even mentioned the use of LiDAR to enable very high-resolution surface modeling, or the impending explosion of the UAS industry, which promises to deliver spatial resolution beyond what we might have imagined.

My point is this: Perhaps it is time for the energy industries to take another look at remote sensing technologies – not only as an improved source of surface information, but as a valuable data layer to add to existing analytics. Together these pieces tell a more complete story that enables an organization to manage resources and grow its bottom line.

By the way – I am at the #EsriPUG (Petroleum User Group) for the next few days. Stop by booth 115 to say hello. I’d love to hear what your plans are for using remote sensing technologies in your applications.


Categories: Imagery Speaks





The Astrolib's PCA Routine and ROI Creation

Author: Barrett Sather

Code libraries are a wonderful thing. They allow people from all over the world to share their code, and their ideas, with others. From file openers to complex algorithms, if there is a documented method for doing something, there is most likely code for it somewhere out there. Today, I'd like to focus on using the Astrolib library in IDL along with the ENVI API to do some wicked cool analytics to generate Regions of Interest.

What is the Astrolib you ask? It is a large library of IDL routines that mostly pertain to astrophysics or astronomy. Some examples of the routines you will find in it are:

- FXREAD, FTINFO: FITS file readers and header parsers
- MOONPOS: Calculate the Right Ascension, Declination, and distance of the moon at a given date
- FREBIN: Expand or contract an image while conserving flux
- MATCH: Find the indices where the values of two vectors match

The full list of routines can be found at:


One piece of code that has been added to the Astrolib is the PCA routine. This procedure takes in multiple variables and performs Principal Component Analysis (a Karhunen–Loève Transform) on them. This is an orthogonal transform that ultimately decreases the data's dimensionality, making it easier to visualize correlations between variables. In an image, these correlations often appear as distinct materials in a scene once some sort of threshold is applied.
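As a minimal sketch of the routine's calling sequence, here is PCA run on two synthetic, strongly correlated variables (this assumes the Astrolib PCA procedure is on your IDL path; the data and variable names are made up for illustration, and the argument order follows the usage shown later in this post):

```idl
; Build two correlated variables with a little noise.
n = 1000
x = randomn(seed, n)
y = 0.9 * x + 0.1 * randomn(seed, n)
; (n, 2) array: n observations of 2 variables.
array = [[x], [y]]
PCA, array, eigenval, eigenvect, percentages, proj_obj, proj_atr
; percentages holds the share of variance in each component;
; for data this correlated, the first component should dominate.
print, percentages
```

Because the two inputs are nearly linear copies of each other, almost all of the variance collapses into the first principal component, which is exactly the dimensionality reduction described above.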

Below is an excerpt from a new piece of code used in the Exelis course: Extending ENVI with IDL. It shows how to use the PCA procedure on an image, and then uses the ENVIROI function to create regions of interest after thresholding the results. Of course, to run the code you'll first need to get the most recent version of the Astrolib, located on NASA Goddard's website at:


Above is a screenshot. Copyable code is at the bottom.

PCA runs the mathematical algorithm for a Karhunen–Loève Transform, and therefore requires a group of one-dimensional variables as input. To do this, the example uses the reform function in IDL before running PCA to get the image into two dimensions - the first being the flattened spatial dimension, the second being the bands. For this example, the bands of the image are the possibly correlated variables that will be run through PCA.

After PCA has run, the reform function is used again to get the image back into three dimensions, or image space. This image is then thresholded between values of .007 and .01 in an attempt to pull out pixels corresponding to sand. The blue band of the image is also thresholded at a value of 250 in order to exclude water pixels from the ROI.

This subject matter is covered in more depth in our Extending ENVI with IDL course. Basic programming experience is the only prerequisite. Information on this course as well as other ENVI courses can be found at:


Copyable code:

e = envi(/current)
view = e.GetView()
; Open a file in the ENVI default distribution (as an example)
file = filepath('qb_boulder_msi', root_dir=e.root_dir, subdirectory='data')
msi = e.OpenRaster(file)
layer = view.CreateLayer(msi)
; Subset the raster data, then reform into 2D (a collection of variables)
subset = msi.Subset(SUB_RECT=[500,450,724,599])
data = subset.GetData()
dims = size(data)
array = reform(data, [dims[1]*dims[2], dims[3]])
PCA, array, eigenval, eigenvect, percentages, proj_obj, proj_atr
pca_data = reform(proj_obj, [dims[3], dims[1], dims[2]])
sand_data = pca_data[1, *, *]
mask = (sand_data lt .01) + (sand_data gt .007) + (data[*, *, 0] gt 250)
; Get the locations of the thresholded pixels, and add them to an ROI
sand_pixels = where(mask eq max(mask))
xy_locations = array_indices(sand_data, sand_pixels)
sand_roi = ENVIROI(NAME='Sand', COLOR='Orange')
sand_roi.AddPixels, xy_locations[1:2, *], SPATIALREF=subset.spatialref
sand_roiLayer = layer.AddRoi(sand_roi)







Top Ten Weird Things I’ve Done While Working in Remote Sensing

Author: Amanda O'Connor

1. Found seals on icebergs—they look like big brown commas, and feature extraction works pretty well.


2. Looked for deer in thermal infrared images; these were still images. The people wanted to find them because the deer were traffic hazards. I guess they were hoping the deer wouldn't move. At that point in time, after collection, they had to drive the camera data to a lab to analyze it. By the time they went back to look, the deer weren't there anymore...

3. Met the Freedom Rodeo Queen of Lawton, Oklahoma, and her attendant while collecting field spectra for a calibration experiment. Was referred to as “the attendant” the rest of the trip.

4. Observed catfish ponds for algal contamination that can result in “Off Flavor” catfish.

5. Fixed that one troublesome pixel in my vacation photos with the ENVI Pixel Editor. It's a darn good thing I didn't know that existed in grad school. Anyone who has dealt with data that's as correlated as a shotgun blast knows what I'm talking about.

6. Threatened a group of tusked pigs with an LAI2000 Plant Canopy Analyzer while on a ground truthing mission in Brazil to verify Landsat and EO-1’s ability to estimate fractional canopy cover. I was told very seriously to urinate on them should I get cornered. In case you didn't notice, my name is Amanda.

7. Told people my spectrometer was a GPS so they'd stop asking questions about why I had a butter churn and was walking around an airport tarmac (pre-9/11). I was attempting to calibrate Landsat 5.

The other “attendant” with butter churn spectrometer

8. Spent time chasing AVIRIS—it’s not as romantic as it sounds.

9. Was taken to many welding shops, pawn shops, gun shops, fireworks stands, and junk yards by an account manager who once said, "I can turn half an hour early into 5 minutes late if you're not careful". After these interesting visits, I'd then sit down and talk very seriously about ENVI/IDL and solving people's problems with software, not about the items found at the aforementioned places.

10. Was told to degrade good imagery into bad imagery to see if bad imagery would work as well as good imagery.







Trying to Add Context to My Ski Vacation

Author: Mark Bowersox

Last week, my wife Kelle and I celebrated the engagement of two close friends during a backcountry ski trip to Francie's Cabin, a hut south of Breckenridge, CO. In the days before the trip, I had been listening to the book Age of Context by Robert Scoble and Shel Israel, who happen to be keynote speakers at this week's GEOINT conference. 

The book's premise is that five prominent elements of technology (mobile devices, social media, big data, sensors, and location-based services) are converging to transform user experiences in all areas of our lives. Companies can serve users better by knowing more about their environments: where they are, who they are with, what they're doing, what safety risks are present, and how they feel. The goal is to predict things like what they might do next, where they will go, what new safety risks will arise, and whether they will feel better or worse. Knowing these things ups the odds of delivering a satisfying solution or service.


So, I left Denver with the Age of Context on my mind. How would my use of technology and resulting user experience stack up against the Age of Context?

Drive to trailhead. The trailhead doesn't have an address, so everyone used some combination of an internet description and Google Maps to find the location. We left in separate cars from three separate locations, using iPhones to check traffic conditions, get driving directions, and coordinate status. We texted our locations (exits, mile markers, landmarks, etc.) and adjusted our paces to arrive at the trailhead together. We arrived within 15 minutes of each other.


In the Age of Context, our mobile phones would integrate into our vehicles' navigation and media centers. Our cars would be aware of each other via social networks, and each party's location and status would be communicated in a joint operational picture on our dashboards. Additionally, our vehicles would engage four-wheel drive before it was required, and we'd know immediately if someone in our party was stuck in the snow.

Hike to Hut. We started the hike to Francie's Cabin under sunny clear skies and heavy backpacks. About a quarter mile in we could take the short, hard route (steep), or a longer, easier route. We had previously decided the long easy route was the way to go based on a hardcopy US Topographic Map and Garmin GPS unit. But these technologies didn't have current conditions. Was there enough snow? Which route had the most shade (favors ski glide and skier thermoregulation)? How were other skiers on the trails feeling?


In the Age of Context, satellite imagery would be streamed to our mobile devices and integrated with our GPS position to illustrate the snow coverage ahead on the trail. Info from the mobile devices of other recent travelers would report current conditions to the rest of us in the area. We might choose the harder route if the snow was better, there was more shade, or we would get there faster.

Engagement. Shortly after we arrived at the hut, Brandon convinced Johanne to head out again to see some local scenery. Unbeknownst to Johanne, Brandon would propose and we would ready the hut for a celebration. As with any surprise, everyone needs to be in place and ready to yell when they walk through the door. We waited. We wondered. Some of us considered a nap, but were afraid to miss the action.


If we were already in the Age of Context, our mobile devices would have set up a geofence to alert us when Brandon or Johanne returned to the hut. Nappers would automatically be awakened by alarm and would know exactly when to get in place with the champagne uncorked and the video rolling.  

Backcountry skiing. The next day the goal was to ascend to approximately 13,000 feet and ski a south/southeast-facing slope to lower elevation and eventually back to the cabin. The number one goal was to safely navigate to the route, minimizing travel through avalanche-prone terrain. One contributor to avalanche risk is the steepness of the slope. To identify these areas, we brought US Topographic Maps with colored overlays of the avalanche-prone slope gradients. Brandon even had the slope factor overlay on his GPS unit - not too shabby!
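The slope overlays described above boil down to a simple calculation: derive slope angle from an elevation model and flag the band of angles where slab avalanches are most common (roughly 30 to 45 degrees). A minimal IDL sketch, using a synthetic placeholder DEM and an assumed 10 m grid spacing rather than real terrain data, might look like this:

```idl
; Synthetic placeholder elevation grid (meters) - not real terrain.
dem = smooth(randomu(seed, 200, 200) * 500., 15, /EDGE_TRUNCATE)
cell = 10.0                                  ; assumed grid spacing, meters
; Finite-difference gradients in x and y.
dzdx = (shift(dem, -1, 0) - shift(dem, 1, 0)) / (2 * cell)
dzdy = (shift(dem, 0, -1) - shift(dem, 0, 1)) / (2 * cell)
; Slope angle in degrees from the gradient magnitude.
slope_deg = atan(sqrt(dzdx^2 + dzdy^2)) * !RADEG
; Mask of cells in the avalanche-prone 30-45 degree band.
prone = (slope_deg ge 30.) and (slope_deg le 45.)
```

The `prone` mask is the kind of layer that gets colored and draped over a topo map or a GPS basemap; real products would of course start from a measured DEM and account for aspect, snowpack, and edge effects.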


The Age of Context skier would wear goggles that provide the slope overlay (and other factors) in their line of sight. The areas to avoid would appear in front of them as they scanned the landscape through the goggles. This 'augmented reality' view would be particularly useful for skiers who need to deviate from their planned route due to wind, unstable snow, or other issues in search of a new, but safe, path to the descent. In the event of an avalanche, the goggles would switch to a search mode to quickly account for other members of the party.

Back to work. To some people these ideas may seem far-fetched. To others, like the companies mentioned in Age of Context, this type of user experience is right around the corner. Later today I'll attend Scoble and Israel's talk at GEOINT. Hope to see you there.







GEOINT means exciting technology

Author: Rebecca Lasica

Beau Leeger, Manager of US Sales and Services at Exelis VIS, is guest blogging today about exciting technology that will be on display next week at GEOINT.

In just five days, GEOINT 2013* begins. The re-scheduling of the 2013 edition of my favorite conference allowed us to extend our cloud-based, on-demand geospatial offerings with some potentially game-changing technology. For several years now, I have watched the development of, and excitement around, the Ozone Widget Framework (OWF). To my delight, this technology was released to the general public in early 2013. We immediately went to work on using this flexible "widget"-based technology to host components for on-demand geospatial data exploitation. The resulting client stack includes widgets for accessing catalogs and performing advanced geospatial exploitation using ENVI-powered tools. There is even a widget that allows for web-based viewing of LiDAR point clouds. Within the framework, a user can interactively build a dashboard that hosts a functional geospatial exploitation application that runs and accesses data within the cloud. The power for anyone to build web-based, cloud-powered geospatial exploitation tools is now within reach.

I am most excited about the possibilities when these tools are hosted in a flexible, interconnected framework. The design intent of OWF was to bring sources of information from various agencies and contributors together to get a more complete view of a problem or situation. This original goal is now extended into the geospatial realm. The ability to bring all relevant data sources and exploitation together to solve difficult geospatial problems is within reach. Image scientists and researchers will have a framework to develop tools that can interoperate with tools developed by others. Analysts will be able to deploy these tools shortly after development to solve pressing, time-critical problems. The future of cloud-powered, web/mobile-based geospatial exploitation is suddenly much brighter.

What do you think about this exciting development? Experience this with us at GEOINT and let us know how it fits into your visions and aspirations for the future of geospatial exploitation.















© 2014 Exelis Visual Information Solutions