Author: Barrett Sather
Code libraries are a wonderful thing. They allow people from all over the world to share their code, and their ideas, with others. From file openers to complex algorithms, if there is a documented method for doing something, there is most likely code for it somewhere out there. Today, I'd like to focus on using the Astrolib library in IDL along with the ENVI API to do some wicked cool analytics and generate regions of interest.
What is the Astrolib, you ask? It is a large library of IDL routines that mostly pertain to astrophysics and astronomy. Some examples of the routines you will find in it are:
· FXREAD, FTINFO: FITS file openers and header parsers
· MOONPOS: Calculate the Right Ascension, Declination, and distance of the moon at a given date
· FREBIN: Expand or contract an image while conserving flux
· MATCH: Find the indices where the values of two vectors match.
The full list of routines can be found at:
One piece of code that has been added to the Astrolib is the PCA routine. This procedure takes in multiple variables and performs Principal Component Analysis (the Karhunen–Loève transform) on them. This orthogonal transform ultimately decreases the data's dimensionality, making it easier to visualize correlations between variables. In an image, these correlations often appear as distinct materials in a scene once some sort of threshold is applied.
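PCA itself is not specific to IDL. As a rough illustration of the underlying math (not the Astrolib routine itself, and using made-up toy data), here is the same transform sketched in Python with NumPy:

```python
import numpy as np

# Toy data: 100 observations of 3 variables, the first two strongly correlated
rng = np.random.default_rng(0)
x = rng.normal(size=100)
data = np.column_stack([x,
                        2 * x + rng.normal(scale=0.1, size=100),
                        rng.normal(size=100)])

# Center the variables, then diagonalize the covariance matrix
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by decreasing variance and project the data onto them
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]
projected = centered @ eigenvectors

# Fraction of the total variance carried by each principal component
percentages = 100 * eigenvalues / eigenvalues.sum()
```

Because the first two variables are correlated, most of the variance collapses into the first principal component, which is exactly the dimensionality reduction described above.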
Below is an excerpt from a new piece of code used in the Exelis course Extending ENVI with IDL. It shows how to use the PCA procedure on an image, and then uses the ENVIROI function to create regions of interest after thresholding the results. Of course, to run the code you'll first need to get the most recent version of the Astrolib, located on NASA Goddard's website at:
[Screenshot of the code; the copyable code is at the bottom of this post.]
PCA runs the mathematical algorithm for a Karhunen–Loève transform, and therefore requires a group of one-dimensional variables as input. To do this, the example uses IDL's REFORM function before running PCA to get the image into two dimensions: the first is a flattened spatial dimension (one entry per pixel), and the second holds the bands. For this example, the bands of the image are the possibly correlated variables that will be run through PCA.
After PCA has run, the REFORM function is used again to get the image back into three dimensions, or image space. This image is then thresholded between values of 0.007 and 0.01 in an attempt to pull out pixels corresponding to sand. The blue band of the image is also thresholded at a value of 250 in order to exclude water pixels from the ROI.
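For readers who want to trace the same flatten, transform, reshape, and threshold pattern outside of IDL, here is a sketch in Python with NumPy. The image dimensions, random data, and threshold values are invented for illustration; they are not the QuickBird values used in this post:

```python
import numpy as np

rng = np.random.default_rng(1)
cols, rows, bands = 50, 40, 4           # hypothetical image dimensions
image = rng.random((cols, rows, bands))

# Flatten the spatial dimensions: one row per pixel, one column per band
flat = image.reshape(cols * rows, bands)

# PCA on the centered pixel vectors (the bands are the correlated variables)
centered = flat - flat.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
projected = centered @ eigenvectors[:, order]

# Back to image space, then threshold the second principal component
pc_image = projected.reshape(cols, rows, bands)
component = pc_image[:, :, 1]
mask = (component > 0.0) & (component < 0.2)   # arbitrary demo thresholds

# Pixel coordinates of the surviving pixels, ready to feed an ROI
xy = np.argwhere(mask)
```

The IDL excerpt in this post does the same reshape bookkeeping with REFORM and ARRAY_INDICES, then hands the resulting pixel coordinates to an ENVIROI instead of collecting them with np.argwhere.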
This subject matter is covered more in depth in our Extending ENVI with IDL course. Basic programming experience is the only prerequisite. Information on this course as well as other ENVI courses can be found at:
e = envi(/current)
view = e.GetView()
;Open a file in the ENVI default distribution (as an example)
file = filepath('qb_boulder_msi', root_dir=e.root_dir, subdirectory='data')
msi = e.OpenRaster(file)
layer = view.CreateLayer(msi)
;Subset the raster data, then reform into 2D (a collection of variables)
subset = msi.Subset(SUB_RECT=[500,450,724,599])
data = subset.GetData()
dims = size(data)   ; dims[1] = columns, dims[2] = rows, dims[3] = bands
;Flatten the spatial dimensions: one row per pixel, one column per band
array = reform(data, [dims[1]*dims[2], dims[3]])
PCA, array, eigenval, eigenvect, percentages, proj_obj, proj_atr
;Reshape the projected data back into image space
pca_data = reform(proj_obj, [dims[1], dims[2], dims[3]])
sand_data = pca_data[*, *, 1]
mask = (sand_data lt .01) + (sand_data gt .007) + (data[*, *, 0] gt 250)
;Get the locations of the thresholded pixels, and add them to an ROI
sand_pixels = where(mask eq max(mask))
xy_locations = array_indices(sand_data, sand_pixels)
sand_roi = ENVIROI(NAME='Sand', COLOR='Orange')
sand_roi.AddPixels, xy_locations, SPATIALREF=subset.spatialref
sand_roiLayer = layer.AddRoi(sand_roi)
Author: Amanda O'Connor
1. Found seals on Icebergs—they look like big brown commas, and feature extraction works pretty well.
2. Looked for deer in thermal infrared images; these were still images. The people wanted to find them because the deer were traffic hazards. I guess they were hoping the deer wouldn't move. At that point in time, after collection, they had to drive the camera data to a lab to analyze it. By the time they returned, the deer were long gone...
3. Met the Freedom Rodeo Queen of Lawton, Oklahoma, and her attendant while collecting field spectra for a calibration experiment. Was referred to as “the attendant” the rest of the trip.
4. Observed catfish ponds for algal contamination that can result in “Off Flavor” catfish.
5. Fixed that one troublesome pixel in my vacation photos with the ENVI Pixel Editor. It's a darn good thing I didn't know that existed in grad school. Anyone who has dealt with data that's as correlated as a shotgun blast knows what I'm talking about.
6. Threatened a group of tusked pigs with an LAI2000 Plant Canopy Analyzer while on a ground truthing mission in Brazil to verify Landsat and EO-1’s ability to estimate fractional canopy cover. I was told very seriously to urinate on them should I get cornered. In case you didn't notice, my name is Amanda.
7. Told people my spectrometer was a GPS so they'd stop asking questions about why I had a butter churn and was walking around an airport tarmac (pre-9/11). I was attempting to calibrate Landsat 5.
The other “attendant” with butter churn spectrometer
8. Spent time chasing AVIRIS—it’s not as romantic as it sounds.
9. Was taken to many welding shops, pawn shops, gun shops, fireworks stands, and junk yards by an account manager who once said, "I can turn half an hour early into 5 minutes late if you're not careful." After these interesting visits, I'd then sit down and talk very seriously about ENVI/IDL and solving people's problems with software, not about the items found at the aforementioned places.
10. Was told to degrade good imagery into bad imagery to see if bad imagery would work as well as good imagery.
Author: Mark Bowersox
Last week, my wife Kelle and I celebrated the engagement of two close friends during a backcountry ski trip to Francie's Cabin, a hut south of Breckenridge, CO. In the days before the trip, I had been listening to the book Age of Context by Robert Scoble and Shel Israel, who happen to be keynote speakers at this week's GEOINT conference.
The book's premise is that 5 prominent elements of technology (mobile devices, social media, big data, sensors, and location-based services) are converging to transform user experiences in all areas of our lives. Companies can serve users better by knowing more about their environments: where they are, who they are with, what they're doing, what safety risks are present, and how they feel. The goal is to predict things like what they might do next, where they'll go, what the new safety risks are, and whether they'll feel better or worse. Knowing these things ups the odds of delivering a satisfying solution or service.
So, I left Denver with the Age of Context on my mind. How would my use of technology and resulting user experience stack up against the Age of Context?
Drive to trailhead. The trailhead doesn't have an address, so everyone used some combination of an internet description and Google Maps to find the location. We left in separate cars from three separate locations, using iPhones to check traffic conditions, get driving directions, and coordinate status. We texted our locations (exits, mile markers, landmarks, etc.) and adjusted our paces to arrive at the trailhead together. We arrived within 15 minutes of each other.
In the Age of Context, our mobile phones would integrate into our vehicle's navigation and media center. Our cars would be aware of each other via social networks, and each party's location and status would be communicated in a joint operational picture on our dashboards. Additionally, our vehicles would engage four-wheel drive before it was required, and we'd know immediately if someone in our party was stuck in the snow.
Hike to Hut. We started the hike to Francie's Cabin under sunny clear skies and heavy backpacks. About a quarter mile in we could take the short, hard route (steep), or a longer, easier route. We had previously decided the long easy route was the way to go based on a hardcopy US Topographic Map and Garmin GPS unit. But these technologies didn't have current conditions. Was there enough snow? Which route had the most shade (favors ski glide and skier thermoregulation)? How were other skiers on the trails feeling?
In the Age of Context, Satellite imagery would be streamed to our mobile devices and integrated with our GPS position to illustrate the snow coverage ahead on the trail. Info from the mobile devices of other recent travelers would report current conditions to the rest of us in the area. We might choose the harder route if snow was better, there was more shade, or we would get there quicker.
Engagement. Shortly after we arrived at the hut, Brandon convinced Johanne to head out again to see some local scenery. Unbeknownst to Johanne, Brandon would propose and we would ready the hut for a celebration. As with any surprise, everyone needs to be in place and ready to yell when they walk through the door. We waited. We wondered. Some of us considered a nap, but were afraid to miss the action.
If we were already in the Age of Context, our mobile devices would have set up a geofence to alert us when Brandon or Johanne returned to the hut. Nappers would automatically be awakened by alarm and would know exactly when to get in place with the champagne uncorked and the video rolling.
Backcountry skiing. The next day the goal was to ascend to approximately 13,000 feet and ski a south/southeast facing slope to lower elevation and eventually back to the cabin. The number one goal was to safely navigate to the route, minimizing travel through avalanche prone terrain. One contributor to avalanche risk is the steepness of the slope. To identify these areas, we brought US Topographic Maps with colored overlays of the avalanche prone slope gradients. Brandon even had the slope factor overlay on his GPS unit - not too shabby!
The Age of Context skier would wear goggles that provide the slope overlay (and other factors) in their line of sight. The areas to avoid would appear in front of him as he scanned the landscape with the goggles. This 'augmented reality' view would be particularly useful for skiers who need to deviate from their planned route due to wind, unstable snow or other issues in search of a new, but safe, path to the descent. In the event of an avalanche, the goggle would switch to a search mode to quickly account for other members of the party.
Back to work. To some people these ideas may seem far-fetched. To others, like the companies mentioned in Age of Context, this type of user experience is right around the corner. Later today I'll attend Scoble and Israel's talk at GEOINT. Hope to see you there.
Author: Rebecca Lasica
Beau Leeger, Manager of US Sales and Services at Exelis VIS, is guest blogging today about exciting technology that will be on display next week at GEOINT.
In just five days, GEOINT 2013* begins. The re-scheduling of the 2013 edition of my favorite conference allowed us to extend our cloud-based, on-demand geospatial offerings with some potentially game-changing technology. For several years now, I have watched the development of, and excitement around, the Ozone Widget Framework (OWF). To my delight, this technology was released to the general public in early 2013. We immediately went to work using this flexible "widget"-based technology to host components for on-demand geospatial data exploitation. The resulting client stack includes widgets for accessing catalogs and performing advanced geospatial exploitation using ENVI-powered tools. There is even a widget that allows for web-based viewing of LiDAR point clouds. Within the framework, a user can interactively build a dashboard that hosts a functional geospatial exploitation application that runs and accesses data within the cloud. The power for anyone to build web-based, cloud-powered geospatial exploitation tools is now within reach.
I am most excited about the possibilities when these tools are hosted in a flexible, interconnected framework. The design intent of OWF was to bring sources of information from various agencies and contributors together to get a more complete view of a problem or situation. That original goal now extends into the geospatial realm. The ability to bring all relevant data sources and exploitation tools together to solve difficult geospatial problems is within reach. Image scientists and researchers will have a framework to develop tools that interoperate with tools developed by others. Analysts will be able to deploy these tools shortly after development to solve pressing, time-critical problems. The future of cloud-powered, web- and mobile-based geospatial exploitation is suddenly much brighter.
What do you think about this exciting development? Experience this with us at GEOINT and let us know how it fits into your visions and aspirations for the future of geospatial exploitation.
Author: Joe Peters
The use of the internet to consume and display geographic information has evolved rapidly over the past several decades. The first maps to be displayed over the internet were displayed as static graphic images in formats like GIF, JPEG, or PNG inside an HTML page. This first step in the use of the internet to display geographic information, while important, did not offer the kind of functionality that we have come to expect of today's web mapping applications. Today, we expect our maps to be interactive. We want the ability to zoom in and out. We want the ability to turn various layers on and off so that we can see exactly what we are looking for. The ability to do these things is something that we have not only come to expect, but something that we have come to rely on. Interactive web maps are on our desktops, on our tablets, on our phones and even mounted in the dashboards of our cars.
What's interesting is that for a lot of people who actually work in the geospatial realm, the use of web-enabled GIS in our personal lives might actually be outpacing what we do with it at work. Particularly in the field of remote sensing, I think that performing image analysis on a traditional desktop setup is still pretty much the norm. But I have a feeling this is all going to change pretty quickly. Some recent projects that I have been involved with here at Exelis VIS have opened my eyes to the possibilities of what the future of web-enabled image processing might look like.

The advantages are clear. Web-enabled image processing allows users to take advantage of distributed data - meaning the data can be stored anywhere as long as you can access it on your network or over the web. Data might be sitting on your desktop, on a server within your network, or on a server on the other side of the world. You can use basemaps distributed by Esri® or other sources to display your data. You can pull in vector layers. You can catalog your data using a catalog such as Jagwire™. And using image processing capabilities, such as those found in ENVI Services Engine, you can run processing on whatever data you have access to. This "mashup" of data and data processing from a variety of sources is the future of GIS. What's exciting is that once you have configured your system, it's fairly easy to begin building custom web applications for displaying, processing, and sharing information derived from remotely sensed data. The ability to quickly ingest, process, and disseminate valuable information to end users is, in my opinion, what makes web-enabled image processing so exciting and a clear winner over traditional desktop image analysis methods.
The future of web-enabled image processing is looking bright. It's exciting to think about all of the applications for this type of technology. Just imagine how easy it would be to track changes to glaciers, monitor forest fires, or even keep track of changes during a natural disaster using this type of technology. With the availability of data rapidly increasing, web-enabled image processing presents a method for accessing data and performing analysis to get real-world answers to real-world problems in a quick and effective manner.