Author: Adam O'Connor
When I start a project that involves geospatial context, this is quite often the first question that pops into my head. When working in the field this question is most commonly answered with a hand-held GPS device, a smartphone with location services, or a similar tool. But the extent of my "working outside" these days seems to be limited to the walk from the parking lot into the airport terminal. Instead, this question most frequently arises when I am sitting at my desk using geospatial software and open datasets for the first time without prior knowledge of their exact geographic location or feature content.
In some cases there is a clue in the file or folder name, metadata, or other project information I was given that provides some geopositional context (especially if the data covers a populated area). More often I find myself working with a collection of datasets and all I know is that they are located in "California's Central Valley" or the "rain forest in Brazil". Another scenario where this question frequently arises is exploring data in a massive historical archive. Admittedly, an enterprise-scale data cataloging and management system would be of great utility in that situation, but that's a subject for another blog post.
For example, I recently encountered a folder containing multiple files named "LC81520282013319LGN00". What I know from the filename structure is that it's a Landsat 8 OLI + TIRS scene, path 152, row 28, acquired in 2013 on the 319th day of the year. I'm sure some of you have the WRS path/row grid memorized, but I don't have the foggiest clue where this dataset is located. What I really need is a way to quickly open the dataset, view the imagery, and perform a quick visual interpretation of the features on screen.
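If you prefer to let the computer do the decoding, here is a minimal IDL sketch of how those fields can be pulled out of a pre-Collection Landsat scene identifier (the variable names are purely illustrative):

    ; Decode a pre-Collection Landsat scene ID such as 'LC81520282013319LGN00'
    scene = 'LC81520282013319LGN00'
    path = fix(strmid(scene, 3, 3))    ; WRS-2 path        -> 152
    row  = fix(strmid(scene, 6, 3))    ; WRS-2 row         -> 28
    year = fix(strmid(scene, 9, 4))    ; acquisition year  -> 2013
    doy  = fix(strmid(scene, 13, 3))   ; day of year       -> 319
    print, path, row, year, doy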
When determining dataset geolocation on a global scale, contextual basemaps, orthoimagery, graticules, or vector features with rich attributes can be extremely helpful. This technique is nothing new, and most geospatial software packages have functionality to help users ascertain geopositional context, including ENVI Classic's "Create World Boundaries" tool. However, we now have our new ENVI 5 application with its modern user interface and multiple-layer display, so we need to provide access to recent data with a more intuitive user experience.
As a product manager I receive a lot of feedback from our users, and I have heard them struggle with global geopositional context when working in our software. Sure, we have a mouse cursor location display with geographic lat/lon or MGRS coordinates, but that doesn't really help visualize the geolocation of your data. Consequently, as part of the ENVI 5.1 project we added a variety of global datasets to the software installation, including a Natural Earth global shaded relief image, Natural Earth global vectors (Shapefile format with attributes), and the GMTED2010 global digital elevation model. We also provided a convenient mechanism to open and display these datasets from the "File > Open World Data > ..." application submenu.
Using our new world data, I was able to open the aforementioned Landsat 8 scene and, in under a minute, determine that it covers the region of Lake Balkhash in the southeast corner of Kazakhstan (see screenshot below). Many people ask me what my favorite new feature is in our ENVI 5.1 release, and while there is a ton of amazing functionality, ranging from the Seamless Mosaic workflow to the completely new region-of-interest (ROI) framework, I always come back to the world data since I use it almost every day. Last but not least, we also recently added keyboard shortcuts that you can customize (File > Shortcut Manager), and I've added my own "Basemap" shortcut (Ctrl + B), which opens and displays the global shaded relief image in two simple keystrokes!
Categories: Imagery Speaks
Author: Tom Jones
The cloud provides a ready-made infrastructure to store and distribute massive amounts of data. Apps perform the processing, and the IT burden becomes a monthly service bill. Modernization is here: the cloud + apps are transforming data into knowledge. The question is, are the answers useful?
With dozens of major cloud service providers available, cloud space (storage, bandwidth, and computation) has already become a commodity, and the apps hosted on the cloud, along with the magic behind them (analytics), are on a similar path.
Savvy companies and government programs are documenting their existing workflows (the time it takes to complete each step and the accuracy required for each) before moving to the cloud. Otherwise, how would they calculate their savings or return on investment? How could they demand better service or performance from their cloud and app providers?
To examine app performance, let’s first describe an app in terms of single-process or multi-process.
Single-process (where the accuracy of the output is a result of one process):
Multi-process (where the accuracy of the output relies on the cumulative accuracy of all the processes):
In the near future, apps will be competing for usage across the clouds on some combination of cost + performance + ease of use.
For some, a free 80% solution might be good enough. For example, a single-process land cover classification algorithm that accurately identifies 80% of the pixels within an image as belonging to the correct class (grass, sand, rock, asphalt, etc.) may be perfectly acceptable. The city planner who uses the app might miscalculate the impervious surface runoff for a new skatepark by a few square meters, but the planning committee and the storm drains will never know the difference.
A six-step image processing chain that converts a raw data feed into an image, corrects for weather and haze artifacts, normalizes for terrain and geographic location, and then identifies features of interest to tell a warfighter when and where to act is a much more complicated prospect. Each discrete solution delivers its statistically acceptable 80% accurate result. Compound that accuracy, however, as you would when duplicating a real-world GEOINT workflow, and you erase all the confidence you previously had in the quality and usefulness of your result.
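The arithmetic is unforgiving: six sequential steps at 80% accuracy each compound to 0.8 x 0.8 x 0.8 x 0.8 x 0.8 x 0.8 = 0.8^6, or roughly 26% end-to-end accuracy in the final product.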
In a GEOINT world, 26% accuracy puts a warfighter directly in a sniper's line of sight, and the loss of life will destroy any cost-savings benefit gained by service-enabling a desktop workflow.
Modernization is happening. Apps of all types are coming to market, serving up a wide range of knowledge from the historically interesting to the mission critical. Despite competition from free alternatives, exquisite technologies (magic that doesn't run out at midnight) will continue to command a premium because they deliver trusted results. Those who want and need the best performance and the most accurate results will invest in apps that harness time-tested and proven tradecraft analytics to provide useful knowledge.
Author: Kevin Wells
This past week, Defense Secretary Chuck Hagel submitted his recommendation to President Obama that the defense budget be reduced by an additional $75 billion over the next two years. We read that these cuts will shrink the Army to a size not seen since before World War II. Needless to say, this announcement comes as no surprise to anyone, as it is a result of the reduction of resources after more than a decade of war.
This new reality presents challenges that industry and government are scrambling to adjust to. However, I feel that it also presents a great opportunity for those organizations that are listening closely to leadership within the defense and intelligence community. The need for cost-effective delivery of accurate, reliable, and timely geospatial intelligence has not diminished. If anything, that need will be greater than it has ever been before.
I recently attended a briefing by Lieutenant General Mary Legere, Army Deputy Chief of Staff, G-2. General Legere spoke at great length about how the Army's challenges represent great opportunities for its industry partners. Some of the most critical issues facing the Army in the 21st century include:
Albert Einstein once said, "In the middle of difficulty lies opportunity." Those organizations that can provide geospatial analytic products to the edge in a way that is cost efficient, interoperable, standards-based, and intuitive will find that the changing landscape within the defense community offers many opportunities for growth and continued support for our nation's warfighters.
Author: Barrett Sather
It is no secret that California is experiencing the worst drought it has seen in decades, and researchers have already begun to dig into the underlying causes; hats off to those folks. It is true, though, that the more information available on a situation, the better equipped we will be to solve the problem at hand. It is an exciting day for the remote sensing community, as well as for those researching the drought in California, because they are about to get another (more distant) perspective.
Today marks the launch of the Global Precipitation Measurement (GPM) Core Observatory satellite, which, if all goes well, will begin its journey into Earth's outer atmosphere and beyond to inhabit its new home. It is the inaugural launch for an international satellite constellation with partners in the United States, Japan, India, and Europe. The sensor on board will be responsible for measuring where, when, and how much precipitation falls around the globe. It will become an invaluable asset in understanding our climate and weather systems, as well as our most precious resource: water.
The thing that I'm most excited about, though? The datasets, once they make it down here, are going to be distributed in HDF5. This format has been one of my favorites ever since I worked with it in the remote sensing department at CU Boulder. It not only organizes the data, it also opens up options for direct access to just the datasets you are interested in. I can't wait to tear into the new data with some code I put together.
I'd like to give Dave Huslander credit for helping me out with an initial code example for opening SMAP files, which use a similar format and come from a satellite in the same constellation scheduled for launch in November.
IDL has robust commands for accessing HDF5 files, and a few new routines were added with the release of IDL 8.3. These are H5_GETDATA, H5_LIST, and H5_PUTDATA, which do exactly what you would expect from their names. I like them a lot better than the old routines used to access HDF4 files, and they are a lot easier to use.
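To give a flavor of how simple they are, here is a minimal sketch that lists a file's contents and reads a single dataset. The file name and dataset path below are placeholders; the actual group structure of a GPM granule is whatever H5_LIST reports for your file.

    ; List everything in an HDF5 granule, then read one dataset.
    file = 'gpm_granule.h5'                       ; placeholder file name
    h5_list, file                                 ; print groups, datasets, and attributes
    rain = h5_getdata(file, '/Grid/precipRate')   ; placeholder dataset path
    help, rain                                    ; check dimensions and data type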
If GUIs are more your style, opening image data from these puppies in ENVI 5.1 is now supported with the HDF5 Browser. I got to do some work on the browser with Ben Foreback when I started at Exelis in engineering, so I might be biased, but I think it's the cat's meow. It opens any HDF5 file and can display any two- or three-dimensional dataset in the interleave of your choice.
In the browser, as long as two-dimensional datasets are the same size, you can merge them into a multispectral raster. It's actually kind of fun messing with it; you can make all sorts of fun pictures (though some might question the practicality). Here's an image with longitude as the red band, latitude as the green band, and height as the blue band for an HDF5 format image over the United States:
From a quick inspection, you can see that this image was taken with south at the top. The high green values in the north are at the bottom of the screen, and the high red values in the east are to the left on the screen. The blue in the upper right corner is land near the California coast, and the black is ocean.
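If you'd rather build that sort of composite programmatically instead of in the browser, the same idea takes only a few lines of IDL. The dataset paths below are made up for illustration; substitute whatever H5_LIST reports for your file.

    ; Merge three same-sized 2-D datasets into a 3-band, pixel-interleaved image.
    file = 'example.h5'                               ; placeholder file name
    lon = h5_getdata(file, '/Geolocation/Longitude')  ; placeholder dataset paths
    lat = h5_getdata(file, '/Geolocation/Latitude')
    hgt = h5_getdata(file, '/Data/Height')
    dims = size(lon, /DIMENSIONS)
    rgb = bytarr(3, dims[0], dims[1])
    rgb[0, *, *] = bytscl(lon)                        ; red   = longitude
    rgb[1, *, *] = bytscl(lat)                        ; green = latitude
    rgb[2, *, *] = bytscl(hgt)                        ; blue  = height
    tv, rgb, TRUE=1                                   ; display as a true-color image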
If you've never used HDF5 before I encourage you to give it a shot! It takes a bit to learn the format, but I know I wouldn't choose any other format for data ingest and export given the option.
Tags: GPM, HDF5, h5_getdata, hdf5 browser
Author: Amanda O'Connor
The streams of data that are available these days are staggering. There are more and more ways to mine social media to find original and thought-provoking metrics. In some ways, I see remote sensing as its own social media stream, as if each pixel were tweeting something about the world. Is it green, blue, has it changed, does it influence its neighboring pixels? Groups of pixels can comprise an object, and those objects can also announce their existence, "Hey, I'm green grass," or their change, "I was dry grass last week, but I've greened up." With collections over time, this information can evolve predictably, e.g. "I was dry grass, then I became vegetation, then I died, then I greened up again." If we attempt to model this pixel, it would predictably die again. Each pixel, object, and image has something to say, and the world of social media can have something to say about imagery. Geotagged pictures, tweets, weather information, political events, and policy changes can all be attached as additional context to the information contained in imagery. And with that information, the ability to model and predict the world around us becomes infinite.
Imagery can be thought of in the same unstructured terms as social media: it's another piece of the puzzle that may or may not take a predefined shape. Think about being able to mine imagery with hashtags, where the presence of certain objects cues a tag or a change in an area cues another kind of tag. There is an abundance of new high-resolution sources of imagery from SkyBox, Urthecast, and Planet Labs, especially video components. But is all that data really interesting? Can data from these and other instruments be searched for only what is interesting and catalogued by that information? An extreme example of this could be security analysis. Take the Empire State Building: images collected every day and a couple of videos show people coming and going. The building itself is unchanging. The change a security analyst would be interested in is the sudden appearance of any kind of box larger than a square foot. Once that change occurs, the previous images suddenly have value for showing the change, whereas before they just showed consistency. When that change occurs, it is tagged and communicated. The concept is similar to a person who tweets constantly that she "is sitting on the couch." It isn't really interesting until "sitting on the couch" becomes "My house is on fire and it's spreading to others #Boulder".
I read an interesting article on GNIP's blog about master's programs in data science: http://blog.gnip.com/data-science-masters/. GNIP is an organization specializing in delivering APIs for social media feeds. In many ways, remote sensing has already tackled the difficulty of being a scientist who studies phenomena like botany but needs math skills, computer science skills, and visualization skills in order to use all the tools available to a botanist. The remote sensing community has much to offer the world of big data by the nature of the discipline, but it must also be on the receiving end, integrating data from outside of itself. Community remote sensing (CRS) is nothing new. This article from Annelie Schoenmaker defines CRS as "location technology that combines remote sensing with citizen science, social networks and crowd-sourcing to enhance the data obtained from traditional sources. It includes the collection, calibration, analysis, communication or application of remotely sensed information by these community means." (http://tinyurl.com/kdllvt5) At present, the remote sensing big data people and other big data people might mingle at a conference, but the connections are still being forged.
With data feeds from Twitter and other social media sources being mineable, the contextual information that can be discovered about imagery is infinite; it's a matter of asking the right questions. What questions should remote sensing, earth science, and defense and intelligence be asking social media to enhance the information that is already contained in imagery? Can this information be used either to predict what will happen next or to forensically understand what has just happened? Who is thinking about this in the remote sensing community? How can these connections be fostered? How does image analysis software need to integrate with social media feeds? If I come up with more thoughts, I'll tweet about it. My request is that you do too. @asoconnor