17 Apr 2014

Top Ten Weird Things I’ve Done While Working in Remote Sensing

Author: Amanda O'Connor

1. Found seals on icebergs—they look like big brown commas, and feature extraction works pretty well.

Seal

2. Looked for deer in thermal infrared images; these were still images. The people wanted to find them because the deer were traffic hazards. I guess they were hoping the deer wouldn't move. At that point in time, after collection, they had to drive the camera data to a lab to analyze it. By the time they returned to look for them, the deer were gone...

3. Met the Freedom Rodeo Queen of Lawton, Oklahoma, and her attendant while collecting field spectra for a calibration experiment. Was referred to as “the attendant” the rest of the trip.

4. Observed catfish ponds for algal contamination that can result in “Off Flavor” catfish.

5. Fixed that one troublesome pixel in my vacation photos with the ENVI Pixel Editor. It's a darn good thing I didn't know that existed in grad school. Anyone who has dealt with data that's as correlated as a shotgun blast knows what I'm talking about.

6. Threatened a group of tusked pigs with an LAI-2000 Plant Canopy Analyzer while on a ground-truthing mission in Brazil to verify Landsat and EO-1's ability to estimate fractional canopy cover. I was told very seriously to urinate on them should I get cornered. In case you didn't notice, my name is Amanda.

7. Told people my spectrometer was a GPS so they'd stop asking questions about why I had a butter churn and was walking around an airport tarmac (pre-9/11). I was attempting to calibrate Landsat 5.

The other “attendant” with butter churn spectrometer

8. Spent time chasing AVIRIS—it’s not as romantic as it sounds.

9. Was taken to many welding shops, pawn shops, gun shops, fireworks stands, and junk yards by an account manager who once said, "I can turn half an hour early into 5 minutes late if you're not careful." After these interesting visits, I'd sit down and talk very seriously about ENVI/IDL and solving people's problems with software, not about the items found at the aforementioned places.

10. Was told to degrade good imagery into bad imagery to see if bad imagery would work as well as good imagery.


Categories: Imagery Speaks


15 Apr 2014

Trying to Add Context to My Ski Vacation

Author: Mark Bowersox

Last week, my wife Kelle and I celebrated the engagement of two close friends during a backcountry ski trip to Francie's Cabin, a hut south of Breckenridge, CO. In the days before the trip, I had been listening to the book Age of Context by Robert Scoble and Shel Israel, who happen to be keynote speakers at this week's GEOINT conference. 

The book's premise is that five prominent elements of technology (mobile devices, social media, big data, sensors, and location-based services) are converging to transform user experiences in all areas of our lives. Companies can serve users better by knowing more about their environments: where they are, who they are with, what they're doing, what safety risks are present, and how they feel. The goal is to predict what they might do next, where they will go, what new safety risks will arise, and whether they will feel better or worse. Knowing these things ups the odds of delivering a satisfying solution or service.


So, I left Denver with the Age of Context on my mind. How would my use of technology and resulting user experience stack up against the Age of Context?


Drive to trailhead. The trailhead doesn't have an address, so everyone used some combination of an internet description and Google Maps to find the location. We left in separate cars from three different locations, using iPhones to check traffic conditions, get driving directions, and coordinate status. We texted our locations (exits, mile markers, landmarks, etc.) and adjusted our paces to arrive at the trailhead together. We arrived within 15 minutes of each other.


In the Age of Context, our mobile phones would integrate into our vehicles' navigation and media centers. Our cars would be aware of each other via social networks, and each party's location and status would be communicated in a joint operational picture on our dashboards. Additionally, our vehicles would engage four-wheel drive before it was required, and we'd know immediately if someone in our party was stuck in the snow.


Hike to Hut. We started the hike to Francie's Cabin under sunny, clear skies and heavy backpacks. About a quarter mile in, we could take the short, hard route (steep) or a longer, easier route. We had previously decided the long, easy route was the way to go, based on a hardcopy US topographic map and a Garmin GPS unit. But these technologies didn't reflect current conditions. Was there enough snow? Which route had the most shade (shade favors ski glide and skier thermoregulation)? How were other skiers on the trails feeling?


In the Age of Context, satellite imagery would be streamed to our mobile devices and integrated with our GPS position to illustrate the snow coverage ahead on the trail. Info from the mobile devices of other recent travelers would report current conditions to the rest of us in the area. We might choose the harder route if the snow was better, there was more shade, or we would get there more quickly.


Engagement. Shortly after we arrived at the hut, Brandon convinced Johanne to head out again to see some local scenery. Unbeknownst to Johanne, Brandon would propose, and we would ready the hut for a celebration. As with any surprise, everyone needs to be in place and ready to yell when the couple walks through the door. We waited. We wondered. Some of us considered a nap, but were afraid to miss the action.


If we were already in the Age of Context, our mobile devices would have set up a geofence to alert us when Brandon and Johanne returned to the hut. Nappers would automatically be awakened by an alarm and would know exactly when to get in place with the champagne uncorked and the video rolling.
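A geofence like that boils down to a simple proximity test. Here is a minimal sketch of the idea in JavaScript; the coordinates, radius, and function names are illustrative and not taken from any particular geofencing API.

```javascript
// A minimal geofence check: flag when a reported position falls within a
// radius of the hut. Coordinates, radius, and names are illustrative only.
var HUT = { lat: 39.42, lon: -106.05 }; // placeholder coordinates, not the real hut
var RADIUS_METERS = 200;

// Great-circle distance between two lat/lon points (haversine formula)
function distanceMeters(a, b) {
  var R = 6371000;                 // mean Earth radius in meters
  var toRad = Math.PI / 180;
  var dLat = (b.lat - a.lat) * toRad;
  var dLon = (b.lon - a.lon) * toRad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

function insideGeofence(position) {
  return distanceMeters(position, HUT) <= RADIUS_METERS;
}

// e.g. if (insideGeofence(latestPhonePosition)) { wakeTheNappers(); }
```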


Backcountry skiing. The next day, the goal was to ascend to approximately 13,000 feet and ski a south/southeast-facing slope to lower elevation and eventually back to the cabin. The top priority was to navigate to the route safely, minimizing travel through avalanche-prone terrain. One contributor to avalanche risk is the steepness of the slope. To identify these areas, we brought US topographic maps with colored overlays of the avalanche-prone slope gradients. Brandon even had the slope overlay on his GPS unit - not too shabby!
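Those overlays come from a straightforward calculation: estimate the terrain gradient from a digital elevation model and flag the commonly cited 30-45 degree band where most avalanches release. A minimal sketch in JavaScript, with the grid, cell size, and threshold band used purely for illustration:

```javascript
// Slope steepness from a gridded digital elevation model (DEM) via central
// differences, for interior cells only. The grid, cell size, and the commonly
// cited 30-45 degree avalanche-prone band are used purely for illustration.
function slopeDegrees(dem, row, col, cellSizeMeters) {
  // dem is a 2-D array of elevations in meters; row/col must not be on the edge
  var dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cellSizeMeters);
  var dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cellSizeMeters);
  var rise = Math.sqrt(dzdx * dzdx + dzdy * dzdy); // gradient magnitude (rise/run)
  return Math.atan(rise) * 180 / Math.PI;
}

function isAvalancheProne(slopeDeg) {
  return slopeDeg >= 30 && slopeDeg <= 45; // band most often flagged on such overlays
}
```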


The Age of Context skier would wear goggles that provide the slope overlay (and other factors) in their line of sight. The areas to avoid would appear in front of the skier as he scanned the landscape with the goggles. This 'augmented reality' view would be particularly useful for skiers who need to deviate from their planned route due to wind, unstable snow, or other issues, in search of a new but safe path to the descent. In the event of an avalanche, the goggles would switch to a search mode to quickly account for the other members of the party.


Back to work. To some people, these ideas may seem far-fetched. To others, like the companies mentioned in Age of Context, this type of user experience is right around the corner. Later today I'll attend Scoble and Israel's talk at GEOINT. Hope to see you there.


Categories: Imagery Speaks


10 Apr 2014

GEOINT Means Exciting Technology

Author: Rebecca Lasica

Beau Leeger, Manager of US Sales and Services at Exelis VIS, is guest blogging today about exciting technology that will be on display next week at GEOINT.

In just five days, GEOINT 2013* begins. The re-scheduling of the 2013 edition of my favorite conference allowed us to extend our cloud-based, on-demand geospatial offerings with some potentially game-changing technology. For several years now, I have watched the development of, and excitement around, the Ozone Widget Framework (OWF). To my delight, this technology was released to the general public in early 2013. We immediately went to work using this flexible, widget-based technology to host components for on-demand geospatial data exploitation. The resulting client stack includes widgets for accessing catalogs and performing advanced geospatial exploitation using ENVI-powered tools. There is even a widget that allows for web-based viewing of LiDAR point clouds. Within the framework, a user can interactively build a dashboard that hosts a functional geospatial exploitation application that runs and accesses data within the cloud. The power for anyone to build web-based, cloud-powered geospatial exploitation tools is now within reach.
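To give a flavor of how widgets in such a dashboard can talk to each other, here is a minimal sketch assuming OWF's widget eventing API (OWF.ready and OWF.Eventing.publish/subscribe); the channel name, scene identifier, and payload are hypothetical and not tied to the specific widgets described above.

```javascript
// Minimal sketch of inter-widget messaging in an OWF dashboard. Assumes the
// OWF widget JavaScript API is loaded; the channel name and payload are made up.

// --- In a catalog widget: announce the scene a user just selected ---
OWF.ready(function () {
  OWF.Eventing.publish("scene.selected", JSON.stringify({
    sceneId: "example-scene-001",            // hypothetical identifier
    footprint: [-105.5, 39.2, -104.8, 39.9]  // [xmin, ymin, xmax, ymax]
  }));
});

// --- In an exploitation widget: listen and kick off ENVI-powered processing ---
OWF.ready(function () {
  OWF.Eventing.subscribe("scene.selected", function (sender, msg) {
    var scene = JSON.parse(msg);
    console.log("Would launch an ENVI task against", scene.sceneId);
  });
});
```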

I am most excited about the possibilities when these tools are hosted in a flexible, interconnected framework. The design intent of OWF was to bring sources of information from various agencies and contributors together to get a more complete view of a problem or situation. That original goal now extends into the geospatial realm. The ability to bring all relevant data sources and exploitation tools together to solve difficult geospatial problems is within reach. Image scientists and researchers will have a framework to develop tools that can interoperate with tools developed by others. Analysts will be able to deploy these tools shortly after development to solve pressing, time-critical problems. The future of cloud-powered, web/mobile-based geospatial exploitation is suddenly much brighter.

What do you think about this exciting development? Experience this with us at GEOINT and let us know how it fits into your visions and aspirations for the future of geospatial exploitation.


Categories: Imagery Speaks


8 Apr 2014

Web-Enabled GIS for Image Processing

Author: Joe Peters



The use of the internet to consume and display geographic information has evolved rapidly over the past several decades. The first maps displayed over the internet were static graphic images in formats like GIF, JPEG, or PNG embedded in an HTML page. This first step in the use of the internet to display geographic information, while important, did not offer the kind of functionality that we have come to expect of today's web mapping applications. Today, we expect our maps to be interactive. We want the ability to zoom in and out. We want the ability to turn various layers on and off so that we can see exactly what we are looking for. The ability to do these things is something that we have not only come to expect, but something that we have come to rely on. Interactive web maps are on our desktops, on our tablets, on our phones, and even mounted in the dashboards of our cars.


What's interesting is that for a lot of people who work in the geospatial realm, the use of web-enabled GIS in our personal lives might actually be outpacing what we do with it at work. Particularly in the field of remote sensing, I think that performing image analysis on a traditional desktop setup is still pretty much the norm. But I have a feeling this is all going to change pretty quickly. Some recent projects that I have been involved with here at Exelis VIS have opened my eyes to the possibilities of what the future of web-enabled image processing might look like. The advantages are clear. Web-enabled image processing allows users to take advantage of distributed data - meaning the data can be stored anywhere, as long as you can access it on your network or over the web. Data might be sitting on your desktop, on a server within your network, or on a server on the other side of the world. You can use basemaps distributed by Esri® or other sources to display your data. You can pull in vector layers. You can catalog your data using a catalog such as Jagwire™. And using image processing capabilities, such as those found in ENVI Services Engine, you can run processing on whatever data you have access to. This "mashup" of data and data processing from a variety of sources is the future of GIS. What's exciting is that once you have configured your system, it's fairly easy to begin building custom web applications for displaying, processing, and sharing information derived from remotely sensed data. The ability to quickly ingest, process, and disseminate valuable information to end users is, in my opinion, what makes web-enabled image processing so exciting and a clear winner over traditional desktop image analysis methods.



Let's go ahead and take a quick look at a couple of examples of what image analysis in the cloud might look like. In the first example (shown below), a simple user interface has been built using JavaScript. A Landsat imagery service provided by Esri® is used to access full-resolution multispectral Landsat imagery. This data can be queried and displayed on an Esri® basemap. Once an image has been selected, various image processing options can be run on the data using functionality available through ENVI Services Engine. In the screen captures below, the first image shows an example of querying the Landsat image service data catalog. The second image shows an example of performing a Normalized Difference Vegetation Index (NDVI) calculation on the selected image. With the configuration shown below, when image analysis is performed, the output data is stored on a local server and a PNG file is created to display the result on the Esri® basemap.
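For readers who want a feel for the moving parts, here is a minimal sketch of that pattern using the Esri® JavaScript API; the image service URL, the ENVI Services Engine endpoint, and the task parameters are placeholders rather than the actual configuration behind the application shown here.

```javascript
// A minimal sketch, not the actual application shown in the screen captures.
// The image service URL, the ENVI Services Engine endpoint, and the task
// parameters below are placeholders.
require([
  "esri/map",
  "esri/layers/ArcGISImageServiceLayer",
  "esri/request",
  "dojo/domReady!"
], function (Map, ArcGISImageServiceLayer, esriRequest) {

  // Esri basemap with a Landsat image service layered on top
  var map = new Map("mapDiv", { basemap: "topo", center: [-105, 39], zoom: 7 });
  map.addLayer(new ArcGISImageServiceLayer(
    "https://example.com/arcgis/rest/services/Landsat/ImageServer" // placeholder URL
  ));

  // Ask a server-side ENVI task to compute NDVI on a selected scene
  esriRequest({
    url: "https://example.com/ese/services/SpectralIndex",          // hypothetical endpoint
    content: { sceneId: "example-scene-001", index: "NDVI" },       // hypothetical parameters
    handleAs: "json"
  }).then(function (result) {
    // In the configuration described above, the server stores the NDVI output
    // locally and returns a reference to a PNG that can be draped on the basemap.
    console.log("NDVI result:", result);
  });
});
```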




In the next example, rather than accessing an imagery service, we are using a catalog to access data that resides on a local server. Access to the application is controlled by a password, and anyone who has been given permission can reach it from any location through a web browser. In this example, the Esri® JavaScript API is again used to provide a basemap. Users can browse data that is available in the catalog, or they have the option of uploading data from their own computers. Uploaded data is added to the catalog so that other users can access it as well. Once an image is selected, image analysis can be performed using ENVI Services Engine tasks.
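The upload step in that workflow can be as simple as posting the file to the catalog's ingest endpoint. A minimal sketch, assuming a hypothetical REST endpoint and response format (this is not a documented Jagwire™ or ENVI Services Engine API):

```javascript
// A minimal sketch of the upload step, assuming a hypothetical REST ingest
// endpoint and response format (not a documented Jagwire or ENVI Services
// Engine API). fileInput is an <input type="file"> element.
function uploadToCatalog(fileInput) {
  var file = fileInput.files[0];
  var form = new FormData();
  form.append("image", file);

  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://example.com/catalog/upload"); // placeholder endpoint
  xhr.onload = function () {
    if (xhr.status === 200) {
      // Once the catalog registers the image, other authorized users can browse
      // to it and run ENVI Services Engine tasks against it.
      var entry = JSON.parse(xhr.responseText);
      console.log("Cataloged as:", entry.id); // assumes the response includes an id
    }
  };
  xhr.send(form);
}
```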




The future of web-enabled image processing is looking bright. It's exciting to think about all of the applications for this type of technology. Just imagine how easy it would be to track changes to glaciers, monitor forest fires, or even assess damage during a natural disaster. With the availability of data rapidly increasing, web-enabled image processing offers a quick and effective way to access data, perform analysis, and get real-world answers to real-world problems.


Categories: Imagery Speaks


3 Apr 2014

What do the Kentucky Derby and Remote Sensing Have In Common?

Author: Patrick Collins

ASPRS, of course!

Louisville: home of the Kentucky Derby, great food, and the 2014 ASPRS annual conference. Held at the Galt House hotel, this year's conference offered a great mix of all things remote sensing over a three-day agenda. It was also co-located with the Joint Agency Commercial Imagery Evaluation (JACIE) conference, which created a nice atmosphere where some of the best and brightest minds from science and academia could meet.

Eyes in the Skies

One of the most apparent trends I noticed at this show was the number of exhibitors that had something to do with Unmanned Aerial Systems, or UAS. With the pending approval for UAS to fly commercially, many folks are beginning to offer services surrounding these sensors. Several booths were showing off actual cameras, as well as the different ways to mount them onto aerial systems without disturbing the accuracy of the data collection.

Other companies had their own planes and were offering to fly project sites as needed, while several small systems were available for purchase directly from the vendor. Along with this were the software platforms capable of handling, analyzing, and disseminating the large amounts of still and full-motion data that these systems capture. Needless to say, the UAS industry is set for huge growth in the near future, and many of the companies at the ASPRS show seem poised to capture their fair share of the market.

ENVI User Group

We also held our annual ENVI user group at the show, a two-hour session on Tuesday. We had a good turnout and covered some of the great additions we've made to the ENVI product line. This included highlights of the latest release, ENVI 5.1, such as the new Regions of Interest and Mosaic tools, as well as info on what's new in ENVI LiDAR and SARscape.


We took a portion of the user group to highlight some of our uses within the UAS market space, including our ability to catalog and serve real-time full-motion video, and our ability to process data from UASs and extract information from it.

Finally, we wrapped up the session with an update on our enterprise and cloud-based technology. We highlighted the performance increases we've seen in our latest release of ENVI Services Engine, and discussed the benefits of ENVI for ArcGIS®, which enables ENVI functionality across the Esri® platform using ArcGIS for Server.

Off to the Races

All in all, I think it was a great conference. While the location may have been somewhat prohibitive for some previous attendees, Louisville provided a nice backdrop for the event. I think my favorite part of the weekend was the opening reception, which was held at the Kentucky Derby Museum at Churchill Downs.


Attendees were treated to a tour of the grounds and were able to walk right up to the track. During the tour we learned that a horse has only one chance to run the Derby, when it is three years old. This made me think of the growing UAS industry and the race that is about to begin with the deregulation of commercial UAS flight. All of these companies are poised and waiting for the starter's gun, ready to be first out of the gate in the hopes of winning the big prize!


Categories: Imagery Speaks

Tags: ENVI, Esri, ASPRS, UAV, UAS



© 2014 Exelis Visual Information Solutions