21 Oct 2014

Creating a Custom Three-Dimensional Visualization with ENVI + IDL

Author: Joe Peters

This past week I decided to take some time to familiarize myself with some of the three-dimensional visualization tools available in ENVI + IDL. Within the ENVI user interface, users can very quickly build a three-dimensional visualization of a scene by using the 3D SurfaceView tool which is available in the ENVI Toolbox. This is a great tool and offers a number of handy surface and motion controls for customizing a three-dimensional visualization. I have worked with this tool quite a bit over the years, but had always been curious about what could be done with a three-dimensional visualization if I leveraged the power of IDL. IDL has a couple of three-dimensional functions that are particularly powerful and allow users to create fully customized three-dimensional visualizations. What I learned is that these functions allow users a tremendous amount of flexibility. In the example I discuss below, I used the CONTOUR function. The SURFACE function also offers some good options for creating three-dimensional surfaces in ENVI + IDL.

For this example, I downloaded a couple of 1/3 arc-second USGS DEMs from the National Map Viewer. I first mosaicked the two DEMs and then resampled them slightly to decrease the file size so that they would be more performant in IDL. I then wrote the code that produces the three-dimensional representation. The code opens the DEM file, extracts information about the lat/lon extent and pixel size, then draws a grid upon which elevation values can be plotted. Once I got this working, I chose an IDL color table to display elevation ranges, added contour lines, and made some fine-tuning adjustments to the axis labels. There are tons of different adjustments that can be made, so building a custom three-dimensional visualization really does let users use their imagination. For reference, a minimal sketch of my code is shown below (the file name, corner coordinates, pixel size, and color-table index are placeholders to adjust for your own DEM):
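  ; Open the DEM with the ENVI API and read the elevation values.
  e = ENVI(/HEADLESS)
  raster = e.OpenRaster('dem_mosaic.dat')      ; placeholder file name
  dem = raster.GetData(BANDS=0)

  ; Build lon/lat vectors from the image extent and pixel size
  ; (placeholder values; take them from your DEM's spatial reference).
  dims = SIZE(dem, /DIMENSIONS)
  lon = -105.0D + 0.0001D * DINDGEN(dims[0])   ; west edge + pixel size
  lat =   39.0D + 0.0001D * DINDGEN(dims[1])   ; south edge + pixel size

  ; Draw filled 3-D contours colored by elevation, with labeled axes.
  c = CONTOUR(dem, lon, lat, /FILL, PLANAR=0, N_LEVELS=25, $
      RGB_TABLE=34, XTITLE='Longitude', YTITLE='Latitude', $
      ZTITLE='Elevation (m)')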

The nice thing about this code is that replicating it, or something similar, with other DEMs should be fairly easy. All it would require is changing a couple of lines of code to match the extent of the new image and making some adjustments to the axes. The result of running this code on my DEM is shown below.

There are a lot of pretty cool things that can be done with the three-dimensional functions available in ENVI + IDL. For instance, in this example I plotted elevation values from a USGS DEM, but values that represent something completely different from elevation could just as easily be plotted along the z-axis. This could make for some very interesting and informative visualizations.

I also decided to make a map of my area using ENVI and ArcMap interoperability. I then inserted an image of my three-dimensional plot into the map. I think it gives a unique view of the scene. You can check out my map below. If you would like to see more examples of what can be done with the SURFACE and CONTOUR functions, check out our Documentation Center for more Graphics Examples.


17 Oct 2014

Scalable Image Analysis for Tomorrow and Beyond

Author: Rebecca Lasica

Live in the now or plan for tomorrow? Aren't we often told to do both? I've been thinking quite a bit about the moment and the future as they relate to several topics on my front burner lately. With the release of ENVI 5.2 this week, these thoughts are especially relevant. New technology offering a migration path from the desktop to the cloud is here, as are tools for spatio-temporal analysis and full motion video. It seems as though many aspects of image analytics are changing at once, so it seems appropriate to focus my blog this week on the migration path itself and how some of these technologies are positioning businesses to leap into the future. Here are some related questions I have entertained recently:


  • What can I do in the cloud that I can't do at the desktop? Or, alternatively, can I do everything in the cloud that I can do at the desktop? This is probably the single most often asked question lately. The answer is, largely, it depends. For the most part, yes: the ENVI analytics you enjoy today can be accessed via an API that enables cloud processing. But digging a bit deeper, one should most definitely look at the new ENVI Tasks. These tasks take powerful analytics that were already available and expose them in a new, and in my opinion much easier, paradigm to implement (a short sketch follows this list). What does that mean? Prototypes that used to take me an hour or two to write now take me tens of minutes. I'm sure you will notice the same. If you don't, please give me a call.

 

  • What exactly is time-enabled data, and what can I do with it that is new? I love this question because I am so excited about spatio-temporal analysis. The time-enabled data I have seen come across my desk lately tend to come in three forms. The first is the obvious: data with time metadata that let you sort through an image collection chronologically. Visualizing information over a period of time is a powerful tool; think about watching a field go from planting to maturity, or about the speed at which a flood can wreak havoc over a mountainside. A second, less obvious capability of spatio-temporal analysis tools is the ability to sort information in any order you wish. Animating through a data collection in a certain order can shed light on information that might otherwise be missed. For example, imagine taking several non-chronological frames over an area of interest and animating through them in sequence, perhaps skipping irrelevant information in between. With this capability, situational awareness takes on a new meaning.

 

  • And finally, one of the most popular uses of animating through a stack of data is to look at analysis products. For example, periodic MODIS temperature data can be analyzed to derive drought conditions over a particular area. This can be done with every platform revisit, in this case every 8 days. Viewing information such as drought conditions, vegetation health, water indices, or burn information over time reveals trends that no single image can.
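Coming back to the first question, here is a minimal sketch of the ENVI Task pattern mentioned above. The ISODATAClassification task ships with ENVI 5.2; the input file name is a placeholder:

  ; Start the ENVI API without the user interface (e.g., on a server).
  e = ENVI(/HEADLESS)
  raster = e.OpenRaster('my_scene.dat')        ; placeholder file name

  ; Look up a task by name, set its parameters, and run it.
  task = ENVITask('ISODATAClassification')
  task.INPUT_RASTER = raster
  task.Execute

  ; Results come back as task properties.
  classRaster = task.OUTPUT_RASTER

The same pattern applies to any task name, which is a big part of why prototypes now come together so quickly.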


Categories: ENVI Blog | Imagery Speaks


14 Oct 2014

Leveraging the Spatial, Spectral, and Temporal Value of a New WorldView

Author: Patrick Collins

A lot of people may not know it, but Exelis Inc. has designed the optical components for every single satellite that DigitalGlobe™ Inc. has in its constellation today. This includes QuickBird, WorldView-1, -2, -3, and -4, as well as IKONOS and GeoEye-1. This gives us a unique ability to incorporate sensor-specific camera models into our software to more accurately extract information from DigitalGlobe data.

ENVI takes advantage of unique characteristics of DigitalGlobe data in order to answer geospatial questions and to solve problems. Three of these characteristics that I'd like to cover in this blog are spatial, spectral, and temporal.

All of DigitalGlobe's satellites capture imagery at better than 1-meter resolution, with many of them capturing data at better than 50 cm. A recent relaxation of operating restrictions on DigitalGlobe by the National Oceanic and Atmospheric Administration (NOAA) means that DigitalGlobe will soon be selling imagery at better than 50 cm resolution, which enables ENVI to extract more precise information from the data. The example below shows a three-dimensional depiction of a WorldView-2-derived digital elevation model (DEM) with a pan-sharpened false-color image overlaid on it.

The high spatial accuracy of DigitalGlobe data allows for the extraction of high resolution elevation models for a better understanding of on-the-ground conditions and terrain.

Another quality of the WorldView constellation is the unique spectral bands captured by the sensors. WorldView-2 was the first high-resolution satellite to capture data across 8 different imaging bands, and WorldView-3 boasts an impressive 27 bands, earning it the title of the world's first super-spectral satellite. ENVI takes advantage of these bands by incorporating sensor-specific spectral indices that can be calculated easily from within the user interface. The latest release of ENVI includes 64 common spectral indices, 44 of which can be run against WorldView data. These indices make it easy to analyze things like soil moisture, water content in a scene, vegetative health, and more. Below we can see a WorldView Improved Vegetative Index overlaid on top of a pan-sharpened WorldView-3 image.
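As a rough sketch of how an index like this can be produced programmatically (the SpectralIndices task and its parameters follow the ENVI 5.2 task API; the file name and index selection are placeholders):

  e = ENVI()
  raster = e.OpenRaster('worldview3_scene.dat')  ; placeholder file name

  ; Compute one or more sensor-appropriate indices by name.
  task = ENVITask('SpectralIndices')
  task.INPUT_RASTER = raster
  task.INDEX = ['WorldView Improved Vegetative Index']
  task.Execute

  ; Display the index raster over the scene.
  view = e.GetView()
  layer = view.CreateLayer(task.OUTPUT_RASTER)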

This index takes advantage of the spatial and spectral resolution of the satellite to help us visualize and extract fields or other vegetated areas that are healthy versus those that may need some extra love and attention. Also, full support for spectral libraries means that ENVI can use DigitalGlobe data to accurately target and identify materials such as crop-type, mineral outcroppings, and more.

The final characteristic of DigitalGlobe data I want to highlight is the amazing temporal coverage they have over the entire world. The temporal completeness of the DigitalGlobe catalog means they have the data needed to see and quantify changes that occur in specific areas of the Earth. In the latest release of ENVI, we've created a Spatiotemporal toolbox that allows you to quickly and easily create raster time series from multiple images and to display those images as a function of time. Derived products can also be fed into the time series to show a specific analysis over time, or multiple time series can be run and linked together to show how two different image series interact with each other over time. We're really excited about the introduction of this capability into ENVI, and I look forward to seeing how we expand our understanding of temporal analytics to provide more robust solutions to the geospatial analyst.
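Here is a minimal sketch of building and displaying a raster series through the new API. The task and parameter names below follow the ENVI 5.2 spatiotemporal documentation, and the file paths are placeholders; treat the details as a starting point rather than a finished recipe:

  e = ENVI()
  files = FILE_SEARCH('timeseries_dir', '*.dat')  ; placeholder path
  rasters = OBJARR(N_ELEMENTS(files))
  FOR i = 0, N_ELEMENTS(files)-1 DO rasters[i] = e.OpenRaster(files[i])

  ; Order the rasters by acquisition time and write a series file.
  task = ENVITask('BuildRasterSeries')
  task.INPUT_RASTERS = rasters
  task.Execute

  ; Open the series and display it; the layer animates through time.
  series = ENVIRasterSeries(task.OUTPUT_RASTERSERIES_URI)
  view = e.GetView()
  layer = view.CreateLayer(series)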

As DigitalGlobe and Exelis Inc. work together to create the highest resolution, most spectrally unique satellite constellation in existence, our goal is to ensure that ENVI has all of the tools necessary to fully exploit these unique datasets and solve some of the world's toughest problems, geospatial and otherwise.

What do you think? What advantages do you see in the increased spatial, spectral, and temporal content being produced by DigitalGlobe today?


***This blog is based on a Webinar given October 14, 2014 in conjunction with DigitalGlobe Inc. To view the webinar, please feel free to visit http://digitalglobe.adobeconnect.com/p3oz5b2h67f/


9 Oct 2014

UAVs as a Remote Sensing Platform for Agriculture

Author: Tracy Erwin

Over the last couple of years, we have seen news about UAVs and their utility in commercial and civil markets. I have a particularly strong interest in UAV technology that reaches back to my days as an Electrical Engineering student, when I designed a hovercraft resembling a quadcopter, controlled via a software application that provided aircraft position and other telemetry back to the pilot. Adding to this interest is that one of the FAA's test sites, the Griffiss International Airport Unmanned Aircraft Systems (UAS) site in Rome, N.Y., is nearly in my backyard. Furthermore, in early August 2014 the FAA approved the Northeast UAS Airspace Integration Research (NUAIR) and Griffiss International Airport's first official Certificate of Authorization (COA) to test unmanned aircraft. This first COA allows operation of an unmanned aerial system for agriculture, led by Cornell Cooperative Extension (CCE). CCE has provided tremendous benefits to our local agricultural industry, and I am pleased to see them getting this effort started.

CCE will fly a UAS manufactured by Precision Hawk. Precision Hawk’s Lancaster Hawkeye Mk III, a small fixed-wing aircraft, will carry visual, thermal and multi-spectral sensors. The UAS will fly below 400 feet over farms in western New York evaluating field crops such as corn, soybean, wheat and alfalfa. The collected data will be used to monitor crop growth, insect activity, disease spread, soil conditions and more.

 

Photo courtesy of the Finger Lakes Times: a Precision Hawk Lancaster Hawkeye Mk III UAV

Precision farmers today use aerial and satellite remote sensing imagery to help them manage their crops more efficiently. By precisely measuring the way their fields reflect and emit energy at visible and infrared wavelengths, precision farmers can monitor a wide range of variables that affect their crops. Multi-spectral and thermal sensors allow farmers to see problems that are not visible to the naked eye or in panchromatic aerial imagery.

Precision Hawk's sensors will collect data at the wavelengths required to extract meaningful information and answer questions such as "What is the health of my crop?" Using a UAV as the platform for remote sensing still requires sound remote sensing practices to provide the most accurate answers to growers' questions. A colleague identified a best-practices approach to ensure the generation of the highest quality output products for the problem at hand: (1) identify the algorithm needed to solve the problem, (2) determine the wavelength inputs for the algorithm, (3) establish which sensor(s) can collect those wavelengths, and (4) decide which platform is equipped to fly the payload.

These best practices were recently applied to compare normalized difference vegetation index (NDVI) products generated from raw sensor data from a UAS payload against products generated from data that were corrected by applying a calibration to each band in the image. NDVI is a common benchmark for determining vegetative health.
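For reference, NDVI itself is a simple band ratio. Here is a minimal IDL sketch, with a placeholder file name and placeholder band indices for the red and near-infrared channels:

  e = ENVI(/HEADLESS)
  raster = e.OpenRaster('uas_calibrated.dat')  ; placeholder file name
  red = FLOAT(raster.GetData(BANDS=2))         ; placeholder band index
  nir = FLOAT(raster.GetData(BANDS=3))         ; placeholder band index

  ; NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to +1;
  ; higher values indicate denser, healthier vegetation.
  ndvi = (nir - red) / ((nir + red) > 1e-6)    ; guard against divide-by-zero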

Figure 1: UAS image and NDVI calculated from the original image prior to applying any data calibration or correction.

Referring to Figure 1 (raw data):

  • Total vegetation represents 95.76% of the field, with dry soil or non-vegetation representing 4.24%.
  • Dry vegetation (yellow) accounts for 48.73%, whereas healthy and very healthy vegetation (shades of green) accounts for 46.03%.

Figure 2: UAS image and NDVI calculated after applying data calibration and correction.

The results shown in Figure 2 with data calibration and correction produce a more accurate assessment.

  • Total vegetation represents 87.64% of the field, with dry soil or non-vegetation representing 12.36%.
  • Dry vegetation (yellow) accounts for 3.47%, whereas healthy and very healthy vegetation (shades of green) accounts for 84.18%.

The results show that providing the grower with the most accurate information requires more than just collecting data and running the raw numbers through an algorithm. Using best practices such as those identified by my colleague, growers can target specific problems with different sensors and algorithms. I imagine that as this field advances, automated processes will provide answers to growers' questions.

One benefit of using UAVs in agriculture is the ability to see a field in its entirety, something that is time-consuming and often unrealistic for farmers to do on foot. Fields can be viewed as frequently as desired, and at a lower cost than a manned airborne platform or satellite imagery.

Additionally, UAVs equipped with appropriate sensors let growers take a science-based approach to the collected data, identifying problems faster and executing treatment plans to prevent the spread of disease or pests that could affect an entire field. They can then fly the fields again within a week or two, repeating the process to monitor change. According to the CCE agronomist who will be flying the Precision Hawk UAV, farmers already spend tens of thousands of dollars on crop health: gauging how green the plants are, and tracking disease and harmful insects. Efforts like CCE's will provide insight into what the true benefit of UAVs to the grower is in the agricultural industry. Of course, the goal is higher crop yield resulting in higher profits, along with reduced treatment costs, by targeting only the areas that require treatment, all while reducing the negative environmental impacts of over-applying chemicals.

I look forward to monitoring Cornell Cooperative Extension's effort and will keep my eyes in the sky to see if they may be flying the vineyards around my hometown.


Categories: ENVI Blog | Imagery Speaks


6 Oct 2014

LSST Lights the Way Towards the Frontiers of Data-driven Discovery

Author: Peter DeCurtins

New window on the Universe will drive advances in extracting knowledge from massive amounts of data

Physics has long been the crucible in which Big Data has been wrought. High-energy particle accelerators and large telescopic arrays have deluged researchers with ever-increasing amounts of data for analysis. But even the Large Hadron Collider at CERN in Switzerland, a place that pumps out and stores 25 petabytes of data per year, pales by comparison to the challenges presented to scientists by the Large Synoptic Survey Telescope (LSST) project.

Credit: LSST

Construction officially began in August on LSST, an 8.4-meter, very wide-field (3.5 degrees in diameter) reflecting telescope with a 3.2-gigapixel prime-focus digital camera. When fully completed sometime in the early 2020s, that camera will take a 15-second exposure every 20 seconds, capturing up to 30 terabytes of data per night and effectively imaging the entire southern sky every few nights from its mountaintop perch in Chile.

 

 Credit: LSST

Factoring in scheduled time for maintenance, cloudy nights, and such, the camera will take over 200,000 pictures, or 1.28 petabytes, per year. The data volumes are such that the challenge goes beyond simply collecting the images; it lies in mining and extracting scientific knowledge from petabytes of highly complex data.
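As a sanity check, those figures are self-consistent if each 3.2-gigapixel image is stored at roughly 2 bytes per pixel (my assumption, not a project figure):

\[
2\times10^{5}\ \text{images} \;\times\; 3.2\times10^{9}\ \tfrac{\text{pixels}}{\text{image}} \;\times\; 2\ \tfrac{\text{bytes}}{\text{pixel}} \;\approx\; 1.28\times10^{15}\ \text{bytes} \;=\; 1.28\ \text{PB per year.}
\]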

 

The telescope itself is innovative, using a tertiary mirror that will be cast simultaneously and from the same substrate as its primary. The LSST design provides a very wide, undistorted view that will enable the recording of 20 billion cosmological objects, each imaged roughly 800 times over a ten-year data collection. The database will contain approximately 500 attributes for each survey object and grow to something like 100 petabytes in size.

 

Credit: LSST


In the end, the project will provide a color movie of the universe visible in the whole southern sky, playing back all of its changes over ten years. That's the synoptic part. The survey will catalog the orbits of asteroids down to 100 meters in diameter that could potentially impact the Earth, detect the weak gravitational lensing signatures of dark energy and dark matter, and create an exquisitely accurate map of the Milky Way that will enable study of the structure and evolutionary history of our home galaxy.

 

Credit: LSST Project Office


What's really exciting, however, is that scientists don't yet know what discoveries will be contained in that 500-dimension data cache, and finding correlations among the various attributes will require advances in data management, mining, and analysis before the science can even begin. It is hoped that such a vast volume of complex data will lead to serendipitous discoveries. Near-real-time alerts will be issued when the system automatically detects objects that have changed in position or brightness. New algorithms will run on machines poring over massive volumes of data, searching for something interesting without knowing beforehand what necessarily constitutes interesting. It will have to be done that way, because the task is beyond what any one human or group of humans can accomplish. In a way, LSST appears to be an outpost on the frontier of data-driven analytics and discovery.

 


Categories: ENVI Blog | Imagery Speaks

