Author: Adam O'Connor
In the upcoming ENVI 5.2 Service Pack 1 release we made numerous enhancements to our Esri image services support, including the ability to use the image services available with an ArcGIS Online subscription account. The new functionality includes support for username/password token-based authentication and works in both the ENVI application and the associated API (ENVI::OpenRaster). To take the improved image services support for a test drive, I took advantage of Esri's 60-day free ArcGIS trial, which provides access to 200 ArcGIS Online service credits.
To get started I went to the ArcGIS Online website and clicked on the Sign Up for a Free Trial Now link button at the bottom of the webpage. After completing the account creation and registration process I received the message "Your ArcGIS Online trial account is now active" then shortly thereafter received an e-mail with a temporary ArcGIS Online Subscription Account ID.
To test the new image services support I decided to use the World Elevation dataset as the DEM in an RPC orthorectification of a WorldView-3 Ortho Ready 2A Standard data product (courtesy of DigitalGlobe). This process involved the following steps:
- Launch ENVI 5.2 SP1 (this won't work in older versions)
- File > Open and select the WorldView-3 multispectral dataset *.TIL file
- File > Remote Connection Manager
- Connection > Add ArcGIS Image Services Server...
- In the resulting dialog I enter "elevation.arcgis.com" in the URL field, followed by my ArcGIS Online username and password, then press the "OK" button.
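ENVI handles the token exchange behind the scenes once you supply those credentials. As a rough illustration of what token-based authentication involves, the sketch below assembles a request for ArcGIS Online's standard `generateToken` endpoint. The endpoint and its fields are part of the ArcGIS REST API; the `build_token_request` helper is purely illustrative (it is not part of ENVI), and no request is actually sent.

```python
# Sketch of the token-based authentication that ENVI performs for you.
# The generateToken endpoint is ArcGIS Online's standard token service;
# build_token_request() is an illustrative helper, not an ENVI function.

TOKEN_URL = "https://www.arcgis.com/sharing/rest/generateToken"

def build_token_request(username, password, referer="https://www.arcgis.com"):
    """Assemble the POST payload for an ArcGIS Online token request."""
    return TOKEN_URL, {
        "username": username,
        "password": password,
        "client": "referer",   # bind the token to the referring application
        "referer": referer,
        "expiration": 60,      # requested token lifetime, in minutes
        "f": "json",           # ask for a JSON response
    }

url, payload = build_token_request("my_user", "my_password")
# POSTing `payload` to `url` (e.g. with urllib.request) returns JSON
# containing a "token" field, which the client then attaches to
# subsequent image-service requests.
```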
Back in the Remote Connection Manager dialog I see a "Retrieving Information..." message while ENVI communicates with the ArcGIS Online server. Once the information on the available image services has been retrieved, I can select the Terrain dataset. To open the true 32-bit floating-point terrain data from this image service (which is what I want for RPC orthorectification), I need to change the "Image Format" parameter to "TIFF" before pressing the "Open" button.
Once the "Terrain" raster dataset layer is loaded and displayed, I can close the Remote Connection Manager dialog (it also helps to drag the WorldView-3 image back to the top of the Layer Manager so it is visible). Next I launch "Geometric Correction/Orthorectification/RPC Orthorectification Workflow" from the ENVI Toolbox along the right-hand side (hint: simply enter "rpc ortho" in the "Search the toolbox" field at the top-right). By default the RPC Orthorectification workflow uses the "GMTED2010.jp2" file included in the ENVI software installation, so next to the "DEM File" box I press the "Browse..." button and select "Band 1" under the "Terrain" raster dataset.
After pressing "Next >" to advance the workflow to the final step, I change the "Grid Spacing" parameter on the "Advanced" tab to "1" so the RPC-based transform is calculated for every pixel in the input WorldView-3 image. Finally I specify an output filename and press "Finish" to execute the RPC orthorectification. As the following animation shows, orthorectification has a significant impact on the geo-positional accuracy of this dataset:
Author: Joey Griebel
Fourteen years ago, prior to the start of the “Global War on Terror”, image analysis was rarely something you would see in a Hollywood production or primetime TV show. Somewhere along the way there was a turning point, and writers and producers realized that the image analysis and intelligence piece helps tell the story. While some of us have known this work goes on in the background and helps lead to the big decisions and missions we see in movies, most viewers do not.
One of the first big movies I remember seeing this in was "Act of Valor," when a Raven UAV is launched and the Navy SEALs stream an FMV (full motion video) feed, identifying targets prior to the raid. To the average person that might seem like some sort of James Bond gadget, but to those in the field it's known technology and a real capability. The next big Hollywood release in which the intelligence piece caught my eye was “Zero Dark Thirty”, when the analysts study the products created of Bin Laden’s compound and plan the raid. 3D models were created from these products, and it was later learned that entire training compounds were modeled off of this information. Once the mission went live in the movie, we saw the FMV/drone feed again as POTUS watched a live stream of the helos approaching.
Fast forward to this year, and “State of Affairs” launched its first season. This show glamorized the intelligence field and even brought in a small glimpse of weather monitoring. Almost every episode features the intelligence ops room with a wide variety of analytics on display, from FMV feeds and weather feeds to IR images and imagery of all sorts.
So why does any of this matter? Those of us who work in or with the field have known its value for a while, but the role of image analytics in the stories that shape our world is finally getting the recognition it deserves. The technology in the image analysis arena continues to grow, and the possibilities for what you can track, detect, analyze, or simply watch are becoming nearly endless. With next-gen technologies being flashed across TV shows and movies, I hope it plants a seed and inspires the next generation who will push the limits of our field and take the science that has formed what we know to the next level. Image analytics can be exciting; it can save not only lives but entire environments, and the more mainstream it becomes, the further the envelope will be pushed.
Author: Matt Hallas
We are taking a short break from the Spectral Hourglass Workflow series to discuss file storage. This blog will focus on the file sizes associated with the Regions of Interest (ROI) and Scatter Plot tools, specifically when these tools are used to create training data for land use classification. All classification was done using a Landsat 8 scene collected by the USGS. For more information on land classification as it pertains to ENVI, consider checking out some of the courses we offer: Spectral Analysis, Exploring ENVI, and Vegetation Analysis, as well as the videos, whitepapers, and case studies found on our website.
Just because a large number of pixels is contained within an ROI does not mean that the file size will be similarly large. The ROI Tool can contain an immense number of pixels within each class and store them in the .xml file in just a few lines of markup. If you use ENVI (or many other applications) you should have some .xml files on your machine; you can view them in any text editor. A powerful capability of the ROI Tool is applying a threshold to a band to classify a feature type or endmember, and this takes up a mere 10 lines in the file.
Drawing large polygons in the ROI Tool will not increase the file size significantly, because the only information that must be stored is the coordinate system of the data and the coordinates of the polygon's vertices.
The Pixel Tab should be used for small groupings of pixels or individual pixels. If you use it for large regions, the file size increases drastically, because every pixel is referenced individually in a samples/lines (X,Y) structure, which can be cumbersome. It is intended for the case where you have accurately identified the desired feature or endmember in a small number of pixels and wish to create an ROI from them.
Overall, file sizes are not a major concern, but there are some things to be wary of, much like the use of the Pixel Tab. When you use the Scatter Plot to import a large number of pixels into an ROI class (one million pixels or greater), large files result, because the spatial information is organized the same way as in the Pixel Tab: samples, then lines. When more than one million pixels are each referenced by their exact sample/line location, the space needed to store that information is vast compared with a band threshold or a drawn polygon.
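The storage difference is easy to see with a back-of-the-envelope calculation. The sketch below compares storing a polygon by its vertices against listing every pixel individually; the bytes-per-coordinate-pair figure is an illustrative assumption, not ENVI's actual on-disk format.

```python
# Rough comparison of ROI storage strategies. BYTES_PER_PAIR is an
# illustrative assumption (text coordinates plus XML overhead), not
# ENVI's exact format.

BYTES_PER_PAIR = 20  # e.g. "1234.0 5678.0" plus markup overhead

def polygon_roi_bytes(n_vertices):
    """A polygon ROI stores only its vertices, regardless of how many
    pixels fall inside the polygon."""
    return n_vertices * BYTES_PER_PAIR

def pixel_roi_bytes(n_pixels):
    """Pixel-based ROIs (Pixel Tab, Scatter Plot export) store every
    pixel as its own sample/line pair."""
    return n_pixels * BYTES_PER_PAIR

# A 100-vertex polygon enclosing a million pixels, versus listing
# all one million pixels individually:
poly = polygon_roi_bytes(100)         # ~2 KB
pixels = pixel_roi_bytes(1_000_000)   # ~20 MB
print(f"polygon: {poly} bytes, per-pixel: {pixels} bytes")
```

The exact byte counts will differ, but the ratio is the point: the per-pixel representation grows with the number of pixels, while the polygon (or band threshold) stays essentially constant.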
The Scatter Plot is incredibly useful for all types of imagery, but specifically with Hyperspectral datasets where the spatial extent is relatively small in comparison to multispectral datasets such as this Landsat 8 scene.
The best way to use the Scatter Plot with large swaths of data is to zoom in to a smaller spatial extent and deselect the Full Band option in the Scatter Plot dialog. This still lets you compare the spectral statistics of your data, but saves storage space. If you attempt to export more than one million pixels at a time from the Scatter Plot into an ROI you will encounter a warning message; if data storage is a concern, it pays to be wary of exporting that many pixels.
If you have any questions please feel free to send me an email at firstname.lastname@example.org.
Author: Peter DeCurtins
I just finished re-reading a book about Magellan's historic accomplishments. Almost exactly 500 years ago, Magellan's voyage of discovery pierced, for the first time, the gloom that had obscured a valid understanding of world geography. Having spent so much of my life involved with geospatial information technology, I couldn't help but note that many of the challenges Magellan faced were directly due to the limitations of navigation at the time. The same basic question that confronted the famous explorer must still be answered today when traveling: where am I? Today we know first to ask, what time is it?
Ferdinand Magellan. The legend reads "Ferdinand Magellan, you overcame the famous, narrow, southern straits." Public Domain
It had long been known that the planet is spherical in shape. The ancient Greek geographer Eratosthenes even calculated the circumference of the Earth with remarkable accuracy around 240 BC. But Europeans at the dawn of the Age of Discovery labored under a staggering assortment of misconceptions and myths as to the geography of the world that they desired to explore. The three continents known to antiquity were seen as being surrounded by water - the Ocean Sea. The southern hemisphere much beyond the equator had only recently begun to be explored. Columbus had studied the writing of Eratosthenes on the size of the globe, but believed that the circumference of the Earth was far smaller than it actually is. In attempting to reach the east by sailing west and circumnavigating the world, Columbus remained forever confused about the actual location of the lands he encountered in what would come to be called the Americas.
Fra Mauro map, c. 1450. Medieval Europeans lacked knowledge of world geography. Public Domain
Two and a half decades after Columbus sailed to the "Indies", Magellan desired to accomplish what his predecessor had failed to do: reach east Asia (and its rich wealth of spices) by sailing west. To do this, he knew he would first have to discover and then successfully navigate a strait that was assumed to exist at some southern latitude, extending through the South American land mass and connecting the Atlantic with the ocean on its western shores, which we now know as the Pacific. Before he could even encounter this fabled strait, he would be sailing off the edge of any map in existence, truly into the unknown. From that point on, he would be, in effect, writing the map that would describe these portions of the previously unknown world. How could he have known where he was going? How did he even know where he was?
Navigators of that era had reasonably good methods for knowing their latitude, or how far north or south of the equator they were, using instruments like an astrolabe to "shoot" the position of some heavenly body as it passed through the line connecting south and north (the meridian). It was common to measure the inclination of the sun at noon each day. The difficulty lay in determining how far east or west of any given meridian they were, termed "the problem of longitude". The only navigational technique available was "dead reckoning", the process of estimating one's position based on known or estimated speed and course over elapsed time from a previously determined fixed position. Time was kept by religiously monitoring and turning over sand-filled hourglasses, and the speed of a vessel was estimated by counting the knots in a rope that passed the stern of the ship after the log it was tied to had been thrown overboard. Such a method is greatly prone to cumulative error and is ill-suited to transoceanic voyages, but it was the only method available to sailors in Magellan's time.
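The cumulative-error problem is easy to see if you write out the arithmetic of a single dead-reckoning update. The sketch below is a minimal modern illustration of the idea, not a reconstruction of period technique: each update adds distance-run along a course to the last estimated position, so any error in estimated speed or course is carried forward into every later fix.

```python
import math

def dead_reckon(x_nm, y_nm, speed_knots, course_deg, hours):
    """Advance an estimated position by speed * time along a course.
    Coordinates are nautical miles east (x) and north (y) of the last
    known fix; course is degrees clockwise from true north."""
    distance = speed_knots * hours          # distance run, nautical miles
    rad = math.radians(course_deg)
    return (x_nm + distance * math.sin(rad),
            y_nm + distance * math.cos(rad))

# Running 6 knots due west (course 270 degrees) for 10 hours from the fix:
x, y = dead_reckon(0.0, 0.0, 6.0, 270.0, 10.0)
# x is about -60 nm (60 nm west of the fix), y is about 0
```

Note that the inputs here (knots from a log line, hours from an hourglass) were themselves rough estimates, which is exactly why errors compounded over an ocean crossing.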
Mariner's astrolabe in Maritime Museum, Lisbon. Public Domain
As we know, Magellan succeeded in finding and navigating the eponymous strait that was the only available sea route between the Atlantic and the Pacific prior to the Panama Canal. The expedition thought that they were sure to be in Asia soon, for the true size of the Earth was still being greatly underestimated, as it had been by Columbus. In fact, if they had realized just how vast the Pacific Ocean is, they never would have attempted to cross it with the vessels and provisions available. They managed to survive the journey (barely), and although Magellan died in the islands that would become known as the Philippines, one of the five ships and some 22 survivors of the crew (out of around 250 that had set out) eventually limped back to Spain. After three years and a voyage of tens of thousands of miles, these were the first men in history to literally sail around the globe.
Magellan's flagship Victoria was the only ship to complete the voyage. Public Domain
Having tracked and kept meticulous logs of their voyage, the men were astounded to discover that they had somehow 'lost' a day in the course of their journey. What they thought was a Sunday was actually Monday. They had no concept of an International Date Line, and were the first to experience this temporal effect of circling the planet.
It's that element of time that is fundamental to answering the question of position. Since the Earth rotates at a steady 15 degrees per hour, there is a direct relationship between time and longitude. The problem of longitude was eventually solved by building mechanical timepieces that could be carried aboard ship to maintain accurate time at a reference meridian. With that, the longitude relative to that meridian (say, the one passing through Greenwich) can be calculated from the measured position of the sun.
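That relationship can be written down in a couple of lines. The sketch below is a deliberately simplified illustration (it ignores the "equation of time" corrections a real navigator would apply): observe the chronometer's Greenwich time at the moment of local solar noon, and longitude falls out directly from the 15-degrees-per-hour rotation rate.

```python
def longitude_from_noon(utc_hours_at_local_noon):
    """Longitude (degrees, positive east / negative west) from the UTC
    time at which the sun crosses the local meridian. Earth turns 15
    degrees per hour; simplification ignores the equation of time."""
    return (12.0 - utc_hours_at_local_noon) * 15.0

# Local noon observed at 16:00 on the Greenwich chronometer:
# we are 4 hours "behind" Greenwich, i.e. 60 degrees west.
print(longitude_from_noon(16.0))   # -60.0

# Local noon at 10:30 UTC puts us 22.5 degrees east.
print(longitude_from_noon(10.5))   # 22.5
```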
The longitude relative to a position can be determined by the position of the sun - if you know the time! Image by Duff06, CC0 1.0
Today nobody needs to wonder where they are. Like those ancient mariners, we can rely on a constellation of heavenly bodies to fix our position, but those "stars" are satellites that emit radio signals. Like the navigator of one of Magellan's ships, your GPS receiver makes measurements regarding the location of these celestial objects in order to determine your position. But while sixteenth-century mariners used sand through an hourglass to measure the passage of time, today every GPS satellite broadcasts its own meticulously kept time signal, based on a very stable atomic clock it carries on board. Synchronized with each other and with even more accurate master atomic clocks on the ground, GPS satellites keep time accurate to within about 10 nanoseconds. Such precision is astounding to contemplate, but it's what enables GPS positioning to be so accurate. Also worthy of contemplation is that every time we use GPS, we are practicing essentially the same art that Magellan relied on in his quest to sail to the Spice Islands.
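Why do those nanoseconds matter so much? A GPS receiver measures its range to each satellite as signal travel time, so a clock error converts directly into a distance error at the speed of light. A quick check of the arithmetic:

```python
# A GPS range is measured as (signal travel time) * (speed of light),
# so any timing error becomes a proportional distance error.

C = 299_792_458.0  # speed of light in vacuum, meters per second

def range_error_m(clock_error_s):
    """Distance error introduced by a given timing error."""
    return C * clock_error_s

# The ~10-nanosecond clock accuracy quoted above corresponds to
# roughly 3 meters of range error:
err = range_error_m(10e-9)
print(f"{err:.1f} m")  # ~3.0 m
```

This is why meter-level positioning demands nanosecond-level timekeeping, and why each satellite carries an atomic clock rather than anything resembling an hourglass.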
A civilian GPS receiver in a marine application. Photo by Nachoman-au, CC BY-SA 3.0
Author: Tracy Erwin
The Federal Aviation Administration’s (FAA) announcement last week was welcome news for the U.S. commercial Unmanned Aerial Vehicle (UAV) market. On February 15, the FAA released its proposed rule changes. The key components of the proposed rules include keeping UAVs well clear of other aircraft and mitigating the risk to people and property on the ground.
Prior to this week's proposed ruling, exemptions were required, and they were lengthy and strict. For example, on January 5, Douglas Trudeau became the first Realtor to obtain an FAA exemption to fly an unmanned vehicle to capture video of houses for sale, but he was required to follow 33 detailed restrictions laid out in a 26-page letter.
Legally flying a UAV requires the user to have a regular pilot’s license, pass an aviation medical checkup, be assisted by a spotter, request permission two days in advance, and limit flights to less than 35 mph and below 300 feet.
Key takeaways of new FAA proposal
What does this mean?
In addition to the safety components mentioned above, the proposed ruling opens the doors to commercial markets. The following are a few examples of possible small Unmanned Aerial System (UAS) operations that could be conducted under the proposed outline:
The industry is expanding and only limited by our imaginations
It is exciting to see where the UAV market is heading! We often see and hear about UAVs snapping pictures and acquiring video. In addition to the traditional RGB sensors used in consumer cameras, there are infrared, thermal, ladar/lidar, and hyperspectral sensors, along with a host of other sensor types providing information that the naked eye cannot see. As this industry moves forward, I suspect that, much as in the defense industry, vast amounts of collected data will require management, dissemination, and processing solutions. Hence the need for a content management and dissemination system.
There is an obvious desire for real-time awareness in disaster response and news media coverage. In addition to real-time response, I believe there is a requirement for a content management system to archive data for historical trending and post-processing to yield actionable information.