Author: James Goodman
In a previous article we presented an overview of the advantages of cloud computing in remote sensing applications, and described an upcoming prototype web application for processing imagery from the HICO sensor on the International Space Station.
As a first follow-up, we're excited to announce the availability of the HICO Image Processing System (HICO IPS) – a cloud computing platform for on-demand remote sensing image analysis and data visualization.
HICO IPS allows users to select specific images and algorithms, dynamically launch analysis routines in the cloud, and then see results displayed directly in an online map interface. System capabilities are demonstrated using imagery collected by the Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station, and example algorithms are included for assessing coastal water quality and other near shore environmental conditions.
This is an application server, not just a map server. HICO IPS delivers on-demand image processing of real physical parameters, such as chlorophyll concentration, inherent optical properties, and water depth.
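To give a flavor of the kind of physical-parameter retrieval such a system exposes, here is a minimal sketch of a standard ocean color band-ratio chlorophyll algorithm. It follows the polynomial form of NASA's OC algorithm family; the coefficients and reflectance values below are illustrative placeholders, not the actual HICO IPS algorithm or its tuned coefficients:

```python
import math

# Illustrative OC-style polynomial coefficients (placeholders,
# not HICO-tuned values)
A = [0.2424, -2.7423, 1.8017, 0.0015, -1.2280]

def chlorophyll_oc(rrs_blue, rrs_green, coeffs=A):
    """Estimate chlorophyll-a (mg/m^3) from a blue/green
    remote-sensing reflectance ratio via a 4th-order
    polynomial in log10 space (the OC algorithm form)."""
    r = math.log10(rrs_blue / rrs_green)
    log_chl = sum(a * r**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

# A high blue/green ratio (clear water) yields low chlorophyll;
# a ratio near 1 (greener water) yields a higher estimate.
print(chlorophyll_oc(0.010, 0.005))
print(chlorophyll_oc(0.005, 0.005))
```

An on-demand system like HICO IPS would run this kind of per-pixel retrieval over the whole scene in the cloud and hand back a map layer rather than a number.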
The system was developed using a combination of commercial and open-source software, with core image processing performed using the recently released ENVI Services Engine. No specialized software is required to run HICO IPS. You just need an internet connection and a web browser to run the application (we suggest using Google Chrome).
Beyond HICO, and beyond the coastal ocean, the system can be configured for any number of different remote sensing instruments and applications, thus providing an adaptable cloud computing framework for rapidly implementing new algorithms and applications, as well as making these applications and their output readily available to the global user community.
However, this is but one application. Significantly greater work is needed throughout the remote sensing community to leverage these and other exciting new tools and processing capabilities. To participate in a discussion of how the future of geospatial image processing is evolving, and see a presentation of the HICO IPS, join us at the upcoming ENVI Analytics Symposium in Boulder, CO, August 25-26.
With this broader context in mind, and as a second follow-up, we ask an important question when envisioning this future: how are we, as an industry and as a research community, going to get from here to there?
The currently expanding diversity and volume of remote sensing data presents particular challenges for aggregating data relevant to specific research applications, developing analysis tools that can be extended to a variety of sensors, efficiently implementing data processing across a distributed storage network, and delivering value-added products to a broad range of stakeholders.
Based on lessons learned from developing the HICO IPS, here we identify three important requirements needed to meet these challenges:
· Data and application interoperability need to continue evolving. This need speaks to the use of broadly accessible data formats, expansion of software binding libraries, and development of cross-platform applications.
· Improved mechanisms are needed for transforming research achievements into functional software applications. Greater impact can be achieved, larger audiences reached, and application opportunities significantly enhanced if more investment is made in remote sensing technology transfer.
· Robust tools are required for decision support and information delivery. This requirement necessitates development of intuitive visualization and user interface tools that will assist users in understanding image analysis output products as well as contribute to more informed decision making.
These developments will not happen overnight, but the pace of the industry indicates that such transformations are already in process and that geospatial image processing will continue to evolve at a rapid rate. We encourage you to participate.
About HySpeed Computing: Our mission is to provide the most effective analysis tools for deriving and delivering information from geospatial imagery. Visit us at hyspeedcomputing.com.
To access the HICO Image Processing System: http://hyspeedgeo.com/HICO/
Author: Adam O'Connor
One of the most exciting new software technologies we will release this summer in concert with ENVI 5.3 is the ability to generate 3D point-clouds using stereo optical imagery from some of the more popular commercial spaceborne platforms (e.g. WorldView-3 and Pléiades-1). This functionality will be a new component of the upcoming ENVI Photogrammetry Module that will produce LAS format output files that can be used with ENVI's wide variety of point-cloud visualization & processing tools including raster DSM creation, 3D feature extraction and line-of-sight analysis.
This software technology generates 3D point-clouds from spaceborne EO/IR imaging platforms via multi-ray photogrammetry techniques that involve feature detector pixel correlation and dense image matching. This will allow users to take advantage of existing large commercial imagery archives to generate 3D point-clouds and create terrain products in regions where flying airborne LiDAR collections are not feasible or are cost prohibitive.
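The geometry underlying dense image matching is, at its core, triangulation: once a pixel correspondence is found between the two views, the disparity maps directly to depth. The following is a pure-Python sketch of that idealized rectified-pinhole case with hypothetical camera values – actual spaceborne photogrammetry uses rigorous sensor models (e.g. RPCs) rather than this simplification:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Idealized rectified-stereo triangulation: Z = f * B / d.
    focal_px     -- focal length in pixels
    baseline_m   -- separation between the two camera stations (m)
    disparity_px -- pixel offset of the matched feature"""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def point_from_pixel(x_px, y_px, cx, cy, focal_px, depth_m):
    """Back-project a pixel to a 3D point in the camera frame,
    given the principal point (cx, cy) and recovered depth."""
    X = (x_px - cx) * depth_m / focal_px
    Y = (y_px - cy) * depth_m / focal_px
    return (X, Y, depth_m)

# Hypothetical numbers: 1000 px focal length, 0.5 m baseline,
# a matched feature with 10 px disparity sits 50 m away.
z = depth_from_disparity(1000.0, 0.5, 10.0)
print(z)                                      # 50.0
print(point_from_pixel(600, 400, 500, 400, 1000.0, z))
```

Repeating this for every matched pixel is what turns a dense disparity map into the 3D point cloud that gets written out as LAS.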
One of the testing datasets we are utilizing as we develop this software technology is the "Pléiades TRISTEREO" sample dataset available on the Airbus Defence & Space - Sample Imagery website. This tri-stereo dataset covers Melbourne, Australia, and below is a screenshot of an image subset showing the Flemington Racecourse:
Author: Kevin Wells
I’m really looking forward to the upcoming GEOINT 2015 Symposium, which is taking place next week on June 22-25, at the convention center in DC. Each year, USGIF manages to assemble a distinguished list of keynote speakers, relevant panel discussions and breakout sessions that provide attendees with a unique opportunity to learn from leading experts, share best practices, and uncover the latest developments from government, military and private-sector leaders.
One of the most significant technological developments in our industry over the last year has been the satellite launch and subsequent collection of DigitalGlobe’s WorldView-3 data. With access to 31 cm resolution imagery and SWIR coverage, this satellite is providing more answers from imagery to our community. As a result, Exelis personnel have been receiving frequent inquiries from our customers as to how this new imagery can be used to extract information.
Due to this interest, we decided that GEOINT would be a great opportunity for us to conduct a workshop where we provide attendees an opportunity to better understand the data being collected from WorldView-3, as well as how to use this data for extracting relevant intelligence. Working in collaboration with DigitalGlobe, Exelis Solutions Engineers have developed a session to provide the core skills needed for understanding how to create real-world products from SWIR information. Though focused on WorldView-3, the methodologies discussed will be applicable to other MSI/HSI data sources. This workshop is perfect for current literal image analysts or others with limited spectral data experience who are interested in learning more about using SWIR data. Scenarios used for the workshop will address Urban Mapping, Wildfire Mitigation, Flooding and Disaster Response.
If you are interested in registering for this workshop (or one of the many others that USGIF is organizing), check out the following link: http://geoint2015.com/agenda/EduTraining
Also be sure to stop by and say hello to us in the Harris booth #4059.
Author: Tracy Erwin
Situational Awareness for small UAVs (sUAV)
When I hear situational awareness (SA), I automatically think of Jagwire given that I have worked on this product from its inception. It is a geospatial data collection, management and dissemination system providing SA primarily for the warfighter and now reaching new markets.
As a participant at the AUVSI show in May 2015, a different form of SA system piqued my interest: sense-and-avoid (SAA), also referred to as detect-and-avoid (DAA). Aircraft can collect data providing situational awareness to various users, but the pilots of the aircraft themselves also need an understanding of their own situation to prevent collisions. This is where sense-and-avoid systems come into play.
Photo courtesy of SKYTECH
Just prior to the show, colleagues from another division of our company launched Symphony RangeVue, an airspace situational awareness tool designed for unmanned aerial system (UAS) operations in the U.S. It can be used as a sense-and-avoid addition to UAS ground control stations having flexible geo-fencing tools to alert operators when a UAV approaches airspace boundaries or when other aircraft are in the vicinity.
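The geo-fencing alert described above reduces, at its core, to a point-in-polygon test on the vehicle's reported position. Here is a minimal sketch using the classic ray-casting algorithm; the fence polygon and positions are hypothetical, and a fielded tool like Symphony RangeVue would add buffer distances, altitude limits, and live traffic feeds on top of a check like this:

```python
def inside_geofence(lon, lat, fence):
    """Ray-casting point-in-polygon test.
    fence is a list of (lon, lat) vertices."""
    inside = False
    j = len(fence) - 1
    for i in range(len(fence)):
        xi, yi = fence[i]
        xj, yj = fence[j]
        # Does a horizontal ray from (lon, lat) cross edge j->i?
        if (yi > lat) != (yj > lat):
            x_cross = xi + (lat - yi) * (xj - xi) / (yj - yi)
            if lon < x_cross:
                inside = not inside
        j = i
    return inside

# Hypothetical rectangular operating area (lon, lat vertices)
fence = [(-77.05, 38.85), (-77.00, 38.85),
         (-77.00, 38.90), (-77.05, 38.90)]

def check_position(lon, lat):
    """Return an operator alert if the UAV leaves the fence."""
    if not inside_geofence(lon, lat, fence):
        return "ALERT: outside approved airspace"
    return "OK"

print(check_position(-77.02, 38.87))   # inside the fence
print(check_position(-76.90, 38.87))   # outside -> alert
```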
What is SAA/DAA?
The name is self-describing: sense or detect objects around the aircraft, such as other aircraft and natural threats like birds, and avoid them to prevent airborne collisions. UAVs need to be able to react to each other and to their surroundings. The technologies to create these SAA systems exist, and systems are being developed and tested for the large unmanned vehicles used by the Air Force and Army. However, integration within the size, weight and power (SWaP) constraints of small unmanned aerial vehicles (sUAVs) is proving to be challenging. The lower altitudes and speeds at which sUAVs operate, compared with traditional aircraft and larger UAVs, are contributing factors too.
Why is it important?
Besides the obvious concern of safety, in order to have commercial UAVs fully integrated into the National Airspace System (NAS), the FAA must certify a sense-and-avoid system that provides airborne collision avoidance capability. This year, the FAA has made it easier for the first small commercial UAVs to share the NAS (Section 333 of the FAA Modernization and Reform Act). Furthermore, it has established an interim policy to speed up airspace authorizations for certain commercial unmanned aircraft operators who obtain Section 333 exemptions.
Photo courtesy of 3DR - FAA granted Section 333 authorization for commercial use of 3DR drones
Section 333 exemption holders automatically receive a “blanket” 200-foot Certificate of Authorization (COA) and must abide by a set of flight restrictions, such as remaining within visual line of sight (VLOS). The details can be found on the FAA website. This is a big step forward for this rapidly evolving technology and industry. You might be asking, “Does this mean Amazon will be delivering our packages via a quadcopter?” Not yet, and not likely anytime soon.
The technologies and challenges
In order to see aircraft as well as wildlife, many sense-and-avoid systems use a mix of sensors: cameras detecting both visible and infrared light, radar, and LiDAR, along with the traffic collision avoidance system (TCAS) and automatic dependent surveillance-broadcast (ADS-B). However, not all aircraft are equipped with TCAS and ADS-B, and these systems only detect other aircraft carrying a corresponding transceiver or transponder.
Manual sense and avoid relays the information to the UAV pilot. The sense-and-avoid technology has to be small and lightweight for sUAVs, which poses a challenge. Payload capacity, in terms of size and weight, needs consideration when using traditional methods such as radar for the detection of other aircraft. In addition, a small UAV typically travels more slowly than other aircraft, so identifying aircraft approaching at faster speeds requires a much quicker reaction time from the sense-and-avoid technology in order to avoid collisions. Further complexities are the power consumption and battery life of UAVs, as well as the need for the technology to operate in different weather conditions.
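The reaction-time problem can be made concrete with a back-of-the-envelope calculation: in the worst (head-on) case, the minimum detection range is the closure speed multiplied by the total time needed to sense, decide, and maneuver. A simple sketch with hypothetical numbers:

```python
def min_detection_range(uav_speed, intruder_speed,
                        sense_s, decide_s, maneuver_s):
    """Worst-case (head-on) minimum detection range in meters.
    Speeds in m/s, times in seconds."""
    closure = uav_speed + intruder_speed          # head-on closing speed
    total_time = sense_s + decide_s + maneuver_s  # time to get clear
    return closure * total_time

# Hypothetical case: a 15 m/s sUAV meets a 60 m/s light aircraft,
# with 2 s to sense, 1 s to decide, and 5 s to complete the maneuver.
r = min_detection_range(15.0, 60.0, 2.0, 1.0, 5.0)
print(r)  # 600.0 m
```

Even with these generous assumptions, the required range is hundreds of meters – far beyond what small ultrasonic or short-range camera systems can see, which is exactly why SWaP-constrained SAA for sUAVs is hard.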
The UAV industry is rapidly evolving on many fronts. One example is DJI's recent announcement of its first guidance system, a sense-and-avoid hardware addition. A combination of ultrasonic sensors and stereo cameras allows the UAV to detect objects up to 65 feet away and keep the aircraft at a preconfigured distance. The Guidance system works with DJI's new Matrice 100 UAV.
Photo courtesy of DJI: DJI’s Guidance and the Matrice 100 UAV with the Guidance installed
Down the road…
This exciting new industry is still young, with incredible growth and possibility ahead. Once the hurdles of safety and reliability are overcome, the door will open for UAVs to share the airspace and be used for a far wider range of purposes than monitoring crop fields, utilities and the like.
Author: Kip Hudson
The tactical edge of the GEOINT Enterprise is arguably its most technically and operationally complicated component. GEOINT data publishers and subscribers on the tactical edge are challenged to meet key performance parameters, layer sensor tasking requests, access and disseminate mission data (real-time and historical) across the spectrum of users, support specialized analysis instrumentation systems, provide multi-level security for partner nations, and provide local or remote site cross-modality metadata syndication. Tactical locations that produce GEOINT (through platforms like Group III Unmanned Systems) are also challenged with coordinating and conducting missions in real-time.
The above diagram depicts three separate data domains (Unclassified, Secret, and Coalition Secret) used across conventional and non-conventional operations center environments. For the most part, tactical operations tend to have the richest and most relevant access and utility for GEOINT on the Secret domain whether you’re dismounted, in an aircraft, or within an Operations Center coordinating and collaborating on real-time or historical content.
Some of the things that make the forward GEOINT-enabled Operations Center more cognitively demanding than other locations are that there is no automated multi-level security guard and there are a large number of client interfaces that require different data transformations. The combined layered and specialized systems within a forward operating center supporting tactical GEOINT increase the cognitive load on mission coordinators, communication support agencies, and analysts. This makes it imperative to automate technical support to the greatest extent possible and to design component systems to be plug and play.
Interestingly, out of the entire above ecosystem of interrelated components, I’ve found only one device (Jagwire) that truly enables GEOINT to support operations across the spectrum of user interface needs and transmission system requirements. I’ve highlighted one physical instance of Jagwire in the above diagram at all the locations where it would be used and supported.
My first introduction to and use of Jagwire occurred on my deployment to Operation Enduring Freedom (OEF) in 2010-2011, courtesy of a gentleman and dedicated civil servant from Program Management Army Unmanned Aerial Systems (PM UAS) named Mr. Phil Owen. It was introduced under two separate names: Intelligence Surveillance Reconnaissance Information System Version Three (ISRIS V.3) and Mobile Data Archival and Retrieval (MDAR). Nomenclature variances aside, the streamlined application of Jagwire in an incredibly demanding operational environment (with temperatures at times upwards of 120 degrees Fahrenheit) saved lives and defined the next decade of Marine Air Ground Task Force (MAGTF) mission-enabling capabilities.
Jagwire functioned as a local multi-purpose tool that continued to be both scalable and flexible in supporting my deployed mission needs. Over time, basic requests of the system incrementally became more demanding, resulting in a massively enabling architecture that supported: variable bit rate data transcoding; Common Operational Picture (COP) XML data transformations (DDS, OGC, CoT); real-time GEOINT analytics (which have since evolved to include disruptive design features such as feature extraction, small craft identification, and relative water depth analysis); out-of-band digital metadata fusion with analog metadata; analog-to-digital data conversion; and user-based account restrictions.
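Of the COP formats mentioned, Cursor-on-Target (CoT) is the simplest to illustrate: a small XML event carrying a type, timestamps, and a point. Here is a minimal sketch of generating one; the UID, type code, and position are hypothetical, and a fielded system would follow the full CoT schema and type taxonomy:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def make_cot_event(uid, lat, lon, hae, cot_type="a-f-G", stale_s=60):
    """Build a minimal Cursor-on-Target event as an XML string."""
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,
        "type": cot_type,                  # e.g. atoms: friendly ground
        "time": now.strftime(fmt),
        "start": now.strftime(fmt),
        "stale": (now + timedelta(seconds=stale_s)).strftime(fmt),
        "how": "m-g",                      # machine-generated, GPS-derived
    })
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": f"{hae:.1f}",               # height above ellipsoid (m)
        "ce": "10.0", "le": "10.0",        # circular / linear error (m)
    })
    return ET.tostring(event, encoding="unicode")

# Hypothetical track published from a UAS sensor feed
print(make_cot_event("UAS-42.track.1", 32.1234, 64.5678, 1200.0))
```

Transforming one sensor feed into several such COP dialects at once is exactly the kind of background plumbing a system like Jagwire takes off the operator's plate.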
One of the most important enabling operational features gained by using Jagwire is that it uses only open standards and interfaces for all Master Metadata Management (MDM) functions (necessary to support Information-as-a-Service information governance). As a background enabling capability, Jagwire feeds a myriad of local and distributed instrumentation systems like FalconView, the Tactical Exploitation Group (TEG), and the Intelligence Operations Workstation (IOW) over Tactical Data Networks like Link-16 and ANW2.
I was also able to stream real-time transcoded video via Jagwire to my cousin, an enlisted Communications Marine northeast of me who was installing the Marine Corps’ first operational instance of ANW2 Harris radios in the vicinity of Musa Qala District. Later on I found out that the data he was able to get from organic MAGTF ISR via Jagwire saved the lives of Marines and Afghan National Army members by providing needed context to their decision-making process.
Out of my experiences I learned a few things, not the least of which is that simplicity scales. Jagwire is elegantly simple and incredibly enabling. The application of one Jagwire enabled me to increase my Squadron’s C4ISR data sharing from GEOINT sensors across my battle space by 400 percent, directly supporting all of my Mission Essential Task Lists.