28 May 2015

3-D point visualization techniques in IDL

Author: Atle Borsholm

I have recently investigated how to best visualize 3-D point data in IDL. There are many options available, with distinct pros and cons. First, let me describe the data I want to display. The most basic point data would be a 3xN array of [x,y,z] locations, one for each point. In some cases, there may also be a single grayscale value associated with each [x,y,z] triplet. A slight variation on this is that you may have a color [R,G,B] for each point [x,y,z]. Finally, in some cases, you may also have a surface connectivity array describing how to connect the N points into a surface representation. I will focus on the cases where there is no known surface connectivity. I am using the Stanford bunny 3-D test dataset, which comes with connectivity information, but I am ignoring it to more closely mimic real datasets.
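If you don't have a point cloud handy, a stand-in is easy to fabricate. Here is a minimal sketch that builds a hypothetical 3xN array of points plus a grayscale value per point, shaped like the bunny data used in the examples below (the variable names pts and gray match those examples; the data itself is random, not the bunny):

  ; Hypothetical stand-in data: random points in a unit cube, shaded by height
  n = 35947
  pts = dblarr(3, n)
  pts[0,*] = randomu(seed, n) - 0.5   ; x
  pts[1,*] = randomu(seed, n) - 0.5   ; y
  pts[2,*] = randomu(seed, n) - 0.5   ; z
  gray = bytscl(pts[1,*])             ; BYTE Array[1, n], brightness from height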

1. ScatterPlot3D. This graphics function offers many keywords and fine control over how the points are rendered, and it has the added benefit of drawing coordinate axes, in case those are important.

  help, pts, gray
  PTS             DOUBLE    = Array[3, 35947]
  GRAY            BYTE      = Array[1, 35947]

  s = ScatterPlot3D(pts[0,*], -pts[2,*], pts[1,*], magnitude=gray, aspect_ratio=1, rgb_table=7)

This visualization can be slow for large datasets, because each point is drawn as a circle. If you zoom in you can see the individual circles.

2. iPlot. The iPlot function has keywords that allow for point visualization. It accepts a VERT_COLORS keyword, which needs values scaled to 0-255, unlike ScatterPlot3D, which accepts any range; so, in this case, I apply a color table to generate RGB values. Using the /SCATTER keyword has the advantage of drawing dots as symbols, so it is very fast and can handle larger datasets while remaining responsive.

  loadct, 1
  tvlct, red, green, blue, /get
  iplot, pts[0,*], -pts[2,*], pts[1,*], vert_color=bytscl(gray), rgb_table=[[red],[green],[blue]], aspect_ratio=1, /scatter

 

The downside here is obviously that the points are shown with no real radius; that is, as you zoom in, the points stay the same size on screen. You can set the on-screen point size using the THICK keyword.
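For example, here is the same call with larger dots; thick=3 is an arbitrary value chosen for illustration:

  iplot, pts[0,*], -pts[2,*], pts[1,*], vert_color=bytscl(gray), rgb_table=[[red],[green],[blue]], aspect_ratio=1, /scatter, thick=3

In many cases, though, the points represent a real radius that should persist as you zoom in and out. For that, you can use the ORB() function as a symbol to make a better filled-in visualization: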

  sym = orb(density=0.5, radius=1.0)
  iplot, pts[0,*], -pts[2,*], pts[1,*], vert_color=bytscl(gray), rgb_table=[[red],[green],[blue]], aspect_ratio=1, sym_object=sym, linestyle=6

 

3. XOBJVIEW. This is the most lightweight of the three methods. It does not add axes or other convenient features like ScatterPlot3D and iPlot do, but it gives the fastest, most responsive visualization and can therefore handle the largest datasets. The first example shows a pure point visualization using IDLgrPolygon with STYLE=0.

  pal = IDLgrPalette()
  pal.LoadCt, 1
  xobjview, IDLgrPolygon(pts, style=0, thick=2, vert_color=bytscl(gray), palette=pal)

 

Finally, here is an example of using ORB objects to represent the points with an IDLgrPolyline object within XOBJVIEW.

  pal = IDLgrPalette()
  pal.LoadCt, 1
  sym = IDLgrSymbol(orb(radius=0.005, density=0.25))
  xobjview, IDLgrPolyline(pts, linestyle=6, vert_color=bytscl(gray), symbol=sym, palette=pal)

 

 


21 May 2015

An N-body College Reunion

Author: Jim Pendleton

A little over 30 years ago, the professor in my sophomore quantum mechanics class gave us a free range assignment. We could use a computer to solve any sort of physics problem. It need not be related to quantum in any way, which was just as well since I've never grokked the wave function in fullness.

This was back in the day when students training for careers in basic sciences were expected to know how to write their own analysis applications, and not mindlessly press buttons on spreadsheets or web pages. In no small part because the latter hadn't been invented yet.

I chose a problem in astrophysics. At the time there was academic research being performed on simplified galactic collisions, and the plots in the scientific journals were fascinating to me. N-body problems were well beyond my grasp, unless N was one or two.  

I had no access to supercomputers, and my classical mechanics training was only at the freshman level.  

But I liked pretty pictures.

It was challenging at the time to debug Fortran PLOT10 programs: one needed to plan for extra wait time in line at the campus computing center for the output, and it took an hour for a Calcomp plotter to produce the (garbage) graph the (garbage) code was generating.

Unfortunately, my attempt burned down, fell over, and sank into the swamp. I vaguely recall I received an orange slice and a participant’s ribbon for the effort. Thanks for not failing me, Prof. Dutta.

But a decade later, and with access to IDL, I reattempted the feat during a competition to produce demos for IDL 5.0.

The results were more satisfying visually, if not more realistic physically.


I've left the code below more or less intact. It's a main-level program. Dig the DEVICE, DECOMPOSED=0 and the parentheses instead of square brackets for array indexing – 100% OG, and I don't mean Object Graphics.

Essentially, two point masses represent galactic centers, and 1000 massless test points ("stars") are given initial circular paths around those centers. Both galactic centers then move relative to their center of mass, with initial conditions intended to provoke a visually appealing, if physically bogus, result.

I wrote a number of variants on this theme, color-coded the stars by speed relative to the center of mass, added other “galactic” masses, etc. If you find this interesting, try varying initial parameters such as the relative masses and velocities.

; initialization section
G = 4.4999e-12 ; pc^3 y^-2 Msol^-1
epsilon = 1.e5 ; yr
snapshot = 16.e6 ; yr
nsnapshots = 16
maxradius = 15d3 ; pc
initialsep = 2.*maxradius
Ngalaxies = 2
GalaxyColors = [150, 200]
Nsuns = 1000
vx = dblarr(Ngalaxies*(Nsuns + 1)) & vy = vx & vz = vx & x = vx & y = vx
z = vx & m = vx & rdist = vx & thetadist = vx & ax = vx & ay = vx & az = vx
m(0) = 10.d9 ; Msol
x(0) = initialsep/2.*cos(!pi/4.)
y(0) = initialsep/2.*sin(!pi/4.)
m(1) = 9.d9 ; Msol
x(1) = -x(0)
y(1) = -y(0)
speed = 2*sqrt(G*m(1)/initialsep)
vx(0) = -speed*sin(!pi/4.)/2.
vy(0) = speed*cos(!pi/4.)/2.
vx(1) = -vx(0)*sqrt(m(0)/m(1))/3.
vy(1) = -vy(0)*sqrt(m(0)/m(1))/3.
; execution section
device, decomposed = 0
iseed = 138987
; random starting radius and angle for each star
for i = Ngalaxies, Ngalaxies*(Nsuns + 1) - 1 do begin
   rdist(i) = randomu(iseed)*maxradius
   thetadist(i) = randomu(iseed)*2*!pi
   m(i) = 1.
endfor
; place each star on a circular orbit around its galaxy center
for j = 0, ngalaxies - 1 do begin
   for i = ngalaxies + j*nsuns, ngalaxies + (j + 1)*nsuns - 1 do begin
      x(i) = rdist(i)*cos(thetadist(i)) + x(j)
      y(i) = rdist(i)*sin(thetadist(i)) + y(j)
      z(i) = 0.
      speed = sqrt(G*m(j)/max([Abs(rdist(i)), 1.e-30]))
      vx(i) = -speed*sin(thetadist(i)) + vx(j)
      vy(i) = speed*cos(thetadist(i)) + vy(j)
      vz(i) = 0.
   endfor
endfor
loadct, 15
!ignore = -1
!x.style = 2
!y.style = 2
window, 1, xsize=700, ysize=700, xpos = 0, ypos = 0
wset, 1
for t = -epsilon, snapshot*(nsnapshots - 1), epsilon do begin
   ; accumulate the gravitational acceleration from each galaxy center
   ax(*) = 0 & ay = ax & az = ax
   for j = 0, NGalaxies - 1 do begin
      r = sqrt((x - x(j))^2 + (y - y(j))^2 + (z - z(j))^2)
      r3 = r^3 > 1.e-20
      Gprod = -G*m(j)/r3
      ax = Gprod*(x - x(j)) + ax
      ay = Gprod*(y - y(j)) + ay
      az = Gprod*(z - z(j)) + az
   endfor
   ; update velocities, then positions with the new velocities
   vx = vx + epsilon*ax
   vy = vy + epsilon*ay
   vz = vz + epsilon*az
   x = x + epsilon*vx
   y = y + epsilon*vy
   z = z + epsilon*vz
   if (t eq -epsilon) then begin
      rs = 1.5*initialsep
      plot, [rs, -rs], [rs, -rs], color = 0
      closedcircles, sym = 1
      for i = 0, NGalaxies - 1 do begin
         oplot, [x(i), x(i)], [y(i), y(i)], color = galaxycolors(i)
      endfor
      !psym = 3
      for i = 0, NGalaxies - 1 do begin
         indices = indgen(nsuns) + (i*nsuns) + Ngalaxies
         oplot, x(indices), y(indices), color = galaxycolors(i)
      endfor
      oldx = x
      oldy = y
   endif else begin
      ;
      ; Erase the previous frame
      ;
      closedcircles, sym = 1
      oplot, oldx(0:NGalaxies-1), oldy(0:NGalaxies-1), color = 0
      !psym = 3
      oplot, oldx, oldy, color = 0
      ;
      ; Draw the galaxy centers
      ;
      closedcircles, sym = 1
      for i = 0, NGalaxies - 1 do begin
         oplot, [x(i), x(i)], [y(i), y(i)], color = galaxycolors(i)
      endfor
      ;
      ; Draw the stars
      ;
      !psym = 3
      for i = 0, NGalaxies - 1 do begin
         indices = indgen(nsuns) + (i*nsuns) + Ngalaxies
         oplot, x(indices), y(indices), color = galaxycolors(i)
      endfor
      ;
      ; Save the coordinates
      ;
      oldx = x
      oldy = y
      empty
   endelse
endfor
end

You will also need the following support routine.

Pro ClosedCircles, Symbol_Radius = Symbol_Radius, Color = Color
On_Error, 2
Ang = (360./47.*Findgen(48))/!RaDeg
X = Cos(Ang)
Y = Sin(Ang)
If (N_Elements(Symbol_Radius) ne 0) then Begin
	X = X * Symbol_Radius
	Y = Y * Symbol_Radius
EndIf
If (N_Elements(Color) eq 0) then Begin
	UserSym, X, Y, /Fill
EndIf Else Begin
	UserSym, X, Y, /Fill, Color = Color
EndElse
!P.PSym = 8
Return
End


14 May 2015

User-Defined ENVITask That Masks Some Output Pixels

Author: Brian Griglak

My last Data Point blog post, User-Defined ENVITasks in ENVI 5.2 SP1, introduced how you can use the new ENVITaskFromProcedure class to create your own custom ENVITasks that wrap any IDL procedure you have. Today I'm going to explore a concrete example that combines a few more advanced topics:

  • Raster Statistics
  • Threshold ROI Masking
  • Saving the results with correct pixels masked out

This example is motivated by a recent Tech Support question I was asked to assist with, but I am fleshing it out to be a little more useful to you, the reader. The question was about how to use ENVIROIMaskRaster() to mask pixels out of a raster and then export the results so that the masked pixels persist on disk. I could focus on just that part, but I thought it would be useful to also show how to build the ROI using the ENVI API, instead of loading one from a file.

The intent of this task is to take an input raster, evaluate its histogram, and then mask out any pixels in the top or bottom N% of the histogram. This is similar to what our linear percent stretch modes do, but in that case pixels in the top and bottom of the histogram are clamped to a 0 or 255 value in their respective band, with the values in between linearly scaled across that range. This task instead makes those extreme pixels invalid, so they are transparent when visualized and ignored by our analytics.

You might be worried that we need to iterate over all the pixels in the raster to build the histograms, but fear not: the ENVIRasterStatistics() function, introduced in ENVI 5.2, provides an optimized implementation of the histogram calculation. We can call it with the HISTOGRAMS and HISTOGRAM_BINSIZE keywords to get the histogram for each band. By default this function returns a 256-bin histogram, but for this task we will set the bin size to 1 so that we get (max - min + 1) bins for each band.

Once we have the histogram, we pass it into Total() with the /CUMULATIVE keyword. This will result in an array of the same size as the histogram, where each element is the sum of all the input elements with an index less than or equal to the output index. The last element of each output array will be equal to the number of pixels in the raster. So if we divide by the last element (or number of pixels), then we will get the cumulative distribution function (CDF) of the pixels for each band.

We can then use a WHERE test to find out how many bins have a CDF value less than N%, or greater than (1 – N%). This will tell us what the range of pixel values will be if we want to eliminate the top and bottom N% of the pixels. We use these values to create a threshold ROI, using the ENVIROI::AddThreshold procedure.
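As a small standalone sketch of that logic, with a toy histogram and a hypothetical 5% threshold (not data from any real raster):

  hist = [2d, 5, 20, 40, 20, 8, 5]   ; toy histogram counts, bin size 1
  cdf = Total(hist, /CUMULATIVE)     ; running sum; last element is the pixel count
  cdf /= cdf[-1]                     ; normalize so the last element is 1.0
  percent = 0.05
  lowInd = Where(cdf lt percent, lowCount)            ; bins in the bottom N%
  highInd = Where(cdf gt (1.0 - percent), highCount)  ; bins in the top N%
  Print, lowCount, highCount         ; 1 and 1 for these toy counts

The lowCount and highCount values are then used to offset the band minimum and maximum, as the final code below shows.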

We have two options for how to use ROIs to mask the raster: we can create one ROI and add a separate threshold to it for each band, or we can create multiple ROIs that each have one band threshold. The results will be the same when we use the ENVIROIMaskRaster() virtual raster function: only pixels that are outside of every threshold are masked out. That is a logical AND operation, but I want a logical OR, where every pixel that is outside of any threshold is masked out. To do this, I need to create a separate ROI for each band threshold and apply them consecutively with ENVIROIMaskRaster(). The nice benefit of virtual raster functions is that they don't commit anything to disk; they build up a filter chain of raster operations that is executed on the pixels on a per-tile basis, so there is very little performance overhead in doing this.

Once we have the masked raster, we want to save it to disk so that the masked pixels stay masked. We don't want to waste disk space by writing out an array of Boolean values, so the "data ignore value" is used to mark the masked pixels as invalid. Unfortunately there isn't a one-line solution to this in ENVI 5.2 SP1, so we have to "burn in" the data ignore value for the masked pixels. In ENVI 5.3 we have added this feature with a DATA_IGNORE_VALUE keyword on ENVIRaster::Export, but in the meantime we need to create a new ENVIRaster object and explicitly change the masked pixel values so that the masking persists. So we use the ENVIRaster() factory function with the INHERITS_FROM keyword to copy over all the metadata and the spatial reference object from our input raster. We also supply the URI that this output raster will be written to, and specify the DATA_IGNORE_VALUE we want to use, so that the ENVI header file will contain that metadata field and properly mask out the pixels for us.

The ENVIRasterIterator class makes the pixel processing a lot simpler. We first need to get a raster iterator, via the ENVIRaster::CreateTileIterator() function. We'll omit the keywords, since we want to process the whole raster and will let it decide the optimal tile size to use. There are a few ways to use the iterator, and while I'm usually partial to foreach loops over for loops, we can't use the foreach syntax this time because we need the PIXEL_STATE output from the ENVIRasterIterator::Next() function. Once we have the PIXEL_STATE array, we use WHERE to find all the non-zero values (pixels that either already have the data ignore value or are masked out), and then we replace all the corresponding pixels with the data ignore value. We then use the ENVIRaster::SetTile procedure to copy the pixels to the output raster; it takes the current raster iterator object so that it knows the line/sample/band extents of the pixel array and commits it to disk correctly. Once we've finished iterating over the raster and copying the masked pixels to the output raster, all we need to do is call ENVIRaster::Save on the output raster to finish committing the pixels to disk and to generate the header file for us.

Here is the final version of my procedure, which uses keywords for the input raster, the threshold percentage, the data ignore value, and the output filename to use:

pro maskExtremePixels, INPUT_RASTER=inputRaster, THRESHOLD_PERCENT=percent, $
                       DATA_IGNORE_VALUE=dataIgnoreValue, OUTPUT_FILENAME=outFilename
  compile_opt idl2
  ; calculate raster statistics and histograms
  stats = ENVIRasterStatistics(inputRaster, /HISTOGRAMS, HISTOGRAM_BINSIZE=1)
  ; create local variable to represent current virtual raster, starting with input raster
  currRaster = inputRaster
  ; iterate over list of histogram hashes
  foreach hist, stats['HISTOGRAMS'], band do begin
    ; calculate cumulative distribution function from histogram
    cdf = Total(hist['COUNTS'], /Cumulative)
    cdf /= cdf[-1]
    ; find CDF bins that are above/below threshold
    lowInd = Where(cdf lt percent, lowCount)
    highInd = Where(cdf gt (1.0 - percent), highCount)
    ; calculate digital number that corresponds to threshold bins
    minVal = stats['MIN', band] + lowCount
    maxVal = stats['MAX', band] - highCount
    ; create ROI to mask the input raster
    roi = ENVIRoi(NAME='Band Thresholds', COLOR='Blue')
    ; add threshold definition for current band to ROI
    roi.AddThreshold, inputRaster, band, MIN_VALUE=minVal, MAX_VALUE=maxVal
    ; apply ROI to raster to mask out extreme pixels, and update current reference
    currRaster = ENVIRoiMaskRaster(currRaster, roi)
  endforeach
  ; create raster to commit masked pixels to disk
  outRaster = ENVIRaster(INHERITS_FROM=inputRaster, $
                         DATA_IGNORE_VALUE=dataIgnoreValue, $
                         URI=outFilename)
  ; create a raster iterator to access data
  iterator = currRaster.CreateTileIterator()
  for i = 1, iterator.nTiles do begin
    ; get the current tile's pixel and pixel value state arrays
    tile = iterator.Next(PIXEL_STATE=pixelState)
    ; identify any pixels which should be masked out
    maskInd = where(pixelState ne 0, maskCount)
    ; apply pixel mask if any pixels in this tile are masked
    if (maskCount gt 0) then begin
      tile[maskInd] = dataIgnoreValue
    endif
    ; push masked pixels into output raster
    outRaster.SetTile, tile, iterator
  endfor
  ; save output raster and header to disk
  outRaster.Save
end

 

Now we want to turn this into an ENVITask, so that we can run this on the desktop and in the cloud using ENVI Services Engine. All we need to do is build a task template file that describes the input parameters to this procedure:

{
  "name": "MaskExtremePixels",
  "baseClass": "ENVITaskFromProcedure",
  "routine": "MaskExtremePixels",
  "displayName": "Mask Extreme Pixels",
  "description": "This task processes the histogram of each band on the input ENVIRaster and builds pixel masks that exclude all pixels that are in the top or bottom percent of the histogram.",
  "version": "5.2.1",
  "parameters": [
    {
      "name": "INPUT_RASTER",
      "displayName": "Input Raster",
      "dataType": "ENVIRASTER",
      "direction": "input",
      "parameterType": "required",
      "description": "Specify the raster to mask."
    },
    {
      "name": "THRESHOLD_PERCENT",
      "displayName": "Threshold Percent",
      "dataType": "FLOAT",
      "direction": "input",
      "parameterType": "required",
      "description": "Percent of histogram to mask out from top and bottom."
    },
    {
      "name": "DATA_IGNORE_VALUE",
      "displayName": "Data Ignore Value",
      "dataType": "DOUBLE",
      "direction": "input",
      "parameterType": "required",
      "description": "Value to use for masked out pixels in OUTPUT_RASTER."
    },
    {
      "name": "OUTPUT_RASTER",
      "keyword": "OUTPUT_FILENAME",
      "displayName": "Output Raster",
      "dataType": "ENVIRASTER",
      "direction": "output",
      "parameterType": "required",
      "description": "Masked version of INPUT_RASTER."
    }
  ]
}

 

For the most part, this task template is pretty straightforward. We provide a name and description; then, since we are wrapping a procedure, we use the ENVITaskFromProcedure class for baseClass and the procedure name for routine. Then we have the list of parameters, with their name, displayName, dataType, direction, parameterType, and description attributes. The only anomaly is the OUTPUT_RASTER parameter. If you look at the procedure signature, you'll see the keyword OUTPUT_FILENAME, which is an input string telling the procedure where to write the raster on disk. When we wrap it as a task, we want to make life easier for task consumers, so we make the output parameter of type ENVIRaster and set its keyword to OUTPUT_FILENAME. When this task template is loaded, the task framework will automatically add an input parameter of type ENVIURI that wraps this same keyword, appending the "_URI" suffix to the original parameter's name. So the user can explicitly set the OUTPUT_RASTER_URI parameter value to control where the masked raster is written, or leave it unset, in which case the task framework will generate a temporary filename for that keyword.

Once you copy this task template to a file with the extension .task, and the procedure's PRO file, into the custom code folder (C:\Program Files\Exelis\ENVI52\custom_code on Windows) or one of the other documented locations, you can then use it. Here is an example showing how:

  e = envi()
  file = FilePath('qb_boulder_msi', ROOT_DIR=e.ROOT_DIR, SUBDIR='data')

  oRaster = e.OpenRaster(file)
  oTask = enviTask('maskExtremePixels')
  oTask.INPUT_RASTER = oRaster
  oTask.THRESHOLD_PERCENT = 0.02
  oTask.DATA_IGNORE_VALUE = -1
  oTask.Execute

  oView = e.getView()
  oLayer = oView.CreateLayer(oTask.OUTPUT_RASTER)
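If you want control over where the output raster is written, you can also set the auto-generated URI parameter before calling Execute; the path here is just a hypothetical example:

  oTask.OUTPUT_RASTER_URI = 'C:\masked_output.dat'

Otherwise, as described above, the task framework generates a temporary filename for you.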


7 May 2015

Unit Testing in IDL

Author: Dain Cilke

One of the key aspects of developing any software is testing. Making sure the software behaves as you expect for a variety of inputs is crucial for creating robust and maintainable code. Testing in IDL can be a little tricky and will often require you to implement your own system for maintaining tests. However, with a little framework, this task can be a lot less daunting.

Before we get into the framework, it would be beneficial to read up on Test Driven Development and the concepts behind writing good tests. There are some great tutorials and overviews on the web and they really help to emphasize the importance of testing. Now, onto the code!

Let's say we want to write a function CONVERT_TO_STRING. Since we are writing this function from scratch, let's define its contract: as input, it takes an IDL variable; it converts the variable to a string with a custom format and returns the result. Great, let's write some tests.

test_convert_to_string.pro:

pro test_convert_to_string_number
  compile_opt idl2
  on_error, 2

  input = 1
  expect = '1'
  result = convert_to_string(input)

  if result ne expect then begin
    message, 'Converting number failed.'
  endif
end

pro test_convert_to_string_null
  compile_opt idl2
  on_error, 2

  input = !NULL
  expect = '!NULL'
  result = convert_to_string(input)

  if result ne expect then begin
    message, 'Converting !NULL failed.'
  endif
end

pro test_convert_to_string_object
  compile_opt idl2
  on_error, 2

  input = hash('a',1,'b',2,'c',3)
  expect = '{"c":3,"a":1,"b":2}'
  result = convert_to_string(input)

  if result ne expect then begin
    message, 'Converting object failed.'
  endif
end

pro test_convert_to_string
  compile_opt idl2

  print
  print, 'Testing suite for convert_to_string()'
end

Before any code is written, we have our test cases. We can do this because we defined the contract of the function: we know exactly what it should take as input and what the output should be. Running these tests by hand can be a little tiresome, though, so let's set up some framework.

unit_test_runner.pro:

; path - path to test directory
pro unit_test_runner, path
  compile_opt idl2

  if ~file_test(path, /directory) then begin
    message, 'Input must be a path.'
  endif

  test_files = file_search(path, 'test*.pro')
  resolve_routine, file_basename(test_files, '.pro'), /compile_full_file
  tests = routine_info()

  print
  print, '--------------------------------------------------------------------------------'

  error_count = 0
  for i = 0, tests.length - 1 do begin
    catch, errorStatus
    if (errorStatus ne 0) then begin
      catch, /cancel
      print, 'ERROR: ', !ERROR_STATE.msg
      error_count++
      ; CONTINUE advances the loop index itself; an extra i++ here
      ; would skip the test after a failing one
      continue
    endif

    if (tests[i]).startsWith('TEST_') then begin
      call_procedure, tests[i]
    endif
  endfor

  print
  print, '--------------------------------------------------------------------------------'
  print

  if error_count gt 0 then begin
    print, 'Unit test failures on: ' + path
  endif else begin
    print, 'Unit tests pass.'
  endelse
end

Now all we have to do is give UNIT_TEST_RUNNER a path to our test files and it will run them!  Let’s get busy coding.

convert_to_string.pro:

; Input - IDL variable
; Output - custom string representation of the variable
function convert_to_string, var
  compile_opt idl2

  switch size(var, /TYPE) of
    0: begin
      return, '!NULL '
      break
    end
    11: begin
      if isa(var, 'HASH') or isa(var, 'DICTIONARY') or isa(var, 'ORDEREDHASH') then begin
        return, json_serialize(var)
      endif
      break
    end
    else: begin
      return, strtrim(var, 2)
    end
  endswitch
end

Now let’s run our test suite:

--------------------------------------------------------------------------------

Testing suite for convert_to_string()
% Compiled module: CONVERT_TO_STRING.
ERROR: TEST_CONVERT_TO_STRING_NULL: Converting !NULL failed.

--------------------------------------------------------------------------------

Unit test failures on: C:\convert_to_string

Oops! Our return value for the !NULL case isn't what we were expecting (good thing it's an easy fix).
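For completeness, the easy fix is to remove the stray trailing space in the type-0 branch of convert_to_string:

    0: begin
      return, '!NULL'   ; was '!NULL ', with a trailing space
      break
    end

Rerun UNIT_TEST_RUNNER on the test directory and the !NULL test should now pass.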

By developing software test-first, you are forced to think about the contract (inputs/outputs) of each function. By implementing unit tests against this contract, we can then use our new function with confidence. If everything has unit tests, any problems that arise in the code are easily identified and fixed.

Note: Make sure each code segment is saved to a named file (names are given before the code) and all files are on your IDL PATH.


30 Apr 2015

The Benefits of Using Include Files in IDL

Author: Benjamin Foreback

In IDL code, you can include the contents of another file by using the @ symbol followed by the filename. See Using Include Files in Routines for the syntax.

When this is used, the entire contents of the included file are literally dropped into place by the compiler. There are many uses for this, including error handling and logging, debugging, version tracking, and setting predefined variables. You can even use this technique to change COMPILE_OPT options for multiple routines at once.

Keep in mind that the content of an include file does not need to contain a full procedure or function, and it can be as short as a single line. Here are a few examples:

Error Handling and Logging

One of the most common uses of include files is for error handling; the file contains a catch block that handles and recovers from an error. Perhaps the error will also be logged to a file. Here is an example of a file called my_error_handler.pro, which can be placed at the beginning of any routine.

Catch, error
if error ne 0 then begin
  Catch, /CANCEL
  log_file = 'C:\algorithm_results\errors.txt'
  traceback = Scope_Traceback(/STRUCTURE)
  OpenW, unit, log_file, /GET_LUN
  Printf, unit, 'Error caught in ' + traceback[-1].ROUTINE + ', scope level ' + StrTrim(traceback[-1].LEVEL, 2)
  Printf, unit, !ERROR_STATE.msg
  Free_Lun, unit
  Message, /RESET
  result = Execute('Return', 1, 1)
  if ~result then begin
    result = Execute('Return, 0', 1, 1)
  endif
endif

The purpose of the Execute calls is to let this file be included in either a procedure or a function without causing compile errors over whether a value needs to be returned.

Debugging

An include file can be useful for adding debugging statements. For instance, if you want the name of a function printed as IDL enters a routine, you could put this into an include file and include it at the beginning of all of your routines:

traceback = Scope_Traceback(/STRUCTURE)
pro_or_func = traceback[-1].IS_FUNCTION ? 'function' : 'procedure'
Print, 'Entering ' + pro_or_func + ': ' + traceback[-1].ROUTINE

The great thing about this is that when you are finished debugging, you don't need to edit every routine you've written to remove the debug statements; instead you can simply update the included file to be a blank file, and nothing will be compiled into your final code.

Version Tracking

If you publish multiple versions of code, and there are times when the code needs to know the version in use, an include file provides an excellent way to store the version. This way, you only ever need to update the version in a single place.

Here is an include file called my_code_version.pro that defines a version:

version = 1.0
version_str = '1.0'

Here is an example of code that makes use of it:

@my_code_version
if version eq 1 then begin
  ; Code
endif else if version eq 1.1 then begin
  ; Different code
endif else if version eq 2 then begin
  ; Completely different code
endif else begin
  Message, 'Version ' + version_str + ' is not supported.'
endelse

Predefined Variables

An include file is a good way to set predefined variables, such as constants, which may be used in many different routines. This may be preferable to using a common block.

Keep in mind that you may not need to do this for all of your desired constants, because IDL already offers many physical constants through the !CONST system variable.
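For example, a hypothetical include file called my_constants.pro might collect the constants your routines share, pulling physical values from !CONST and defining project-specific ones alongside them:

c_light = !CONST.c          ; speed of light, m/s
G_grav = !CONST.G           ; gravitational constant
seconds_per_day = 86400d    ; a project-specific constant

Any routine that starts with @my_constants then has these variables defined locally.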

Setting COMPILE_OPT

I have also made use of include files to set the COMPILE_OPT options in many routines at once so that they are easy to change. For example, when writing, debugging, and testing code, I often like to see the printouts of which routines are compiled the first time they are called. However, when I am done writing code and start using it to process real data, these printouts can be a bit of an annoyance. Therefore, I want to be able to add "hidden" to all of my COMPILE_OPT statements. The easy way to do this is to write a one-line include file with the COMPILE_OPT statement, which can then be changed in one place. The routines that should behave this way use the include file instead of calling COMPILE_OPT directly.
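As a sketch, a hypothetical one-line include file called my_compile_opts.pro would contain:

compile_opt idl2, hidden

and each routine would start with @my_compile_opts instead of its own COMPILE_OPT statement. Adding or removing "hidden" in that one file then toggles the compilation printouts for every routine that includes it.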

Note

A small word of caution with include files: it might take slightly longer for code to compile, because IDL needs to look for the include file in the path. As long as I don't get carried away with including too many separate files, I have never worried about this, because the difference is extremely small. Additionally, if the final code is published in the form of savefiles rather than PRO code, it doesn't matter, because the savefiles are pre-compiled.

