Auxiliary material for "The effect of trees on preferential flow and soil infiltrability in an agroforestry parkland in semiarid Burkina Faso"

Bargués Tobella, A.,1* Reese, H.,2 Almaw, A.,1 Bayala, J.,3 Malmer, A.,1 Laudon, H.,1 Ilstedt, U.1

1 Swedish University of Agricultural Sciences (SLU), Department of Forest Ecology and Management, SE-90183 Umeå, Sweden.

2 Swedish University of Agricultural Sciences (SLU), Department of Forest Resource Management, SE-90183 Umeå, Sweden.

3 World Agroforestry Centre (ICRAF), West and Central Africa Regional Office, Sahel Node, BP E5118, Bamako, Mali.

Water Resources Research

Introduction

The auxiliary material consists of a text file ("text01.docx") with a detailed explanation of subsection 2.4 of the main text, "Image processing and classification", and two supplementary figures, Figure S1 ("2013WR015197-pS01.jpg") and Figure S2 ("2013WR015197-pS02.eps").

Files description

text01.docx: Detailed description of the methodology used to process and classify the pictures of the soil profiles.

2013WR015197-pS01.jpg: Figure S1. Picture of the custom-made camera supporting device used to take the pictures of the dyed profiles.

2013WR015197-pS02.eps: Figure S2. Raw runoff data for the eighteen rainfall simulations. Each row corresponds to a transect, whose name is shown. Three of the transects were located in small open areas (indicated by the letter S) and three in large open areas (indicated by the letter L). Each column corresponds to one of the three positions within a transect: the center of the open area (Open), under a single Shea tree (Tree), or under a Shea tree associated with a termite mound (Termite).

text01.docx

Image processing and classification

The pictures from the different soil sections were processed and classified using ERDAS Imagine v9.3 image processing software (Erdas Inc., Atlanta, Georgia, USA). In the first step, geometric distortion of the images was corrected so that each image in a sampling area would correspond to the same 50 × 50 cm area [Lillesand et al., 2008]. This was done by determining a correct pixel size for all images: the inner dimensions of the frame (500 mm to a side) were divided by the number of pixel columns and rows enclosed within the frame of each individual image. From this an average pixel size was determined (omitting four outlier images from the calculation), resulting in 0.206 × 0.206 mm pixels. For each sampling area, the image whose inner frame size was most similar to the average was chosen as a reference image. All other images in the sampling area were then geometrically corrected to the reference image (so-called "image-to-image rectification"; [Lillesand et al., 2008]). Eight control points, four at the frame's corners and four at the midpoints of its sides, were located both in the reference image and in the image being corrected. These established a mathematical relationship (a second-order polynomial) between the locations of the control points in the reference image and in the distorted image. Bilinear interpolation resampling, which assigns the average value of the four nearest pixels to each new pixel location in the corrected image, was used to transform the distorted image into a geometrically correct one. Finally, all images in the sampling area were clipped to the same area of 2428 × 2428 pixels (i.e., 500 × 500 mm) and used for further analysis.
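The rectification itself was carried out in ERDAS Imagine; purely as an illustration, the two computations described above (pixel-size averaging and the second-order polynomial fit to the eight control points) can be sketched in Python with NumPy. The function names and the specific outlier rule below are our own assumptions, not the authors' procedure:

```python
import numpy as np

MM_PER_SIDE = 500.0  # inner dimension of the frame, in mm

def average_pixel_size(frame_px_counts, outlier_sd=2.0):
    """Per-image pixel size (mm) = frame dimension / pixel count across the frame.
    Images whose pixel size deviates strongly from the mean are omitted,
    mimicking the removal of the four outlier images."""
    sizes = MM_PER_SIDE / np.asarray(frame_px_counts, float)
    keep = np.abs(sizes - sizes.mean()) <= outlier_sd * sizes.std()
    return sizes[keep].mean()

def fit_poly2(src_pts, dst_pts):
    """Least-squares fit of a second-order polynomial mapping control points
    in the distorted image (src) onto the reference image (dst).
    Six coefficients per axis, so >= 6 points are needed; eight were used."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef  # shape (6, 2): columns are the x' and y' coefficients

def apply_poly2(coef, pts):
    """Map points through the fitted polynomial."""
    pts = np.asarray(pts, float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return A @ coef
```

With frame widths of about 2428 pixels, `average_pixel_size` returns roughly 500/2428 ≈ 0.206 mm, consistent with the value reported above. The bilinear resampling of the full image grid is omitted here for brevity.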

In the second step, the clipped images were classified into dye-stained and non-stained classes using a supervised classification [Lillesand et al., 2008]. Classification is the process of categorizing image pixels into discrete thematic classes using statistical decision rules, and it is commonly used for creating maps from earth observation images [Lillesand et al., 2008]. Spectral signatures representing each of the information classes (dye-stained and non-stained) were digitized and used as examples, or "training areas", for a maximum-likelihood classification [Lillesand et al., 2008]. These training areas show the computer program what the different classes in the image look like spectrally; the classification algorithm then assigns all image pixels to classes according to their spectral signatures. Generally, five training areas per class were carefully selected by hand, though in some complex images extra training areas were defined. In addition, the training area signatures were inspected in spectral feature space. If the training area signatures for the two classes overlapped, they were removed and the image was classified again using a new set of training areas. The classification results were assessed visually, without an independent "ground truth" dataset. However, the strong contrast between the blue-dyed soil and the unaffected soil, reflected in the fact that the training area signatures did not overlap, resulted in easily differentiated classes with a high likelihood of being correct.
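The maximum-likelihood rule used by ERDAS Imagine models each class signature as a multivariate Gaussian (mean vector and covariance matrix estimated from the training pixels) and assigns each pixel to the class with the highest likelihood. A minimal sketch of that decision rule, with our own function names and equal prior probabilities assumed:

```python
import numpy as np

def train_signatures(training_pixels):
    """training_pixels: dict mapping class name -> (n, bands) array of
    sampled training pixels. Returns each class's spectral signature
    as a (mean vector, covariance matrix) pair."""
    return {name: (p.mean(axis=0), np.cov(p, rowvar=False))
            for name, p in ((n, np.asarray(a, float))
                            for n, a in training_pixels.items())}

def ml_classify(pixels, signatures):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    pixels = np.asarray(pixels, float)
    names = list(signatures)
    scores = []
    for name in names:
        mean, cov = signatures[name]
        d = pixels - mean
        inv = np.linalg.inv(cov)
        mahal = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distance^2
        logdet = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (mahal + logdet))       # log-likelihood up to a constant
    return np.array(names)[np.argmax(scores, axis=0)]
```

The overlap check described above corresponds to verifying that the two Gaussians are well separated in feature space; with the strong blue-versus-brown contrast of the dyed profiles, the decision boundary is unambiguous for almost all pixels.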

After producing binary images (dye-stained vs. non-stained) from the infiltration profile images, roots and black areas were manually digitized and classified into separate classes. In the last image processing step, we applied a generalization ("clump and sieve" in ERDAS Imagine) in which areas of fewer than 10 contiguous pixels were removed, to reduce so-called "salt-and-pepper" noise and isolated small groups of pixels [Lillesand et al., 2008].
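The "clump and sieve" generalization amounts to labeling connected components of the binary image and discarding those below the size threshold. A simplified stand-in for the ERDAS Imagine operation (8-connectivity assumed, as is the ERDAS default for clumping; the function name is ours):

```python
import numpy as np
from collections import deque

def sieve(binary, min_pixels=10, connectivity=8):
    """Remove connected foreground components smaller than min_pixels,
    mimicking a "clump and sieve" generalization on a binary image."""
    img = np.asarray(binary, bool)
    out = img.copy()
    seen = np.zeros_like(img)
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:  # 4-connectivity
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            if img[r, c] and not seen[r, c]:
                # Breadth-first search to collect one connected component
                comp = [(r, c)]
                seen[r, c] = True
                queue = deque(comp)
                while queue:
                    cr, cc = queue.popleft()
                    for dr, dc in nbrs:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            comp.append((nr, nc))
                            queue.append((nr, nc))
                if len(comp) < min_pixels:  # sieve out small clumps
                    for pr, pc in comp:
                        out[pr, pc] = False
    return out
```

Applied to the classified dye images, this keeps the connected stained areas while deleting isolated specks of fewer than 10 pixels.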

References

Lillesand, T. M., R. W. Kiefer, and J. W. Chipman (2008), Remote Sensing and Image Interpretation, xii + 756 pp., John Wiley & Sons Ltd, Chichester, UK.

2013WR015197-pS01.jpg

2013WR015197-pS02.eps
