Landscape Ecol DOI 10.1007/s10980-015-0262-9

RESEARCH ARTICLE

Volunteer-run cameras as distributed sensors for macrosystem mammal research William J. McShea . Tavis Forrester . Robert Costello . Zhihai He . Roland Kays

Received: 30 March 2015 / Accepted: 17 August 2015 © Springer Science+Business Media Dordrecht (outside the USA) 2015

Abstract
Context Variation in the abundance of animals affects a broad range of ecosystem processes. However, patterns of abundance for large mammals, and the effects of human disturbances on them, are not well understood because we lack data at the appropriate scales. We created eMammal to camera-trap effectively at landscape scale. Camera traps detect animals with infrared sensors that trigger the camera to take a photo, a sequence of photos, or a video clip. Through photography, camera traps create records of wildlife from known locations and dates, and can be set in arrays to quantify animal distribution across a landscape. This allows linkage to other distributed networks of ecological data.
Objectives Through the eMammal program, we demonstrate that volunteer-based camera trapping can meet landscape-scale spatial data needs, while also engaging the public in nature and science. We assert that camera surveys can be effectively scaled to a macrosystem level through citizen science, but only after solving challenges of data and volunteer management.
Methods We present study design and technology solutions for landscape-scale camera trapping to effectively recruit, train and retain volunteers while providing efficient data workflows and quality control.
Results Our initial work with >400 volunteers across six contiguous U.S. states has proven that citizen scientists can deploy these camera traps properly (94 % of volunteer deployments correct) and tag the photos accurately for most species

Special issue: Macrosystems ecology: Novel methods and new understanding of multi-scale patterns and processes. Guest Editors: S. Fei, Q. Guo, and K. Potter.

W. J. McShea (corresponding author) · T. Forrester, Conservation Ecology Center, Smithsonian Conservation Biology Institute, National Zoological Park, 1500 Remount Rd., Front Royal, VA 22630, USA. e-mail: [email protected]

R. Costello, Office of Education & Outreach, Smithsonian's National Museum of Natural History, 10th & Constitution Ave NW, Washington, DC 20560, USA

R. Kays, North Carolina Museum of Natural Sciences, 11 West Jones Street, Raleigh, NC 27601, USA; Department of Forestry and Environmental Resources, North Carolina State University, Raleigh, NC 27695, USA

Z. He, Department of Electrical and Computer Engineering, University of Missouri, 225 Engineering Building West, Columbia, MO 65211, USA


(67–100 %). Using these tools we processed 2.6 million images over a 2-year period. The eMammal cyberinfrastructure made it possible to process far more data than any participating researcher had previously achieved. The core components include an upload application using a standard metadata format, an expert review tool to ensure data quality, and a curated data repository.
Conclusions Macrosystem-scale monitoring of wildlife by volunteer-run camera traps can produce the data needed to address questions concerning broadly distributed mammals, and also help to raise public awareness of the science of conservation. This scale of data will allow large mammals to be linked to ecosystem processes now measured through national programs.

Keywords Camera traps · Cyberinfrastructure · Standard metadata · Citizen science · eMammal · Macrosystem

Introduction

One of the greatest modern challenges to ecologists is to provide mechanistic explanations for the local abundance of a given species (McGill 2010). This topic has received extensive investigation for a variety of taxa, with rapidly developing statistical advances used to identify relationships between the distribution of a species, resource availability and environmental variables. The effort has generated a host of studies identifying the chief factors regulating occurrence with respect to climate, habitat type, human disturbance and, to a lesser extent, interactions between species (Mladenoff and Sickley 1998; McShea 2014). Modelling species' distributions with important covariates has seen incredible growth in the last decade through development of statistical techniques to analyze relationships between species' detections and the environment (MacKenzie et al. 2003; Phillips and Dudík 2008; Ramsey et al. 2015). Each of these methods derives a statistical representation of a species' distribution (e.g., probability of occurrence, occupancy, or density) which can be used to discriminate between key factors limiting animal distribution or relative abundance. One major limitation is that the broad distribution of many species results in studies being conducted in only a portion of their range, usually a single reserve or


ecosystem, and the models do not encompass the range of abiotic and biotic conditions encountered by the species. In order to model the impact of broadly distributed factors, such as climate or urbanization, it is preferable to base models on data derived from the broader environmental envelope across the range of the species (Elith and Leathwick 2009). Standardized surveys across landscapes have successfully expanded our predictive capacity for some taxa, such as tree and bird species. For tree surveys, the distribution of forest types and the extent of their disturbance (e.g., fires, ice storms, hurricanes) can be estimated at large scales using remote sensing technology (Nagendra et al. 2013). One outcome of this survey tool is the prediction of canopy species responses to climate change, when combined with knowledge of species-specific physiology (Iverson et al. 2011). For bird populations, large-scale data have been collected by volunteer teams working with standard protocols (Link and Sauer 1998; Gregory and Strien 2010; Conway 2011) or in ad hoc ways (Sullivan et al. 2009). Bird records from these efforts now number in the hundreds of millions, and a new cyberinfrastructure was required to manage the volunteers, data storage and delivery, quality control, and spatio-temporal trend analysis (Hochachka et al. 2012). These volunteer-based bird surveys have been instrumental in discovering population trends used for conservation efforts of forest, grassland and shore birds (Sauer et al. 2013; NABCI 2014). There is a need for similar data on mammal distributions because of their ecological and economic importance, but there are unique challenges due to the cryptic nature of most species: many are nocturnal, do not regularly produce detectable species-specific calls, and respond negatively to the presence of human observers (O'Connell et al. 2006).
Detection of mammals via remote sensing is possible for only a few species that live in open habitats or oceans (Vermeulen et al. 2013). Traditionally, large-scale data for game species have been available only through a hunter/trapper permit system (Swenson et al. 1994) or through organized track and sign surveys (Linden et al. 1996; Gompper et al. 2006), both of which have well-established limitations. Detections based on track and sign are limited by observer differences in expertise, substrates that vary in their capacity to retain sign, and species-specific differences in their propensity to leave detectable sign (Wilson and Delahay 2001).


Although most mammals cannot be detected from satellite imagery, like trees, or reliably seen by amateur nature lovers, like birds, many species are detectable via camera traps. Initially used for large carnivores (Karanth and Nichols 1998), camera traps can survey most terrestrial birds or mammals >100 g (Kays et al. 2011). Camera-based surveys do have limits (Gompper et al. 2006), but we consider these limits either measurable or less severe than those of other mammal survey methods. Camera traps have been used successfully for landscape-scale surveys, returning photographs that are easily quantified and verified as biodiversity records (Kinnaird and O'Brien 2012; Li et al. 2012). Typical camera projects have dozens or a few hundred locations sampled by paid field staff. Single projects often lack sufficient detections of rare species, and only by pooling multiple projects together can analysis be conducted (Lynam et al. 2013). We created the eMammal program to camera-trap effectively at landscape scales by increasing the number of deployments from dozens or hundreds to thousands. We have expanded the spatial and temporal scale of camera trap surveys by taking advantage of volunteers who enjoy deploying cameras and seeing the results. However, incorporating volunteers at the scales needed to address important conservation questions creates new challenges of study design, data management, volunteer training and organization. Here we outline our vision for large-scale volunteer-placed camera trap surveys enabled through standardized metadata and customized software, database, and image analysis tools. We use as an example our experiences in developing eMammal, a data platform for collecting, organizing, and displaying camera-trap photographs and associated metadata collected by volunteers. The first end product of this data platform was new knowledge on the distribution of mesocarnivores across multiple urban-to-rural gradients (Kays et al.
2015). We envision eMammal, or a similar distributed sensor array for large mammals, complementing national standard-data arrays being constructed for environmental metrics (Schimel and Keller 2015). This merging of remotely-sensed or ground-based environmental data with distribution data for key North American mammals, especially large carnivores (Ripple et al. 2014), will significantly advance our capacity to predict the response of animal communities under future climate and land-use conditions.

Volunteers as citizen scientists

The first large-scale surveys with volunteers started at the turn of the twentieth century with the National Weather Service Cooperative Observer Program and Audubon's Christmas Bird Count (Droege 2007; Butcher and Niven 2007), and citizen science is now recognized as a vital tool for large-scale ecological research (Dickinson et al. 2010, 2012). The data quality from these surveys has always been a concern to professional scientists (Cohn 2008), and a variety of best practices for quality assurance and quality control have been implemented. For example, most citizen science projects include volunteer training and assessment (Fore et al. 2001; Crall et al. 2011), and many use algorithms to identify observation outliers (Sullivan et al. 2009; Bonter and Cooper 2012). The most robust quality control measure is data vouchers that can be collected by amateurs and verified by experts (Schwartz et al. 2012). The advantage of volunteer projects based around camera-trapping is that the photographs serve as their own vouchers for review by professionals. Appropriate design is also important and, in well-designed projects, volunteers have been shown to collect data on several taxa, including mammals, that match the quality of professional data (Fore et al. 2001; Newman et al. 2003; Droege 2007; Crall et al. 2011). However, adult volunteers tasked with difficult identifications of plants or mammals (Crall et al. 2011; Moyer-Horner et al. 2012) and primary school students (Galloway et al. 2006; Delaney et al. 2007) were less accurate than professionals, although even in these studies simple, quantifiable measures were collected with high accuracy (Galloway et al. 2006; Delaney et al. 2007).

Camera traps as a survey tool

Motion-sensitive cameras, or camera traps, have automated the process of visually surveying mammal populations, and the technology has recently advanced enough to make them a general survey tool (Carbone et al. 2001). Camera traps have the advantage of detecting a wider range of species in a wider range of habitats than sign survey methods (O'Connell et al. 2006). The use of camera units shifts the measure of effort from an area examined (in the case of transects or sign plots) to the time and number of cameras that


are active. In addition to changing the scale of measurement, the images voucher the presence of species at a specific location, are readily storable with associated metadata, allow a precise time record of an animal's detection, and give precise measures of sampling effort (the first and last photographs of a deployment show the length of the survey). The reason that camera trapping has the potential to be crowd-sourced at large scales is that volunteers consider it a rewarding activity. We surveyed volunteers before and after participation in the eMammal project using standard methods (derived from Brossard et al. 2005 and Crall et al. 2011). eMammal volunteers in 2012–2013 (n = 384) were as excited about the opportunity to photograph wildlife (22 %) as they were about contributing to science (22 %). The digital pictures are easy to share on social media, and our volunteers readily share and compare photos. While eMammal does not have the same volunteer base as groups specifically focused on bird surveys (Sullivan et al. 2009), there are volunteer organizations that readily embrace projects focused on natural resources. eMammal staff have recruited volunteers from community groups such as Master Naturalists, The Sierra Club, Appalachian Trail clubs, and "Friends of" groups associated with partner institutions.
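The sampling-effort measure described above, the time bracketed by the first and last photographs of a deployment, is simple to compute from image timestamps. A minimal sketch, assuming ISO-8601 timestamps; the helper name is ours, not part of the eMammal software:

```python
from datetime import datetime

def deployment_effort(timestamps):
    """Effort in trap-nights: elapsed days between the first and last
    photograph of one camera deployment (hypothetical helper)."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    span = times[-1] - times[0]
    return span.total_seconds() / 86400.0  # fractional trap-nights

# A three-week deployment bracketed by its first and last photos:
effort = deployment_effort(["2013-05-01T06:15:00",
                            "2013-05-10T23:40:00",
                            "2013-05-22T06:15:00"])
```

Because both endpoints come from the images themselves, effort needs no separate field record of when the camera was armed or retrieved.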

Cyberinfrastructure

Like other landscape-scale citizen science projects, eMammal faces cyberinfrastructure requirements that include standardizing data, providing tools for amateurs to collect and process data, securing terabyte-scale data storage, integrating mechanisms that ensure data standards and quality, meeting the user-experience and usability needs of both amateurs and experts within an interconnected data workflow system, and managing a large volunteer workforce.

Data standardization

Standardized metadata and field protocols are a prerequisite for any large survey. We created a data standard that was acceptable to the Smithsonian and to partner institutions with major camera-trapping research programs. This effort resulted in the formation of the Camera Trap Data Network (CTDN) with


founding members of the Smithsonian, the North Carolina Museum of Natural Sciences, Conservation International, and the Wildlife Conservation Society (www.wildlifeinsights.org). This group has technical, outreach, and data standard sub-committees to ensure the data input and output has the minimum information required for cross-project comparison, is properly structured, and attributes its sources.

Data management

The first product of our eMammal data platform was a survey of public forests across six states in the eastern USA (Kays et al. 2015). Each camera trap in this study collected an average of 50 animal detections (range 1–562) during a three-week deployment. This resulted in ca. 3000 animal detections in a typical park, and 98,189 animal detections from 2.6 million images during the 2-year study. Each of these images was collected, examined and tagged by a volunteer and verified by an expert. Processing these images without customized data-management tools would have been difficult. However, using the eMammal data management system we were able to process and archive all these records within 2 months of their field collection. This experience has emboldened us to aim for even larger surveys that engage more of the public and collect the data needed to examine broadly distributed species. The eMammal camera trap data management system (Fig. 1) is made up of four main components: (1) software for viewing, tagging, and uploading photographs, (2) an expert review step to ensure data quality, (3) an archive for approved data, and (4) a website for managing the study, including the participants, and for accessing and analyzing the data. The photo viewing and tagging software is a desktop application called Leopold. Leopold was built on a commercial software development kit, Adobe AIR, and is used by volunteers to manually process sequences of images stored on camera memory cards. Within Leopold, object-recognition software can be toggled on to assist with finding animals within the pictures (see image recognition). Volunteers select from a drop-down menu of potential species to identify animals in each photo sequence, and count the number of animals in groups. These species tags are drawn from a list created for each project based on an intersection of the survey area with range maps from


Fig. 1 Dataflow for eMammal. A Drupal-based website is used by staff to assign camera deployments and manage volunteers (Expert 1), with citizen scientists accessing their assignments either directly at the website (CitSci 1) or through a desktop application (CitSci 2). Camera photos and required metadata are attached in the desktop application and uploaded to a cloud site where the data are repackaged and stored for expert review. Expert review (Expert 2) allows for rejection and correction of metadata and "favoriting" of photos. Data are then stored within the SI data repository, where they can be filtered and extracted through an application programming interface (API) at the website (Public 1) by the public, volunteers or the conservation community. Copies of metadata, and in the future copies of favorite images, can be stored outside the repository firewall for rapid retrieval

all mammals and large birds of the world (www.maps.iucnredlist.org). After tagging, each sequence is compressed and uploaded for temporary cloud storage in Amazon S3. In the Amazon Web Services environment, these files are uncompressed and ingested into a MySQL database that feeds the Expert Review Tool (ERT), a browser-based application that displays all sequences from each deployment. Project managers (PM) use the ERT as a data quality control mechanism to validate the correct setting of camera traps (i.e., not aimed too high or too low) and to validate or correct species identifications and counts. Approved deployments are then downloaded through an automated batch-ingest process to the Smithsonian's data repository, a Fedora-based platform. To protect the data, one copy is stored in Amazon Glacier and another copy is created within the data repository. From the repository the data can be requested through an application program interface (API) to the

eMammal website (http://emammal.si.edu), where it is automatically mapped and available for a set of standard statistical analyses through R scripts, or available as a raw data download. Data can also be exported to shared sites such as the CTDN. The majority of the data analysis involves the metadata associated with each image and not the image itself. Thus far, we have displayed just a few select camera-trapping images on the eMammal website. Soon all images will be available through a separate API.

Project management

Scaling up any distributed ecology project involving camera traps, volunteers, or both, creates project management challenges that can also be addressed with cyberinfrastructure. The eMammal website was developed as a collaboration platform using Drupal Open Atrium. During the design and development phase, use of the eMammal software was restricted to


the authors. The eMammal program now accepts projects from outside our group, with PI agreements covering data expenses in AWS. Storage and data-processing costs in AWS are highly competitive, and the Smithsonian's data repository provides secure, long-term storage while making the data publicly accessible. Any researcher, agency or NGO staff member can be assigned as PM by the Program Administrator once they accept the data-provider and cost-sharing agreements. Each PM can set up their own projects, add team members (volunteers or field assistants) and camera deployment locations, and create associated project metadata. Only the PM has access to the team members' personal information. As mentioned earlier, each time a project is created a customized species list of all potential mammals and birds is generated by intersecting the survey area with species range maps (IUCN, www.maps.iucnredlist.org) using an automated ArcGIS query. These lists are important for tagging camera-trapped images and for determining the threat status of a species, which is used to embargo the exact location data for sensitive species from public view. The PM can set additional location embargoes for non-endangered species of concern, such as hunted species, and can request an extended embargo of all metadata beyond the standard 6-month period. However, all projects that use the eMammal platform will have open and shared data once the embargo has expired. An exception to public sharing of data is the aforementioned locations for endangered species and a permanent embargo on photos of people (staff or public); these locations and photos are only accessible to the PM and the systems administrator. Although collecting photos of people on public land is not illegal in most countries, we consider posting photos of people online intrusive and unnecessary. To help the PM with team communication, the eMammal site provides a customizable "home page" for each project. This page not only displays pictures, maps, and results from the project, but also bundles email tools, a blog, and a project discussion board (Fig. 2). Project home pages have some functions, such as the interactive map of deployments, visible only to team members (who must be logged in), while other features, such as the photo carousel and blog posts, are viewable by the general public. The site also provides software and supporting documents for download, such as permits, how-to manuals, and training videos. Essentially, PMs use


the website to manage their entire team and camera trap deployments. Volunteers or field technicians use the site to register for a project, gather all the information they need to effectively deploy and manage camera traps, and communicate with each other and the PM.
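The per-project species pick-list described above is produced by intersecting the survey area with IUCN range polygons in an automated ArcGIS query. A minimal stdlib sketch of the same idea, with ranges simplified to bounding boxes (min_lon, min_lat, max_lon, max_lat); the function names and the coordinates shown are our illustrative assumptions, not the production query:

```python
def boxes_overlap(a, b):
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def project_species_list(survey_box, range_boxes):
    """Species whose (simplified) range overlaps the survey area."""
    return sorted(sp for sp, box in range_boxes.items()
                  if boxes_overlap(survey_box, box))

# Illustrative, roughly plausible range boxes -- not real IUCN polygons:
ranges = {
    "Ursus americanus":        (-125.0, 25.0, -66.0, 60.0),
    "Didelphis virginiana":    (-105.0, 14.0, -70.0, 44.0),
    "Tamiasciurus hudsonicus": (-150.0, 34.0, -52.0, 68.0),
    "Nasua narica":            (-115.0,  5.0, -97.0, 33.0),  # outside VA
}
virginia_park = (-78.5, 38.8, -78.1, 39.0)
picklist = project_species_list(virginia_park, ranges)
```

The real query works on full range polygons rather than boxes, but the output is the same kind of object: a short per-project drop-down list that keeps volunteer tagging fast and prevents out-of-range identifications.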

Automated image analyses

We have drawn from the field of computer vision to develop automated image-analysis tools that make the processing of camera trap pictures more efficient (Fig. 3). Our procedure consists of three layers of image processing. At the object layer, we detect animals within a sequence of camera-trap images and isolate the animal body from the background vegetation. At the feature layer, our goal is to extract appearance, motion, and biometric features of the animal, such as body size, movement speed, entry angle, and group size. At the pattern layer, our goal is to create machine-learning algorithms to automatically identify animal species. Within eMammal these are "add-on" features that can be used at the reviewer's discretion and easily switched out as the algorithms are refined.

Object level

Identifying where animals are within the frame of a camera trap picture is a unique challenge. We set cameras to take a sequence of multiple images (e.g., 10) at every trigger, at 1 frame/second, effectively creating a low-frame-rate video clip. Our image analysis models the scene background and compares these images against each other to identify where the moving animal is within each frame. Compared with the surveillance video typically used for computer vision, camera trap videos have a very slow frame rate and often a highly cluttered background environment, making this object identification more difficult. We formulated animal segmentation as an ensemble of image-level background-foreground object classifiers which fuse information across frames, and then refine their segmentation results in a collaborative and iterative manner (Ren et al. 2013; Fig. 4, left). Our experimental results and performance comparisons over a standard set of camera trap videos (Goyette et al. 2012) demonstrate that our


Fig. 2 A typical project home page on the eMammal website, showing a rotating set of favorite photos, a project summary, and map of camera trap locations. The right sidebar shows the project details, blog, discussion board, help resources, and other tools

method outperforms various state-of-the-art algorithms (Fig. 4, right). We have integrated this animal segmentation tool into the Leopold application to help volunteers with their initial classification of images. Typically, the most time-consuming part of manually processing images is finding the smaller animals in the frame or deciding that there is actually no animal in the frame. Within Leopold our animal segmentation tool draws a bounding box around the place in the picture where it detects motion.

Feature level

To date we have made the least progress at the feature-level analysis. A key advance for camera trapping would be to create a density estimate for mammal

species (Rowcliffe et al. 2011). One proposed method (Rowcliffe et al. 2011) requires knowledge of the animal's distance at detection, as well as its angle and rate of movement. While theoretically possible, getting accurate measures has proven difficult, and this remains a research focus.

Pattern level

Once the probable animal is identified in the frame it can be the target of subsequent automated analysis to identify the species (Ren et al. 2013). We used image features based on linear spatial pyramid matching with sparse coding (Yang et al. 2009). We then used Scale Invariant Feature Transform descriptors, dictionary learning, and linear multi-class Support Vector Machines (Chang and Lin 2011) for animal species


Fig. 3 A framework for automated analysis of camera-trap images, with multiple information layers building on the information extracted from the lower level. The lowest level is the data collected by the camera unit and the highest level relies on an API constructed to parse the metadata for use by R scripts accessed through the public website

classification. On sample camera trap datasets with 18–25 animal species (a typical species suite for North American projects), we have achieved an average accuracy of 82 % (Yu et al. 2013). We defined accuracy as TP/(TP + FP), with TP indicating the true positive rate and FP the false positive rate. This accuracy does not yet approach that of an experienced reviewer; we are working to further improve accuracy by using advanced machine learning and data classification methods. One species we have been able to isolate with higher accuracy is humans. We found that a significant

Fig. 4 Left: the background-foreground (animal) classification information is fused across frames to refine segmentation results in a collaborative and iterative manner. Right: our algorithm


portion of the camera-trap detections in some areas are humans. To reduce review time for these images we have developed a human-animal classification tool for Leopold. In this tool the moving pixels (objects) are extracted from the background, and the image is further analyzed to determine whether it shows an animal or a human. For each single image, our current human-animal classification accuracy is about 87 %. Since each trigger generates multiple images, we classify all images in a sequence and use majority voting to further improve the classification accuracy to about 94 %. Note that these animal species recognition tools are designed to assist wildlife researchers by making the workflow more efficient, not to replace people in the workflow. For example, the human classification algorithm flags all human pictures to reduce the number of images that need to be examined manually. Once developed, the species recognition algorithm could provide additional information to the expert reviewer to reduce the number of images they examine. By giving both an identification and a confidence measure, our goal is to let the computer recognize the set of common animal species which dominate the camera trap sequences and leave the remaining sequences, as well as those with high ambiguity levels, to the human reviewers.
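The sequence-level voting step described above, which lifts per-image human-animal accuracy from about 87 % to about 94 %, can be sketched as follows. The function name and the tie-breaking rule (most common label wins, first seen on ties) are our assumptions, not the authors' implementation:

```python
from collections import Counter

def sequence_label(image_labels):
    """Majority vote over the per-image classifications of one
    trigger sequence (sketch of the voting step, not eMammal code)."""
    return Counter(image_labels).most_common(1)[0][0]

# One trigger produced 10 frames; the per-image classifier mislabels
# two of them, but the sequence-level vote is still correct:
frames = ["human"] * 8 + ["animal"] * 2
label = sequence_label(frames)
```

Because per-image errors within a sequence are largely independent, a majority over ten frames is much less likely to be wrong than any single frame, which is the intuition behind the reported accuracy gain.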

Study design and data analysis

Study design in citizen science projects reflects a balance between collecting data useful for addressing science questions, while also being within the skill

outperforms the state-of-the-art algorithms by >11 % in average precision when tested on a standard Change Detection Challenge Dataset (Goyette et al. 2012; see text for details)


Volunteer data quality

Camera traps create image vouchers for each observation, and experts reviewed all volunteer observations using our Expert Review Tool. In a recent project, we rejected 5 % of all camera placements for poor camera setup (i.e., aimed too high or too low), and 1.7 % for equipment failure. We informed volunteers with rejected deployments, which usually led to a rapid improvement; the rejection rate declined from 15 % during the first setup to 1 % by the third camera setup (Friedman test, χ²₇₀ = 78.45, df = 2, p < 0.001). Thus, experienced volunteers are nearly perfect in sensor deployment. The second area where volunteer performance is important is in providing the initial species identification and number of animals for a photo sequence. Volunteers accurately (>90 % accuracy) identified 15 of 20 wildlife species, but were less accurate distinguishing sympatric species of foxes and squirrels (Fig. 5). In addition, the Expert


level of volunteers. In the case of camera trap surveys, a primary task of study design is deciding where to deploy cameras. As in other landscape ecology projects using volunteers, volunteers are either provided with specific protocols and locations for data collection (e.g., national breeding bird and Christmas bird count surveys) or encouraged to supply opportunistic observations (e.g., eBird and the National Phenology Network). The second approach is more likely to collect data quickly, but it is also more likely to be inappropriate for any given research question (Dunn et al. 2005; Hochachka et al. 2012). The eMammal approach is a variation of the first: volunteers use set protocols and place cameras at sites that are pre-determined by a PM, but the study design likely varies between projects. Our initial projects were quite specific (e.g., 50 m from a trail in a park where hunting is forbidden), while a more recent study design has been more flexible (e.g., forest or backyard sites anywhere along an urban-rural gradient). The key to making various study designs comparable is to collect sufficient information within the metadata schema to allow comparisons across projects. These study design differences are captured in the project set-up by the PM.


Fig. 5 Volunteer accuracy in identifying the 15 most commonly detected animals in the eMammal camera trap survey, with 95 % confidence intervals. Dark gray bars represent accuracy rates >90 %, while light gray bars are accuracy rates <90 %. All volunteer IDs were verified through expert review

Review Tool allowed us to identify about half of the detections judged 'unknown animal' by volunteers (e.g., 51 % of the 945 sequences in our last survey). These were typically photographs containing only part of an animal (e.g., a tail or ear). Our approach focused on reviewing all data records, whereas other projects, including Project FeederWatch (Bonter and Cooper 2012), have built algorithms to identify observations that are outliers given historical data. Both approaches ensure high-quality data and are tailored to the particular type of data gathered (photos vs. observations). Algorithms can help projects deal with large-scale data, and eMammal is developing a combination of crowdsourcing and algorithms to validate the most common mammals recorded.
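The outlier-screening idea mentioned above, flagging volunteer tags that conflict with historical records so an expert reviews them first, can be sketched minimally. Everything here (function name, record layout, the sample species lists) is an illustrative assumption, not eMammal's or Project FeederWatch's actual rule set:

```python
def flag_outliers(observations, historical_species):
    """Return volunteer species tags never before recorded at a site,
    queued for priority expert review (hypothetical screening rule)."""
    return [obs for obs in observations
            if obs["species"] not in historical_species.get(obs["site"], set())]

# Species historically recorded at one hypothetical park:
history = {"ParkA": {"Odocoileus virginianus", "Procyon lotor"}}
obs = [
    {"site": "ParkA", "species": "Procyon lotor"},   # expected here
    {"site": "ParkA", "species": "Puma concolor"},   # never recorded here
]
review_queue = flag_outliers(obs, history)
```

A rule like this does not reject the unusual tag; it only reorders review effort, which is what makes algorithmic screening compatible with the photo-voucher approach.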

Future developments

We have used volunteers with camera traps to survey 32 parks across six states, and we now see the potential to extend this approach over larger areas with more detailed coverage. These macrosystem-scale mammal surveys are made possible by a combination of volunteer management and efficient photo-processing software, enhanced with automated image analysis. At the macrosystem scale, such studies cannot be executed by a single research group, but must be coordinated among various


Landscape Ecol

scientists, wildlife managers, conservationists, educators, and community groups. A standardized data structure, as established through CTDN and eMammal, is a prerequisite for collaborative data collection. Best practices for camera trap study design have not yet been established, so this remains a priority for future research.

To date we have had good success with volunteer recruitment and retention, reflecting the intrinsic enjoyment of setting camera traps and reviewing new animal photographs. In an online survey of previous eMammal volunteers, 98.5 % (129 of 131) indicated they wanted to continue volunteering with the project. Our early efforts with online training are promising for future larger-scale surveys, as online training appears just as effective as in-person training (5 % rejection of 2000 deployments for in-person training vs. 4.3 % rejection of 175 deployments for online training, Fisher's exact test p = 1.0). Volunteer retention would be facilitated by a social network that allows communication within each local group, which we have built into the eMammal website. We have embedded discussion posts and blogs within the project site to keep volunteers informed of project progress and data use. Citizen scientists are rarely involved in the analysis stages of a project, but we think many of our volunteers would be motivated to explore the data if the appropriate tools were available. With some automation of the analysis steps, we believe volunteers will use the data for their own research questions, democratizing data analysis and empowering and motivating the volunteers.

We think that our present recruitment efforts have only begun to engage the potential pool of volunteers. Communities likely to be interested in this type of project include government wildlife agencies, hunting clubs, and school systems.
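The online vs. in-person training comparison above can be checked with a two-sided Fisher's exact test. A self-contained sketch, with the rejection counts approximated from the reported percentages (100 of 2000 in-person; ~8 of 175 online, approximated from the reported 4.3 %):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p(x):  # P(top-left cell = x) under the null of no association
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # tiny tolerance guards against floating-point ties
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# rejected / accepted deployments: in-person vs. online training
p_value = fisher_exact_two_sided(100, 1900, 8, 167)  # ≈ 1.0
```

With these counts the observed table is the most probable one given its margins, so the two-sided p-value comes out at 1.0, consistent with the value reported above: the rejection rates are statistically indistinguishable.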
Natural resource agencies and the hunting community have a vested interest in monitoring and understanding large mammal populations, especially as these animals often cross boundaries between public and private lands, and camera trapping gives these groups an alternative to the traditional analysis of harvest data. School systems need to apply STEM learning in contexts that students find relevant, and local wildlife may provide an avenue of interest for some students. Loss et al. (2015) found that citizen scientists are best utilized in hypothesis-driven research; this is possible with distributed camera networks. We encourage the development of hypothesis-driven projects that draw volunteers from across the continent. As a caveat, these avenues to support camera-trap surveys are applicable to the US and other developed countries; we are still searching for a funding mechanism to support camera trap surveys by volunteers in the developing world.

Our success at volunteer recruitment indicates that setting cameras in the field is no longer the limitation on collecting data; instead, the expert review process has become the new bottleneck for broad-scale mammal surveys. We are working on a number of ways to make this process more efficient, including crowdsourcing species identification (e.g., see Snapshot Serengeti at www.zooniverse.org/projects) and automated computer vision image analysis (Yu et al. 2013).

The key attributes for a distributed sensor system across broad landscapes are a standardized tool (i.e., camera trapping) and data structure (i.e., CTDN), a large labor pool (i.e., citizen scientists), and an efficient data-flow software system (i.e., eMammal). All of these features are now in place for camera trapping of wildlife via citizen scientists using eMammal, and the program is being used by at least five organizations outside of this research group. These joined projects are now suitable for metrics, such as the Wildlife Picture Index (O'Brien et al. 2010), that are difficult to estimate at small sample sizes. To a large extent, the attributes of this system match those needed for a coordinated distributed experiment (Fraser et al. 2013). Fraser et al. (2013) argued that more than meta-analyses are needed for macroecology projects because of the limited comparability between experiments conducted under different protocols and aims. The distributed sensor system outlined in this paper meets all their requirements of shared data standards, ease of use, and broad extent, but often lacks a common experimental hypothesis.
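The Wildlife Picture Index of O'Brien et al. (2010) summarizes community change as the geometric mean of per-species occupancies relative to a baseline year. A minimal sketch of that calculation (the species and occupancy values are made up for illustration):

```python
from math import prod

def wildlife_picture_index(baseline_occupancy, current_occupancy):
    """Geometric mean of per-species occupancy ratios relative to baseline.

    baseline_occupancy, current_occupancy: dicts mapping species to
    estimated occupancy in (0, 1]. WPI = 1.0 means no net change;
    WPI < 1.0 means the monitored community has, on average, declined.
    """
    species = sorted(baseline_occupancy)
    ratios = [current_occupancy[s] / baseline_occupancy[s] for s in species]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical example: coyote occupancy halves, other species unchanged
baseline = {"deer": 0.8, "coyote": 0.4, "raccoon": 0.6}
year5    = {"deer": 0.8, "coyote": 0.2, "raccoon": 0.6}
wpi = wildlife_picture_index(baseline, year5)  # (1 * 0.5 * 1) ** (1/3)
```

Because the index is a ratio of occupancy estimates, its precision depends directly on sample size, which is why pooling many joined projects makes it practical.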
An example of what is possible with eMammal involves wide-ranging North American mammals, such as white-tailed deer (Odocoileus virginianus) and cougar (Puma concolor), that are distributed from Canada to South America. Understanding how predator–prey interactions are affected by habitat type, urbanization, and productivity would greatly benefit from data collected across their entire ranges. Whereas NSF has constructed a distributed network for standard-format environmental data (Schimel and Keller 2015), the potential to


map large mammals across the same scale would link critical mammal populations (such as those of large carnivores; Ripple et al. 2014) to ecosystem productivity and diversity measures. Although more advancement is needed to scale globally, the system is ready now in North America to address important issues in macroecology.

References

Bonter DN, Cooper CB (2012) Data validation in citizen science: a case study from Project FeederWatch. Front Ecol Environ 10:305–307
Brossard D, Lewenstein B, Bonney R (2005) Scientific knowledge and attitude change: the impact of a citizen science project. Int J Sci Educ 27:1099–1121
Butcher GS, Niven DK (2007) Combining data from the Christmas Bird Count and the Breeding Bird Survey to determine the continental status and trends of North America birds. National Audubon Society, Ivyland
Carbone C, Christie S, Conforti K, Coulson T, Franklin N, Ginsberg JR, Griffiths M, Holden J, Kawanishi K, Kinnaird M, Laidlaw R, Lynam A, Macdonald DW, Martyr D, McDougal C, Nath L, O'Brien T, Seidensticker J, Smith DJL, Sunquist M, Tilson R, Wan Shahruddin WN (2001) The use of photographic rates to estimate densities of tigers and other cryptic mammals. Anim Conserv 4:75–79
Chang C-C, Lin C-J (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):27
Cohn JP (2008) Citizen science: can volunteers do real research? Bioscience 58:192–197
Conway CJ (2011) Standardized North American marsh bird monitoring protocol. Waterbirds 34:319–346
Crall AW, Newman GJ, Jarnevich CS, Stohlgren TJ, Waller DM, Graham J (2011) Assessing citizen science data quality: an invasive species case study. Conserv Lett 4:433–442
Delaney DG, Sperling CD, Adams CS, Leung B (2007) Marine invasive species: validation of citizen science and implications for national monitoring networks. Biol Invasions 10:117–128
Dickinson JL, Zuckerberg B, Bonter DN (2010) Citizen science as an ecological research tool: challenges and benefits. Ann Rev Ecol Evol Syst 41:149–172
Dickinson JL, Shirk J, Bonter D, Bonney R, Crain RL, Martin J, Philips T, Purcell K (2012) The current state of citizen science as a tool for ecological research and public engagement. Front Ecol Environ 10:291–297
Droege S (2007) Just because you paid them doesn't mean their data are better. In: Proceedings, citizen science toolkit conference. Cornell Laboratory of Ornithology. www.birds.cornell.edu/citscitoolkit/conference/proceeding-pdfs
Dunn EH, Francis CM, Blancher PJ, Drennan SR, Howe MA, Lepage D, Robbins CS, Rosenberg KV, Sauer JR, Smith KG (2005) Enhancing the scientific value of the Christmas Bird Count. Auk 122:338–346
Elith J, Leathwick JR (2009) Species distribution models: ecological explanation and prediction across space and time. Ann Rev Ecol Evol Syst 40:677–697
Fore LS, Paulsen K, O'Laughlin K (2001) Assessing the performance of volunteers in monitoring streams. Freshw Biol 46:109–123
Fraser LH, Henry HAL, Carlyle CN, White SR, Beierkuhnlein C, Cahill JF Jr, Casper BB, Cleland E, Collins SL, Dukes JS, Knapp AK, Lind E, Long R, Luo Y, Reich PB, Smith MD, Sternberg M, Turkington R (2013) Coordinated distributed experiments: an emerging tool for testing global hypotheses in ecology and environmental science. Front Ecol Environ 11:147–155
Galloway AWE, Tudor MT, Haegan WMV (2006) The reliability of citizen science: a case study of Oregon white oak stand surveys. Wildl Soc Bull 34:1425–1429
Gompper ME, Kays RW, Ray JC, Lapoint SD, Bogan DA, Cryan JR (2006) A comparison of non-invasive techniques to survey carnivore communities in northeastern North America. Wildl Soc Bull 34:1142–1151
Goyette N, Jodoin PM, Porikli F, Konrad J, Ishwar P (2012) Changedetection.net: a new change detection benchmark dataset. In: Proceedings of the IEEE workshop on change detection (CDW12) at CVPR12
Gregory RD, Strien AV (2010) Wild bird indicators: using composite population trends of birds as measures of environmental health. Ornith Sci 9:3–22
Hochachka WM, Fink D, Hutchinson RA, Sheldon D, Wong WK, Kelling S (2012) Data-intensive science applied to broad-scale citizen science. Trends Ecol Evol 27:130–137
Iverson LR, Prasad AM, Matthews SN, Peters M (2011) Lessons learned while integrating habitat, dispersal, disturbance, and life-history traits into species habitat models under climate change. Ecosystems 14:1005–1020
Karanth KU, Nichols JD (1998) Estimation of tiger densities in India using photographic captures and recaptures. Ecology 79:2852–2862
Kays R, Tilak S, Kranstauber B, Jansen PA, Carbone C, Rowcliffe MJ, Fountain T, Eggert J, He Z (2011) Monitoring wild animal communities with arrays of motion sensitive camera traps. Intern J Res Rev Wireless Sensor Networks 1:19–29
Kays R, Costello R, Forrester T, Baker MC, Parsons AW, Kalies EL, Hess G, Millspaugh JJ, McShea W (2015) Cats are rare where coyotes roam. J Mammal xx(x):1–7. doi:10.1093/jmammal/gyv100
Kinnaird MF, O'Brien TG (2012) Effects of private-land use, livestock management, and human tolerance on diversity, distribution, and abundance of large African mammals. Conserv Biol 26:1026–1039
Li S, McShea WJ, Wang D, Lu Z, Gu X (2012) Gauging the impact of management expertise on the distribution of large mammals across protected areas. Div Distrib 18:1166–1176
Lindén H, Helle E, Helle P, Wikman M (1996) Wildlife triangle scheme in Finland: methods and aims for monitoring wildlife populations. Finn Game Res 49:4–11
Link WA, Sauer JR (1998) Estimating population change from count data: application to the North American Breeding Bird Survey. Ecol Appl 8:258–268
Loss SR, Loss SS, Will T, Marra PP (2015) Linking place-based citizen science with large-scale conservation research: a case study of bird-building collisions and the role of professional scientists. Biol Conserv 184:439–445
Lynam AJ, Jenks KE, Tantipisanuh N, Chutipong W, Ngoprasert D, Gale GA, Steinmetz R, Sukmasuang R, Bhumpakphan N, Grassman LI Jr, Cutter P, Kitamura S, Reed DH, Baker MC, McShea W, Songsasen N, Leimgruber P (2013) Terrestrial activity patterns of wild cats from camera-trapping. Raffl Bull Zool 61:407–415
MacKenzie DI, Nichols JD, Hines JE, Knutson MG, Franklin AB (2003) Estimating site occupancy, colonization, and local extinction when a species is detected imperfectly. Ecology 84:2200–2207
McGill BJ (2003) A test of the unified neutral theory of biodiversity. Nature 422:881–885
McShea WJ (2014) What are the roles for species distribution models in conservation planning? Environ Conserv 41:93–96
Mladenoff DJ, Sickley TA (1998) Assessing potential gray wolf restoration in the northeastern United States: a spatial prediction of favorable habitat and potential population levels. J Wildl Manag 62:1–10
Moyer-Horner L, Smith MM, Belt J (2012) Citizen science and observer variability during American pika surveys. J Wildl Manag 76:1472–1479
Nagendra H, Lucas R, Honrado JP, Jongman RH, Tarantino C, Adamo M, Mairota P (2013) Remote sensing for conservation monitoring: assessing protected areas, habitat extent, habitat condition, species diversity, and threats. Ecol Indic 33:45–59
Newman C, Buesching CD, Macdonald DW (2003) Validating mammal monitoring methods and assessing the performance of volunteers in wildlife conservation - "Sed quis custodiet ipsos custodies?". Biol Conserv 113:189–197
North American Bird Conservation Initiative, US Committee (2014) The State of the Birds 2014 Report. US Department of Interior, Washington, DC. 16 pp
O'Brien TG, Baillie JEM, Krueger L, Cuke M (2010) The Wildlife Picture Index: monitoring top trophic levels. Anim Conserv 13:335–343
O'Connell AF Jr, Talancy NW, Bailey LL, Sauer JR, Cook R, Gilbert AT (2006) Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem. J Wildl Manag 70:1625–1633
Phillips SJ, Dudík M (2008) Modeling of species distributions with Maxent: new extensions and a comprehensive evaluation. Ecography 31:161–175
Ramsey DSL, Caley PA, Robley A (2015) Estimating population density from presence-absence data using a spatially explicit model. J Wildl Manag 79(3):491–499
Ren X, Han T, He Z (2013) Ensemble video object cuts in highly dynamic scenes. In: IEEE conference on computer vision and pattern recognition (CVPR), Portland, June 2013
Ripple WJ, Estes JA, Beschta RL, Wilmers CC, Ritchie EG, Hebblewhite M, Berger J, Elmhagen B, Letnic M, Nelson MP, Schmitz OJ, Smith DW, Wallach AD, Wirsing AJ (2014) Status and ecological effects of the world's largest carnivores. Science. doi:10.1126/science.1241484
Rowcliffe MJ, Carbone C, Jansen PA, Kays R, Kranstauber B (2011) Quantifying the sensitivity of camera traps using an adapted distance sampling approach. Meth Ecol Evol 2:467–476
Sauer JR, Link WA, Fallon JE, Pardieck KL, Ziolkowski DJ Jr (2013) The North American Breeding Bird Survey 1966–2011: summary analysis and species accounts. N Am Fauna 79:1–32
Schimel D, Keller M (2015) Big questions, big science: meeting the challenges of global ecology. Oecologia 177:925–934
Schwartz MD, Betancourt JL, Weltzin JF (2012) From Caprio's lilacs to the USA National Phenology Network. Front Ecol Environ 10:324–327
Sullivan BL, Wood CL, Iliff MJ, Bonney RE, Fink D, Kelling S (2009) eBird: a citizen-based bird observation network in the biological sciences. Biol Conserv 142:2282–2292
Swenson JE, Sandegren F, Bjärvall A, Söderberg A, Wabakken P, Franzén R (1994) Size, trend, distribution and conservation of the brown bear Ursus arctos population in Sweden. Biol Conserv 70:9–17
Vermeulen C, Lejeune P, Lisein J, Sawadogo P, Bouché P (2013) Unmanned aerial survey of elephants. PLoS ONE 8(2):e54700
Wilson GJ, Delahay RJ (2001) A review of methods to estimate the abundance of terrestrial carnivores using field signs and observation. Wildl Res 28:151–164
Yang J, Yu K, Gong Y, Huang T (2009) Linear spatial pyramid matching using sparse coding for image classification. In: IEEE conference on computer vision and pattern recognition (CVPR)
Yu X, Wang J, Kays R, Jansen PA, Wang T, Huang T (2013) Automated identification of animal species in camera trap images. EURASIP J Image Video Proc 2013:52
