Show simple item record

dc.contributor.author: Farmer, Robert G. (en_US)
dc.contributor.author: Leonard, Marty L. (en_US)
dc.contributor.author: Horn, Andrew G. (en_US)
dc.date.accessioned: 2013-07-04T18:43:39Z
dc.date.available: 2013-07-04T18:43:39Z
dc.date.issued: 2012-01 (en_US)
dc.identifier.citation: Farmer, Robert G., Marty L. Leonard, and Andrew G. Horn. 2012. "Observer Effects and Avian-Call-Count Survey Quality: Rare-Species Biases and Overconfidence." Auk 129(1): 76-86. (en_US)
dc.identifier.issn: 0004-8038 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1525/auk.2012.11129 (en_US)
dc.identifier.uri: http://hdl.handle.net/10222/29215
dc.description.abstract: Wildlife monitoring surveys are prone to nondetection errors and false positives. To determine factors that affect the incidence of these errors, we built an Internet-based survey that simulated avian point counts, and measured error rates among volunteer observers. Using similar-sounding vocalizations from paired rare and common bird species, we measured the effects of species rarity and observer skill, and the influence of a reward system that explicitly encouraged the detection of rare species. Higher self-reported skill levels and common species independently predicted fewer nondetections (probability range: 0.11 [experts, common species] to 0.54 [moderates, rare species]). Overall proportions of detections that were false positives increased significantly as skill level declined (range: 0.06 [experts, common species] to 0.22 [moderates, rare species]). Moderately skilled observers were significantly more likely to report false-positive records of common species than of rare species, whereas experts were significantly more likely to report false positives of rare species than of common species. The reward for correctly detecting rare species did not significantly affect these patterns. Because false positives can also result from observers overestimating their own abilities ("overconfidence"), we lastly tested whether observers' beliefs that they had recorded error-free data ("confidence") tended to be incorrect ("overconfident"), and whether this pattern varied with skill. Observer confidence increased significantly with observer skill, whereas overconfidence was uniformly high (overall mean proportion = 0.73). Our results emphasize the value of controlling for observer skill in data collection and modeling and do not support the use of opinion-based (i.e., subjective) indications of observer confidence. Received 13 June 2011, accepted 14 December 2011. (en_US)
dc.relation.ispartof: Auk (en_US)
dc.title: Observer Effects and Avian-Call-Count Survey Quality: Rare-Species Biases and Overconfidence (en_US)
dc.type: article (en_US)
dc.identifier.volume: 129 (en_US)
dc.identifier.issue: 1 (en_US)
dc.identifier.startpage: 76 (en_US)
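
The abstract reports three observer-error metrics: the probability of nondetection, the proportion of detections that are false positives, and the proportion of "confident" records that are nonetheless wrong ("overconfidence"). The sketch below is a minimal illustration of how such proportions could be tallied from per-trial responses; the Response fields and the record-level definition of overconfidence are assumptions for illustration, not the authors' actual methods.

```python
# Minimal sketch (not from the paper): tallying nondetection, false-positive,
# and overconfidence proportions from simulated point-count responses.
# The field names below are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Response:
    species: str               # species probed in the simulated point count
    present: bool              # was the species actually in the audio clip?
    reported: bool             # did the observer report hearing it?
    claimed_error_free: bool   # did the observer believe the record was error-free?


def error_rates(responses: list[Response]) -> dict[str, float]:
    """Compute the three proportions described in the abstract (record-level sketch)."""
    present = [r for r in responses if r.present]
    detections = [r for r in responses if r.reported]
    confident = [r for r in responses if r.claimed_error_free]

    # Nondetection: species present but not reported.
    nondetection = sum(1 for r in present if not r.reported) / len(present)
    # False positive: reported detection with no species actually present.
    false_positive = sum(1 for r in detections if not r.present) / len(detections)
    # Overconfidence: records the observer believed error-free that contain an error.
    overconfidence = sum(1 for r in confident if r.reported != r.present) / len(confident)

    return {
        "nondetection": nondetection,
        "false_positive": false_positive,
        "overconfidence": overconfidence,
    }


# Example: one observer, four trials.
sample = [
    Response("rare", True, False, True),    # missed detection, but confident
    Response("common", True, True, True),   # correct detection, confident
    Response("rare", False, True, True),    # false positive, confident
    Response("common", True, True, False),  # correct detection, not confident
]
print(error_rates(sample))
```

In the study itself these quantities were modeled against observer skill, species rarity, and the reward treatment; the sketch only shows how the raw proportions are defined.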