Estimating abundances of plants in the field is not straightforward and is prone to variation between observers. Ideally we would minimize this variation between observers (and increase the repeatability of each observer's estimates too). Further, estimates can be biased. Lastly, estimates should reflect uncertainty, but rarely do.
But it is not clear how best to train observers with these problems in mind. Often in practical training, students are presented with some aids, such as thinking about how many cells in a grid are occupied, or mock-up images of different cover values. Informally, instructors provide feedback about whether judgements are high or low based on their own experience. Crucially, rarely do we know the truth: what is the actual abundance being estimated. If we knew that, perhaps we could provide students with useful feedback. Occasionally, multiple methods will be taught, with one held up as a ‘gold standard’ (e.g., point quadrats), but these are estimates themselves.
As a teacher, I’m concerned that our current practice isn’t good enough. I believe that I am not very good at estimation, and so passing on my (and our demonstrators’) judgements doesn’t seem right. As a researcher interested in analysing vegetation data I am just as troubled, because the sensitivity of analyses is limited by such errors of judgement.
This is what makes the recent work of Bonnie Wintle really interesting. In some work that I participated in, students were first introduced to a method for describing the uncertainty of their subjective estimates. Using four-point elicitation, observers estimated abundances in trial plots. Then the observers were provided with feedback. Subsequently, the observers estimated some more abundances, enabling an assessment of the change in their performance due to the feedback.
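To make the elicitation format concrete, here is a minimal sketch of how a four-point response (lowest plausible value, highest plausible value, best guess, and stated confidence that the interval contains the truth) might be rescaled to a common confidence level so answers are comparable across observers. The function name, the linear rescaling, and the 80% target are illustrative assumptions, not details taken from the paper.

```python
def standardize_interval(low, high, best, confidence, target=0.8):
    """Linearly rescale an elicited interval to a common confidence level.

    low, high : elicited lowest and highest plausible values
    best      : the observer's best guess
    confidence: the observer's stated probability (0-1) that the
                truth lies in [low, high]
    target    : the common confidence level to rescale to (assumed 80%)
    """
    scale = target / confidence
    new_low = best - (best - low) * scale
    new_high = best + (high - best) * scale
    return new_low, new_high

# An observer says cover is between 20% and 50%, best guess 30%,
# and is 90% sure the truth lies in that range.
low80, high80 = standardize_interval(20, 50, 30, 0.9)
print(round(low80, 1), round(high80, 1))
```

Because the stated confidence (90%) exceeds the 80% target, the interval shrinks toward the best guess; an underconfident observer's interval would widen instead.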
The work tested whether the ‘wisdom of the crowd’ could provide useful feedback in the absence of knowledge of the ‘truth’. Another experiment tested whether the form of the feedback (active or passive) mattered. Briefly, it did, but interested readers should check out the paper. Calibration (active) feedback involves the observer determining the hit rate of the intervals they estimated.
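The hit-rate calculation at the heart of calibration feedback can be sketched in a few lines. The function and the example numbers below are hypothetical; in practice the ‘truths’ might be crowd-consensus values when the real abundances are unknown.

```python
def hit_rate(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(intervals)

# Five elicited intervals for percent cover, and the values used as truth.
intervals = [(10, 30), (5, 15), (40, 70), (20, 35), (0, 10)]
truths = [25, 18, 55, 30, 4]

print(hit_rate(intervals, truths))  # 0.8: four of five intervals hit
```

An observer stating 90% confidence who scores a hit rate of 0.8 is overconfident, and the feedback invites them to widen their intervals; that self-check is the active part of the calibration.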
What is the take away message for training?
Provide feedback in a structured process in which trainees systematically assess their own performance. Critically, that self-assessment needs active consideration of one’s own accuracy and overconfidence.