Monday, May 13, 2013

The Search for Microfossils ~ Doomed to Failure?


I’ve been playing Airport Scanner, an iPhone game app which casts the user in the role of a TSA agent monitoring an airport X-ray machine. Streaming through the machine are various pieces of luggage (and the occasional fish). In some of the bags, hidden amid distracting clothes and electronics, are forbidden items, such as guns, machetes, bullets, bombs, crossbows, and bottles of wine. At the stage I’ve reached in the game, some bags have no illicit items, some have just one, and a few have two or more. Compounding the challenge of spotting offending items are additional distractions – the TSA agent is asked to fast-track flight crews and first class passengers through security, and he or she can earn bonus points for working quickly enough for flights to leave on time. In addition to alarms going off, penalties are imposed for allowing a bag through when it in fact contains something on the no-no list, or for flagging a bag as suspicious when it contains nothing verboten.

Though it’s an enjoyable way to fritter away a few minutes, I’m playing the game looking for answers to a very troubling question. First, a bit of context is in order.

Over many months this past year, I slowly worked my way through a packet of sandy matrix, looking for fossil shells from ostracodes (minute crustaceans).  Countless times, I poured a little of the material from the packet, spread it across a sorting tray, examined it under the microscope, and, with a damp, fine-tipped brush, carefully lifted out the minute shells and placed them on slides.  After that, I poured the matrix from the tray into a small bottle.  Scrawled across the label on the bottle was the word “picked.”  In time, a wealth of fossil ostracode shells was painstakingly extracted from the sample, and all of the matrix had been transferred from packet to bottle.

When the sample packet was finally empty, I sat back.  Job done.  All picked.

Picked?  Picked, my . . . foot.

Sometime later, I had reason to look for a different kind of microfossil in a pinch of material from the bottle.  That simple action delivered a damaging blow to my self-esteem, my sense that I had some special skill for this task.  There, in the first search field I examined, sat a very real, very complete fossil ostracode shell.

Damn.

Okay, missing one isn’t a big deal, but, wait, what’s that over there?

I quickly realized that it wasn’t just one stray ostracode shell or a few.  The number of ostracode shells coming from the “picked” material began to mount – several dozen in the course of an afternoon.

I don’t know my error rate because I have not re-picked the sample (which may not happen) and, more significantly, I don’t have a reasonable count of the total number of fossil shells I found in the first place (there are thousands spread across many slides and mixed with some that others picked).  I also really don’t want to know.  Besides, as I have learned, I may be fighting the inevitable.

This issue of missing some shells wasn’t fatal to the project I was working on because it didn’t require a complete inventory of the ostracode fossils in the sample, but I was troubled, to say the least.  So, I set out to discover why my search had failed to spot so many fossil ostracode shells.

I don’t think the fault lay in my search strategy, which I thought was sound (and, indeed, it may have been).  The sorting tray has a grid etched into it, allowing the searcher to scan deliberately and carefully through the matrix spread on the tray.  I moved from top to bottom in the first column, from bottom to top in the second column, and so on.  Then, having surveyed the entire tray in that fashion, I examined it from left to right along each row.  As a result, I looked at each grid square in the tray twice, coming at it from a different side each time.
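To make that two-pass pattern concrete, here is a minimal sketch of the scan order in Python; the 3 x 3 grid is a hypothetical size chosen for illustration, since the tray’s actual grid isn’t specified here.

```python
# A minimal sketch of the two-pass scan order described above.
# The 3 x 3 grid is a made-up size for illustration.

def scan_order(rows, cols):
    """Yield (row, col) cells twice: first down and up the columns in a
    serpentine pattern, then left to right along each row, so every
    cell is examined from two different directions."""
    # Pass 1: top to bottom in the first column, bottom to top in the
    # second, and so on.
    for c in range(cols):
        order = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)
        for r in order:
            yield r, c
    # Pass 2: left to right along each row.
    for r in range(rows):
        for c in range(cols):
            yield r, c

visits = list(scan_order(3, 3))
assert all(visits.count(cell) == 2 for cell in set(visits))  # each cell twice
print(visits)
```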

Also, I had control over the search field.  When I poured too much into the tray for a confident search, I returned that material to the sample packet and poured out a smaller amount, thus minimizing the total number of items in each search field that might distract me from my quarry.  Further, I could move material with my brush to check beneath, and beside, bits of quartz and mollusk shell on the tray.

Finally, I had all the time I needed to do the search.  No flight crews to move through quickly, no alarms sounding.  External distractions were not absent, but they were manageable.

The fruits of each search were fairly impressive, though I now suspect that was more a function of how many ostracode fossils were in the original matrix in the first place than of my talent at the task.

There’s a large body of psychological and neuroscience research focused on visual searches, exploring such topics as how searches are accomplished, how the eyes and brain handle the activity, what improves the outcome of these searches, and what depresses success rates.  Miguel Eckstein of UC Santa Barbara’s Department of Psychological and Brain Sciences posits that “everyone searches all the time,” whether it’s for your car in a parking lot, the lock into which you want to insert your key, or the proper computer screen icon to click.  (Visual Search:  A Retrospective, Journal of Vision, Volume 11, Number 5, 2011.)

Some searching that people do carries real weight for the rest of us; a lot hinges on the success of their searches – think radiologists scanning a mammogram searching for tumors or . . . TSA agents monitoring X-ray machines in airports throughout the U.S.

In fact, Airport Scanner has been enlisted by Duke University’s Stephen Mitroff in support of his research on visual searching.  As Greg Miller writes in a recent Wired Magazine online science posting, titled Smartphone Game Tests Your Baggage-Screening Skills for Science (May 8, 2013),
People do more visual searches on the Kedlin Company’s Airport Scanner game in a single day than researchers could reasonably expect to observe in the lab in a year, says Stephen Mitroff of Duke University.  Mitroff is combing that torrent of data for clues to better training methods or changes in the workplace that could make doctors, baggage screeners, and other professional searchers better at their jobs.
Eckstein, Mitroff, and others are exploring the influence on search success of such factors as the relative frequency with which search fields contain targeted objects, the number of targeted objects in each search field, the searchers’ visual knowledge of the items being hunted, pressure to complete the task, and the consequences of false positives and false negatives.

One issue, in particular, has raised concern about how well professionals are likely to accomplish critical searches.  Apparently, when the quarry is very rare, as it is in routine mammography or airport security, miss errors (failing to see the offending object) may increase dramatically.  In laboratory studies using volunteers, when half of the search fields contained a target item, the miss error rate was 7%, but when only 1% of the fields housed a target, the error rate rose to fully 30%.  (Jeremy M. Wolfe, et al., Rare Items Often Missed in Visual Searches, Nature, Brief Communications, Volume 435, May 26, 2005.)

The reason for this anomaly, according to researchers, is that searchers for rarely appearing targets come to closure faster than they do when the targets are relatively abundant.  It’s not that their sensitivity for detecting targets is diminished; rather,
[i]t appears that, when observers expect targets to be rare, they require less information to declare a bag free of weapons.  This approach is beneficial for the majority of images, but would increase the likelihood of declaring a target to be absent when a target is actually present.  (Michael J. Van Wert, Even In Correctable Search, Some Types of Rare Targets Are Frequently Missed, Attention, Perception, & Psychophysics, Volume 71, Number 3, April 2009, p. 2.)
Further, this kind of result isn’t the product of a “naive” searcher who doesn’t have much experience with the search or knowledge of the target.  (Wolfe, Rare Items Often Missed.)
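One way to see that mechanism is a toy signal-detection simulation that holds the searcher’s sensitivity fixed and shifts only the decision criterion.  The numbers below (sensitivity, criteria, trial counts) are illustrative assumptions chosen to roughly echo the reported 7% and 30% figures, not parameters taken from the cited studies.

```python
import random

def miss_rate(prevalence, criterion, d_prime=2.0, trials=200_000):
    """Simulate yes/no searches. Evidence on target-present trials is
    drawn from N(d_prime, 1), on target-absent trials from N(0, 1);
    the searcher says 'present' only when evidence exceeds criterion."""
    misses = present = 0
    for _ in range(trials):
        if random.random() < prevalence:
            present += 1
            if random.gauss(d_prime, 1.0) <= criterion:
                misses += 1  # target was there, but declared absent
    return misses / present

random.seed(42)
# Same sensitivity (d_prime) in both runs; only the criterion moves.
# With rare targets, the searcher needs less evidence to say "absent".
print(f"{miss_rate(0.50, criterion=0.5):.0%}")  # roughly 7% misses
print(f"{miss_rate(0.01, criterion=1.5):.0%}")  # roughly 30% misses
```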

I liken this to a variant of the Where's Waldo? books (or Where's Wally? as Martin Handford originally named him).  In this variant, Waldo is not present in every illustration, but only in an unknown subset.  The question is when the searcher will decide that any particular illustration in which Waldo has not yet been found is, in fact, Waldo-free.  Apparently, he or she is likely to make that decision too soon.

But, is this what happened with my ostracode search?

Though the operational issue appears to be the same – when to stop searching – there are some critical differences.  My targets were not rare (indeed, they were much more abundant than in the trials described in the research I’ve read), but they were more likely to be surrounded by many distractors, several dozen of them.  Perhaps it’s this latter factor that negatively affected my search.  The number of distractors in some of the studies I read barely reached double digits.  (This limited number makes sense, actually, because TSA baggage scanning is frequently the model being researched and there are only so many objects that can fit into a piece of luggage.)

Yes, I could have dealt with the myriad distractors by examining each and every bit of quartz, mica, broken mollusk shell, etc. in my sample, but, unless the objective were to inventory every ostracode fossil in the sample (which it wasn’t), that colossal investment of time and energy wasn’t worth it.

Pictured below is a search field similar to the ones I explored for those many months.  In this field, in addition to the distractors (the many bits of quartz, maybe some clay, the random shell fragments, and some interesting foraminifera shells) is a single ostracode shell.


Here’s the same picture with the microfossils identified.  The green arrow points to an ostracode shell, the red arrows to the attention-begging foraminifera (the red arrow on the far right is pointing to a foraminifera shell hiding under some matrix).


Clearly, investigating each object in every search field would have been too high a price to pay.  Nevertheless, it would seem that I came to closure too quickly, and I’m still not sure why.

One finding from the research probably has some bearing on the kind of ostracode shells I missed.  It suggests that I may have disproportionately overlooked fossil shells from ostracode genera and species that were uncommon in the matrix sample.  As a result, my collection of ostracode shells from my search is unlikely to reflect the diversity of ostracode genera and species actually present in the sample; instead, it’s shifted toward the common types.  Wolfe and his colleagues report that, when observers were confronted with search fields containing different kinds of targets, they missed over half of the rare ones compared to just 11% of the common ones (Rare Items Often Missed, p. 439).  This wasn’t, they report, a function of ignorance about the targets.  I would argue that it may be that, in this instance, searchers’ expectations of what they would find affected the outcome.  Not expecting the uncommon types increases the odds of missing them – they just don’t register when they do appear.
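Some back-of-envelope arithmetic shows how type-specific miss rates would skew a collection.  The shell counts here are purely hypothetical; only the two miss rates come from the Wolfe et al. figures cited above.

```python
# Hypothetical matrix: 900 shells of a common type, 100 of a rare type,
# picked with the type-specific miss rates reported by Wolfe et al.
true_counts = {"common": 900, "rare": 100}
miss_rates = {"common": 0.11, "rare": 0.53}   # 11% vs. just over half

picked = {t: n * (1 - miss_rates[t]) for t, n in true_counts.items()}
rare_share_true = true_counts["rare"] / sum(true_counts.values())
rare_share_picked = picked["rare"] / sum(picked.values())

print({t: round(v) for t, v in picked.items()})   # {'common': 801, 'rare': 47}
print(f"{rare_share_true:.1%} -> {rare_share_picked:.1%}")  # 10.0% -> 5.5%
```

Under these made-up counts, the rare type falls from a tenth of the true assemblage to roughly a twentieth of the picked one.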

In the final analysis, playing Airport Scanner and reading some of the visual search research suggest that it’s hardly surprising that I missed some fossil ostracode shells in this sample.  In the first place, I’m not very good at the game.  So, if performance in the game bears even the slightest relationship to actually hunting microfossils, I simply may be mediocre at collecting these fossils amid many distractors.  Though that's a possibility, I come away from the visual search literature convinced that, in some circumstances, misses are unavoidable, regardless of who's doing the searching.  For instance, in the range of search simulations reported by Wolfe et al., even the most successful search parameters still meant 7% of the targets escaped detection.  So, under some conditions, a degree of failure appears inevitable (absent a grain-by-grain inspection).  Cold comfort that.

And there’s this nasty afterthought.  Were I, in fact, to go back through all of the material in the “picked” bottle, I’d be examining material in which the targets would no longer be abundant; they’d probably appear relatively infrequently.  What does the literature tell me about that?  Perversely, my error rate in this second round may be significantly higher than it was initially.  Rats, doomed to failure.
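To put rough numbers on that worry (the starting count is made up; the two miss rates are the laboratory figures cited earlier):

```python
# Hypothetical: the original matrix held 5,000 ostracode shells.
# Pass 1, with targets abundant, misses ~7%; a second pass through the
# "picked" bottle, where targets are now rare, misses ~30% of what's left.
shells_in_matrix = 5_000
left_after_pass_1 = shells_in_matrix * 0.07    # ~350 shells in the bottle
left_after_pass_2 = left_after_pass_1 * 0.30   # ~105 shells still hiding
print(round(left_after_pass_1), round(left_after_pass_2))  # 350 105
```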
