Your Science Results Explained, Pt 2

May 20, 2014 | Citizen Science, Moon Mappers

In this series of posts, we’ll be breaking down the recent MoonMappers paper by Robbins et al. showcasing YOUR work. 

Read Part 1

In the first part of this series, I gave you a background overview on why accurate crater counts on surfaces like the Moon and Mercury are important. Now I’m going to dive into the paper by Robbins et al. showcasing the MoonMappers results, comparing them to the experts and the experts to each other.

How Well Do We Mark Craters?

It seems at first glance that marking craters should be a straightforward task. I mean, it’s a circle in an image, right? If you’ve already been working with one of the Mappers projects, then you know that is not necessarily the case. Craters can be faint and eroded, overlap one another, or resemble other strange features such as lunar caverns, or the angle of the Sun might be such that there’s very little contrast in the picture. After all, if it were THAT obvious, we probably could have programmed a very good algorithm by now, but it turns out the algorithms aren’t so good. (See Man vs. Machine for examples.)

However, research has been done for decades using craters as tools for inferring ages and other properties of planetary bodies, yet few studies have actually looked at the internal consistency of crater marking. Robbins et al. list four such studies in the introduction. In one, by Gault (1970), 20 participants used, get this, particle counters to measure the sizes of circular craters in images. They found that crater counts from different people were within 20% of each other. However, in a size bin where there were only a few craters, say, the largest craters, counts could be off by a much higher percentage. (Curse you, small number statistics!)

I’m having a heck of a time finding a photo of the kind of machine crater mappers of the 1960s and ’70s would have used. But this is a Stereotop analog stereo plotter from Zeiss in the 1950s, so it gives you a vague sense!
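To get a feel for why bins with only a few craters bounce around so much, here’s a quick back-of-the-envelope sketch (my own illustration, not from the paper). With small numbers, disagreeing on even a single crater is a big percentage swing, and the Poisson counting scatter, roughly the square root of N, is also large relative to N:

```python
import math

# Illustrative only, not from the paper: with few craters in a size bin,
# one disputed crater is a large fractional change, and the Poisson scatter
# (~sqrt(N)) is also large relative to the count N.
for n_craters in (400, 25, 4):
    one_crater_swing = 1 / n_craters
    poisson_scatter = math.sqrt(n_craters) / n_craters  # ~ 1/sqrt(N)
    print(f"{n_craters:3d} craters: one disputed crater = {one_crater_swing:.1%}, "
          f"Poisson scatter ~ {poisson_scatter:.1%}")
```

So in the sparsest bins, a handful of judgment calls can swing the count far more than the overall 20% figure suggests.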

There was one more follow-up study from the same lab in 1970 showing that even a single crater-marker could get different results on different days, or even hours. After that, it seems no one saw fit to publish such comparisons until just a few years ago. With the amazing imaging and mapping speed of the Lunar Reconnaissance Orbiter (LRO), plus its ability to image the same spot on the Moon under various sun angles, the method of crater counting itself came under scrutiny. In a short conference proceeding by Kirchoff et al. (2011), the experts were shown to differ from each other by 20-40%, and a newbie to the process had a steep learning curve before their crater counts were reliable. (This is, of course, another lesson you may have already learned from using Mappers. In other words, hang in there!) Another study (Hiesinger et al. 2012) found only a 2% difference between its researchers, a pretty impressive margin, but that was shown for just one study and two researchers, a husband and wife team.

Today’s Crater Mappers

So, this study was conceived to test several things. First, crater markers with a wide range of experience were tested against each other using their tools of choice. Eight crater-mappers, all co-authors on the paper, looked at the same image of the Moon’s surface. A particular image was chosen from the LRO Narrow Angle Camera (NAC), which can see fine detail (2 meters) on the Moon. This image was chosen in part because the Sun was at a good angle for casting strong shadows, making it a bit easier to see craters. The experts were also shown an image from the Wide Angle Camera (WAC) on LRO, a camera that sees a broader view of the terrain.

A Wide Angle Camera (WAC) image with a red box indicating the size of a NAC (Narrow Angle Camera) image. Credit: NASA/GSFC/Arizona State University

Our eight crater experts have a range of experience from as little as six years to as many as fifty. Stuart Robbins even has a permanent callus on his finger from making so many circles as a grad student. Each person was allowed to use their preferred software for marking the craters (sorry, no analog machines anymore). Even those who used the same software package used different tools within it for the bulk of their work. One researcher used DS9, and no, that is not an acronym. Yes, that is a Star Trek reference. Robbins and Antonenko also marked the images with the MoonMappers interface. Of course, you, the citizen scientists, used the MoonMappers interface, too!

Getting It Together

After all these various forms of crater marking were done and piled into one place, the authors had to make sure they were standardized so they could be compared with one another. One way to do this was to make all crater measurements in image pixels, rather than as a physical size on the lunar surface. Notice that we don’t include a linear scale in the MoonMappers images, despite several requests. We keep it that way to make sure we’re not adding an extra factor that could bias anyone, and it also keeps the interface as simple as possible. When it comes down to looking at an image in its own right, it is the pixels that matter anyway.
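For the curious, converting a pixel measurement to a physical size is just a multiplication by the image’s pixel scale, so nothing is lost by working in pixels. A tiny illustrative example (the numbers here are made up, not taken from the paper):

```python
# Illustrative only: the pixel scale and crater size below are made-up numbers,
# not values from the paper. A diameter measured in pixels converts to meters
# by multiplying by the image's pixel scale.
pixel_scale = 2.0      # assumed meters per pixel, roughly NAC-like detail
diameter_px = 18.0     # a crater marked as 18 pixels across
diameter_m = diameter_px * pixel_scale
print(f"{diameter_px:.0f} px across -> {diameter_m:.0f} m at {pixel_scale} m/px")
```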

Then, a “clustering code” was developed to take all of the crater marks from all of the users and create a “reduced crater catalog.” In other words, the code has to be able to tell when several people have marked the same crater and record it as ONE crater. Essentially, the crater marks had to be close to each other and have very similar diameters in order to be called the “same” crater marked by many users.
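The paper’s actual clustering code is more sophisticated than anything that fits in a blog post, but here is a minimal sketch of the idea; the matching thresholds (how close two centers must be, how similar the diameters) are illustrative guesses, not the values used in the study:

```python
import math
from dataclasses import dataclass


@dataclass
class Mark:
    x: float   # crater center, x, in pixels
    y: float   # crater center, y, in pixels
    d: float   # crater diameter in pixels


def same_crater(a: Mark, b: Mark, center_frac: float = 0.5, diam_frac: float = 0.25) -> bool:
    """Rough stand-in for the matching criteria: two marks count as the same
    crater if their centers are close relative to the crater size and their
    diameters are similar. The thresholds are guesses for illustration."""
    mean_d = 0.5 * (a.d + b.d)
    center_dist = math.hypot(a.x - b.x, a.y - b.y)
    return center_dist <= center_frac * mean_d and abs(a.d - b.d) <= diam_frac * mean_d


def reduce_catalog(marks: list[Mark]) -> list[Mark]:
    """Greedy sketch of building a 'reduced crater catalog': group marks that
    plausibly belong to the same crater, then average each group into one entry."""
    clusters: list[list[Mark]] = []
    for m in marks:
        for cluster in clusters:
            if any(same_crater(m, other) for other in cluster):
                cluster.append(m)
                break
        else:
            clusters.append([m])
    return [
        Mark(
            x=sum(c.x for c in cl) / len(cl),
            y=sum(c.y for c in cl) / len(cl),
            d=sum(c.d for c in cl) / len(cl),
        )
        for cl in clusters
    ]
```

If three volunteers circled the same crater with slightly different centers and sizes, this collapses their three marks into a single averaged crater, while a mark that matches nothing stays in the catalog on its own.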

In the next part of this series, we’ll take these data and look at the results from comparing experts to each other, and experts to citizen scientists.

This series of posts is about the recent CosmoQuest publication: Stuart J. Robbins, Irene Antonenko, Michelle R. Kirchoff, Clark R. Chapman, Caleb I. Fassett, Robert R. Herrick, Kelsi Singer, Michael Zanetti, Cory Lehan, Di Huang, Pamela L. Gay, The variability of crater identification among expert and community crater analysts, Icarus, Available online 4 March 2014, ISSN 0019-1035, http://dx.doi.org/10.1016/j.icarus.2014.02.022 (paywall)

FREE, open access preprint on arXiv – http://arxiv.org/abs/1404.1334
