Your Science Results Explained, Part 5

Jan 5, 2015 | Citizen Science, Moon Mappers

In this series of posts, we’ll be breaking down the first MoonMappers paper by Robbins et al. showcasing YOUR work. 

Read Part 1, Part 2, Part 3, and Part 4

We’ve shown in this series of posts why counting craters is important, how it’s been done throughout the years, what the experts and volunteers had to say, and why location mattered. We’ll delve even deeper now into the data as the paper looks at agreement for individual craters, …

Individual Craters

There are hundreds of craters in these images alone, and many, many more on the surface of the Moon. So it helps to select a small sample of craters and compare them to one another to get a sense of how the counting is going overall.

Each cataloged crater has an associated N: the number of experts or volunteers who identified that particular crater. For the expert catalog, the minimum number of identifications, or cutoff, is 5, so a crater had to be marked by at least 5 experts. (There are a total of 11 expert/interface combinations here.) In that catalog, fully 75% of the craters have at least 9 markings, so very few fall in the 5-8 range. The volunteer catalog, on the other hand, has a cutoff for N of 7: a crater must have been marked at least seven times to make it into the catalog. In that catalog, 50% of the craters were identified by 11 volunteers or fewer, 60% by 12 or fewer, and so on. This is a bit different from the experts, who largely agree on the majority of craters; the volunteers are more scattered and tend to miss more.
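To make the cutoff idea concrete, here is a minimal sketch in Python of how a per-crater identification cutoff like the ones above could be applied. The field names and data layout are made up for illustration; this is not the paper's actual pipeline.

```python
# Minimal sketch of applying an identification cutoff (illustrative only).
# The field name 'n_ids' and the catalog structure are assumptions, not the
# paper's actual data format.

EXPERT_CUTOFF = 5      # marked by at least 5 of the 11 expert/interface combos
VOLUNTEER_CUTOFF = 7   # marked by at least 7 volunteers

def apply_cutoff(craters, n_min):
    """Keep only craters identified by at least n_min people.

    `craters` is a list of dicts such as
    {'x': 1024.3, 'y': 512.7, 'diameter_px': 23.5, 'n_ids': 9}.
    """
    return [c for c in craters if c["n_ids"] >= n_min]

# expert_catalog = apply_cutoff(expert_clusters, EXPERT_CUTOFF)
# volunteer_catalog = apply_cutoff(volunteer_clusters, VOLUNTEER_CUTOFF)
```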

From Figure 8 of the paper by Robbins et al.


Next, they looked at how much the measurements of crater diameters and locations differ within each population (volunteers and experts). We call that the “spread.” The experts, when looking at the NAC (Narrow Angle Camera) image, pin down the location pretty precisely, and the spread in their values doesn’t depend on crater size. The spread in their measurement of size gets slightly worse as craters get bigger (from 7 to 10%). By comparison, volunteers looking at the NAC image show quite a bit of scatter at small sizes (around 20 pixels), but that levels out once you get to sizes of 50 pixels or so. Remember, if you’ve used the Mappers interface, it cuts you off at sizes smaller than 18 pixels, so something may be happening there, which is discussed in a later section.

Also, strangely, the scatter in the x (horizontal) direction is slightly higher than in the vertical direction for the volunteers. We’re not sure what could cause this, but the shadows falling along the horizontal direction might be a factor. You can see this in the figure at right. The horizontal axis is the crater diameter, and the vertical axis is that “spread,” or difference of the measurements from each other, for location (blue symbols) and diameter (red symbols). The filled circles are for the volunteers and the open circles for the experts.
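If you want a feel for how a “spread” like this can be computed, here is a rough Python sketch. It simply takes, for each crater, the scatter of the individual marks in position and in diameter, normalized by the mean diameter; the actual statistics in the paper are defined more carefully.

```python
import numpy as np

def crater_spread(marks):
    """Rough sketch of a per-crater 'spread' (illustrative, not the paper's code).

    `marks` is an array of shape (N, 3): one row per person who marked this
    crater, with columns x, y, diameter, all in pixels.
    """
    xy = marks[:, :2]
    d = marks[:, 2]
    mean_d = d.mean()
    # positional spread: scatter of the marked centres, as a fraction of the mean diameter
    pos_spread = np.sqrt(xy.var(axis=0).sum()) / mean_d
    # size spread: relative standard deviation of the marked diameters
    size_spread = d.std() / mean_d
    return pos_spread, size_spread
```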

Figure 9 from Robbins et al.

The overall scatter for volunteers in location (10%) and size (20%) is a bit bigger than for the experts (5-10%), as one might expect. But remember, we have a lot more volunteers than we have experts. When the expert catalog (with 889 craters) was compared with the volunteer catalog (with 813 craters), and craters smaller than 18 pixels were removed, there were 699 unambiguous matches between the two. That’s pretty good, as Figure 9 from the paper shows at left. The x-axis is the diameter of each crater derived from expert measurements, and the vertical axis from volunteer measurements. A one-to-one correlation is shown as the faint dotted line, and the derived fit to the correlation as the heavy dashed line.
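As an aside, here is a toy Python sketch of what an “unambiguous match” between two catalogs can look like. The position and size tolerances here are invented for illustration; the paper’s actual matching criteria differ in detail.

```python
import numpy as np

def match_catalogs(experts, volunteers, pos_tol=0.5, size_tol=0.25):
    """Toy cross-match between two crater catalogs (illustrative tolerances).

    Each catalog is an array of shape (N, 3) with columns x, y, diameter (px).
    A pair is a candidate when the centres lie within pos_tol * diameter of
    each other and the diameters agree to within size_tol. Only craters with
    exactly one unused counterpart are kept as "unambiguous" matches.
    """
    matches, used = [], set()
    for i, (ex, ey, ed) in enumerate(experts):
        dist = np.hypot(volunteers[:, 0] - ex, volunteers[:, 1] - ey)
        close = dist < pos_tol * ed
        similar = np.abs(volunteers[:, 2] - ed) < size_tol * ed
        candidates = np.flatnonzero(close & similar)
        if len(candidates) == 1 and candidates[0] not in used:
            used.add(candidates[0])
            matches.append((i, int(candidates[0])))
    return matches

# pairs = match_catalogs(expert_xyd, volunteer_xyd)
# The matched diameters can then be fit against each other, e.g. with np.polyfit.
```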

Erosion: Things Change with Time

How erosion is classified in the MoonMappers Crater Decay App for Android


We often teach our students that erosion is something that happens plentifully on Earth but hardly at all on surfaces like the Moon. Though that is true, craters do still erode with time: other craters layer on top and distribute dust and material over them, the material in the bowl of the crater slumps or falls due to gravity, and the bright ejecta dug out by the impact darkens. Our crater mappers are probably already familiar with the sharp, distinct new craters and the harder-to-find old craters. So how does erosion affect how, and whether, craters are marked?

To explore that, craters are classified into one of four “preservation states,” as defined in the literature by Arthur et al. (1963) and described a bit cheekily in the MoonMappers Crater Decay App, shown above. The data show that the more degraded craters make up a smaller fraction of the volunteer data set than they do for the experts, indicating that the experts are a bit better than volunteers at identifying those mostly eroded craters. However, the difference is quite small, so volunteers still did a good job. The data also show that the experts probably disagree on the numbers of the largest craters because those are highly degraded, and thus more up for debate. However, crater degradation cannot account for the lack of medium-sized craters marked by volunteers. There must be some other reason, possibly psychological, why volunteers tend to focus on the smallest and biggest craters, but erosion is not it.

Is it possible to guess the preservation state of a crater based on the scatter in its size and location measurements? That doesn’t seem to be the case for the volunteer classifications: the scatter in both is similar across all four preservation states. The experts, however, do show higher scatter in their size measurements when craters are less preserved, or more eroded, and agree more sharply when the craters are, well, sharper. So it looks like it’s a good idea to have citizen scientists continue to use the Android MoonMappers Crater Decay App when sorting craters by how eroded they are.

At the Minimum

Finally in this post, we’ll look at the smallest of the craters that are identified in the Lunar Reconnaissance Orbiter images. As shown in an earlier post, disagreement is more stark at smaller crater diameters.

First of all, the clustering code used to determine which marks belong to one crater is affected by clipping the expert data set at 18 pixels to match the volunteers, which was done just as a test. Some craters near 18 pixels will be measured as larger than 18 pixels by some experts and smaller by others, so a simple clip will introduce some unwanted artifacts. Good to know. So, for the science results, a safe place to look is at craters 21.5 pixels or larger, since the MoonMappers interface has a hard limit at 18 pixels.
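Here is a tiny made-up simulation that shows why clipping at the threshold misbehaves. A crater whose true size sits right at 18 pixels gets some marks above the cutoff and some below, so a naive clip of the individual marks throws away part of its cluster and biases what remains toward larger sizes. The numbers here (11 markers, 8% scatter) are just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

true_diameter = 18.0          # pixels, right at the interface cutoff
n_markers = 11                # pretend 11 people marked this crater
scatter = 0.08 * true_diameter
marks = rng.normal(true_diameter, scatter, n_markers)

kept = marks[marks >= 18.0]   # a naive clip of the individual marks
print(f"{kept.size} of {n_markers} marks survive the clip")
if kept.size:
    print(f"mean of survivors: {kept.mean():.1f} px (true size {true_diameter} px)")
```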

Next, how does having a strict 18 pixel cutoff in the MoonMappers interface affect results? You know what I mean if you are familiar with Mappers: the red circle turns green once it’s “big enough,” or at least 18 pixels, and only then lets you make a mark. Anything smaller won’t stay. I have some anecdotal observations from watching new users over the years: people seem to make the mark just a little bit bigger if needed to make it stay, even if that is larger than the actual crater. This seems to match the data, where volunteers marked many more craters at these small sizes than the experts did using their own tools. Even the experts, when using MoonMappers, found themselves falling prey to this effect, being invested enough in finishing a crater mark once it was started, even if it was small. It’s something I’ve tried to be aware of since reading this paper, but I know I do it, too. Suffice it to say, having a minimum size cutoff at all will introduce this effect at small crater sizes, and we have to be wary of drawing conclusions without taking that into account.

Finally, a question that I am always asked at outreach events: how do you know you are DONE? When can you say you’ve gotten all the craters there are to get? Each person may have a slightly different standard for the smallest or most degraded thing that they will call a crater, and thus for when they can say they have “done” an image. We’ve already seen that if you set a hard minimum for the crater size, you should take craters only slightly bigger than that with a grain of salt. The experts not using MoonMappers ensured they were complete down to 18 pixels by also marking craters just smaller than that.

The experts all marked two WAC (Wide Angle Camera) images as well, and were asked to mark down to diameters “you are comfortable identifying.” Each expert had their own technique for deciding when they were done, whether it was comfort through experience or looking at the distribution of craters marked afterwards to see if anything seemed missing. Some marked craters as small as 4 or 5 pixels. While some were conservative and others over-estimated, there was no clear “winner” for which technique was best: “gut feeling” results were similar to those from more rigorous methods. In the end, these results all warn against putting too much faith in the counts of the smallest craters in an image, made by anyone with any tool.
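One simple version of the “look at the distribution afterwards” check is sketched below; this is my own rough illustration, not a method from the paper. Crater counts should keep rising toward smaller diameters, so the size where the histogram of marked diameters turns over and starts falling is a rough flag for where a count becomes incomplete.

```python
import numpy as np

def completeness_turnover(diameters_px, bins=20):
    """Return the approximate diameter where the count of marked craters peaks.

    Below this turnover, the marked sample is probably incomplete. Purely
    illustrative; real completeness analyses are more careful.
    """
    counts, edges = np.histogram(np.log10(diameters_px), bins=bins)
    peak = int(counts.argmax())
    return 10 ** edges[peak]

# turnover_px = completeness_turnover(my_marked_diameters)  # diameters in pixels
```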

We’ll finish this series with one more post about individual biases and a discussion of the overall results!

This series of posts is about the recent CosmoQuest publication: Stuart J. Robbins, Irene Antonenko, Michelle R. Kirchoff, Clark R. Chapman, Caleb I. Fassett, Robert R. Herrick, Kelsi Singer, Michael Zanetti, Cory Lehan, Di Huang, Pamela L. Gay, The variability of crater identification among expert and community crater analysts, Icarus, Available online 4 March 2014, ISSN 0019-1035, http://dx.doi.org/10.1016/j.icarus.2014.02.022 (paywall) 

FREE, open access preprint on arXiv – http://arxiv.org/abs/1404.1334
