Your Science Results Explained, Part 4

Dec 9, 2014 | Moon Mappers

In this series of posts, we'll be breaking down the first Moon Mappers paper by Robbins et al., showcasing YOUR work.

Read Part 1, Part 2, and Part 3.

We’ve been exploring the results of the first big Moon Mappers study with a series of blog posts that break down the science. First we discussed why we care so much about crater counts in the first place, then the history of crater counting and why it was important to explore how different experts and volunteers made their counts. Most recently, we looked at the results showing how citizen volunteers stack up against the experts. Today, we’ll delve deeper into the counts, statistics, and individual methods.

Location, Location, Location

Two images were used in this study, one from the Narrow Angle Camera (NAC) and another from the Wide Angle Camera (WAC). The latter had two examples of surface types: older highlands and younger mare. Craters on different terrain, or with different levels of weathering over time, will appear different to each observer, possibly creating some difference in the counting. Craters also look different depending on their size. Most small craters have a very simple bowl shape, but larger craters have more complex morphologies and thus might be viewed differently by observers. The WAC gives us a view of the largest craters as well as the small ones. The craters in the NAC image show a wide range of “modification,” or weathering, which may account for crater counts that can differ by 20-40% of the average. The WAC image, however, shows a “best case” scenario where the craters in the mare region are well-defined, showing less dispersion in the crater counts. The older, rougher highlands in the WAC image still show a large dispersion in crater counts. This is important to know since we won’t always have optimal counting conditions for every piece of terrain and landscape in the Solar System, or even just on the Moon.
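As a concrete illustration of what “differ by 20-40% of the average” means, here is a minimal sketch that computes the dispersion of a set of counts relative to their mean. The counts below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical crater counts for the same region from five counters.
counts = np.array([140, 180, 200, 230, 260])

mean = counts.mean()
# Relative dispersion: the typical spread of a count around the average.
dispersion = counts.std(ddof=1) / mean

print(f"mean count: {mean:.0f}")        # 202
print(f"dispersion: {dispersion:.0%}")  # ~23% of the average
```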

So, since there is no single “correct” answer to the crater count, we need to find the “best” answer available to us using statistics. The authors compared each expert’s classifications to the other experts’ and to the volunteers’ with a statistical method called the Kolmogorov-Smirnov test, or K-S test for short. This test gives the probability that two samples came from the same parent distribution; in this case, that means the counts of each person or group of people match each other well enough to be considered consistent. This turns out to be the case for most pairs, but some stray fairly far from each other. Why?
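For readers who want to see what such a comparison looks like in practice, here is a minimal sketch using SciPy’s two-sample K-S test. The crater-diameter samples are invented for illustration and are not data from the study:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical crater-diameter measurements (in meters) from two counters.
# Real size-frequency distributions roughly follow a power law, so we draw
# from a heavy-tailed distribution for flavor -- the values are made up.
counter_a = 100 + 100 * rng.pareto(2.0, size=300)
counter_b = 100 + 100 * rng.pareto(2.0, size=280)

# The two-sample K-S test compares the empirical cumulative distributions
# of the two samples and reports a p-value: the probability of seeing a
# difference this large if both came from the same parent distribution.
statistic, p_value = ks_2samp(counter_a, counter_b)

print(f"K-S statistic: {statistic:.3f}")
print(f"p-value: {p_value:.3f}")  # a high p-value suggests consistent counters
```

A low p-value for a given pair of counters would flag them as inconsistent, which is exactly the kind of signal the authors followed up on.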

Robbins went back to the other experts to ask how they did their crater counts, and he looked at the computational tools each used for the counts. This explained a lot of the inconsistencies. In one case, the software being used limited crater sizes to an integer number of meters, thus limiting how precisely the counters could make measurements. Another counter explicitly left out craters that were highly weathered, and that is reflected in the lower counts. One very large stray crater was probably a mis-count due to the limitations of the image, so seeing the larger picture may be helpful in those cases. One expert even noted that he did not typically work with lunar terrains, and was probably a more conservative counter as a result. This was particularly true on the heavily cratered and weathered lunar highlands, which, again, show the least agreement among all the groups. However, the crater counters were aware that those were the most difficult cases to count.
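To see how that integer-meter limitation blurs measurements, here is a tiny sketch with hypothetical diameters:

```python
# Minimal sketch: storing crater diameters as whole meters, as one expert's
# software did, throws away sub-meter precision. Values are hypothetical.
diameters = [9.6, 10.4]  # two distinct measurements, in meters
rounded = [round(d) for d in diameters]
print(rounded)  # [10, 10] -- the 0.8 m difference between them is lost
```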

All of these differences and personal biases factor into crater counting not only among experts, but also among you, the citizen scientists. As we described in Part 1, this kind of disagreement among experts had not been explored in this level of detail until now, and it is important to understand it before we can see how citizen scientists compare with experts. Before now, most in the planetary science community generally assumed that a crater population counted by one researcher would be the same as the population found by another, so this work characterizing these results on different types of terrain is important because it shows that assumption is wrong.

This series of posts is about the recent CosmoQuest publication: Stuart J. Robbins, Irene Antonenko, Michelle R. Kirchoff, Clark R. Chapman, Caleb I. Fassett, Robert R. Herrick, Kelsi Singer, Michael Zanetti, Cory Lehan, Di Huang, Pamela L. Gay, The variability of crater identification among expert and community crater analysts, Icarus, Available online 4 March 2014, ISSN 0019-1035, http://dx.doi.org/10.1016/j.icarus.2014.02.022 (paywall) 

FREE, open access preprint on arXiv – http://arxiv.org/abs/1404.1334
