Hello, all! I'm sure this revelation will be quite obvious to a lot of people, but as it just struck me, I'd like to share it.
While attempting to answer an entirely unrelated question in the real world, I stumbled upon a neat website about ocular anatomy: Anatomy, Physiology & Pathology of the Human Eye. While I had learned most of it before, at one time or another, I had never quite grasped what now seems a terribly obvious point -- that primary colors are not absolutes. Although they are intrinsic to our perception of color, there is nothing magical about them, and they will not help us produce an image displaying "true" color.
Our entire world of color, including the three primary colors, the color wheel, and the idea of complementary colors, is defined not by any fundamental law of physics but by the anatomy of the macula. The macula is part of your retina, which, although it is part of your eyeball, is really part of your brain. The macula is the first step in the phenomenally complex image processing performed entirely unconsciously by our brains all the time. It features rods and cones, which are sensitive (respectively) to brightness and color. The cones come in three kinds, and perhaps unsurprisingly, their sensitivities peak at roughly red, green, and blue. Additive color theory (the kind used by digital cameras and your computer monitor alike) relies on these three primary colors. (Yellow is only a primary color in subtractive color theory.) So in order to get a really true color image, we need a digital camera sensitive to precisely the same three wavelengths. (This is an oversimplification, however. Our cones are not sensitive to exactly three wavelengths. The sensitivity of the three kinds of cones is best described by three bell curves, which you can find on this page about the macula.)
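The bell-curve idea above can be sketched in a few lines of code. This is purely illustrative: real cone sensitivities are measured empirically (and the curves overlap in more complicated ways), so the peak wavelengths and the Gaussian width here are rough ballpark assumptions, not the published data.

```python
import numpy as np

# Assumed, approximate peak wavelengths (nm) for the three cone types,
# conventionally called L ("red"), M ("green"), and S ("blue").
CONE_PEAKS_NM = {"L": 560.0, "M": 530.0, "S": 420.0}
CONE_WIDTH_NM = 40.0  # assumed standard deviation of each bell curve

def cone_responses(wavelength_nm):
    """Relative response of each cone type to monochromatic light,
    modeling each sensitivity curve as a Gaussian."""
    return {
        cone: float(np.exp(-((wavelength_nm - peak) ** 2)
                           / (2 * CONE_WIDTH_NM ** 2)))
        for cone, peak in CONE_PEAKS_NM.items()
    }

# A ~580 nm (yellowish) light excites both L and M cones strongly but S
# barely at all -- which is why a red+green mixture on a monitor can look
# the same to us as pure yellow light.
print(cone_responses(580.0))
```

The interesting consequence is metamerism: two physically different spectra that produce the same three cone responses are indistinguishable to us, which is exactly why three-channel cameras and monitors work at all.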
But it doesn't stop there, because we don't just take three filtered grayscale images and let our brains combine them. In fact, our retinas do a bit more work before letting the visual cortex cope with the signal. Specialized neurons called "opponent cells" in the retina compare the signal from a red and a green cone and feed the result to another opponent cell, which compares the resulting signal with the signal from a blue cone. This is what gives rise to our perception of complementary colors, and it is why we get such startling effects when we look at a picture painted entirely in two complementary colors. So in order to get a camera to capture exactly what our eyes send to our brains, not only must it be restricted to the same major frequencies addressed by our cones, but it must also pre-process each pixel individually in the same way that our retinas do.
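The opponent-cell comparison described above can be sketched as a toy computation: from three cone signals, form a red-versus-green channel, then compare the combined red+green signal against blue. The exact formulas below are illustrative assumptions, not the retina's actual wiring.

```python
def opponent_channels(r, g, b):
    """Toy opponent-process transform of three cone signals (each 0..1).
    Returns (red_vs_green, yellow_vs_blue):
      positive red_vs_green  -> reddish,  negative -> greenish
      positive yellow_vs_blue -> yellowish, negative -> bluish
    """
    red_vs_green = r - g
    yellow_vs_blue = (r + g) / 2 - b  # red+green together opposes blue
    return red_vs_green, yellow_vs_blue

# Pure red input reads strongly "red" on the first channel and mildly
# "yellow" on the second; pure blue reads neutral red/green and fully "blue".
print(opponent_channels(1.0, 0.0, 0.0))  # (1.0, 0.5)
print(opponent_channels(0.0, 0.0, 1.0))  # (0.0, -1.0)
```

This is why red/green and yellow/blue behave as opposing pairs in perception: each channel can only report one pole at a time, which is also why there is no color that looks simultaneously reddish and greenish.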
And that, of course, still isn't the whole problem, because our eyes do not see exactly what we in fact perceive. We see color best in the fovea, and although we seem to perceive a great deal of color in our peripheral vision, we're actually nearly colorblind there. After the eyes are finished with the image, there is still a staggering amount of image processing to be performed, and that processing is far beyond anything computers are capable of today.
So perhaps there is an irony in the fact that not even human eyeballs hooked up to a spacecraft could produce an image which would satisfy someone like Hoagland as to the "truth" of its color.