Picked up this thread in the other string, and wanted to explore it more with anyone interested. Is intelligence leading to technological civilizations really a liability? Does an examination of homo sapiens really show us to be a self-destructive intelligence? Does it really make a species more likely to cease to exist through self-destructive behavior? And finally, is this an important factor when considering any chances of contacting other civilizations, or does it even apply at all to any "others" out there?
I'll take the con position, because when you add everything up I just don't buy it.
I definitely see why some folks get into the argument that intelligence and technology may be a limiting factor in the lifespan of any given species and its civilization. As technology becomes ever more complex, the likelihood that something truly dangerous will be created also increases. The key here is "likelihood" and "created", not "used". In fact, I contend that though the dangers of technology are not always clear and quite often perilous (see nuclear weapons), intelligence will far more often prevent or mitigate the use of these technologies than result in disaster.
Have the developments of nuclear, chemical, and biological weapons and other planet impacting technologies increased the risk of war, accident, or short-sightedness wiping out most, or even all, of humanity? Surely. But the same intelligence that helped create these things also helps us to foresee the dangers and, hopefully, avoid them. As it appears we have so far.
"So far" I understand is the operative phrase there, but let's look at the other pitfalls intelligence has helped us to avoid. Due to the technologies and understanding of the natural world our intelligence has provided we are much less likely to die of disease or hunger. Amazingly the political systems we have developed (a kind of technology if you think about it) have resulted in most modern countries becoming mutually dependent as a result we have little desire to wage war on our peaceful neighbors. There are exceptions to this example of course, but they serve to prove the rule. We've tried to deposit knowledge for future generations in libraries and repositories around the world as a safeguard against future disasters. There are more of us, and we are living longer, more fulfilled lives as a whole than ever before. We have even begun, in fits and starts, the process of expanding humanity beyond one fragile basket.
In the future, what other threats will we be able to stop or mitigate due to our growing technology? Comet and asteroid impacts? Probably. What else? Threats we've never even thought of are most likely lurking out there, both from ourselves and our creations, and from unknown sources. I contend that our intelligence and the technologies we create with it may help to create some of those threats, but they have also (here it is again) "so far" provided the forethought to stop the worst case scenarios.
As a whole, I'd take the direction we've moved in over some sense of new age pastoralism any day. We are far better off and more prepared for the challenges presented on this planet and in the universe at large equipped with our "dangerous" tools and intelligence than without them.
So, where does this leave any other potential civs out there? In the same basket is my bet. Of course, all of this is based upon speculation from examining a population of one known technological intelligence, which really might not tell us much about anything "alien" out there at all. I'm sure there could be more psychotic or more peaceful technological intelligences than homo sapiens crawling around out there somewhere. Even "smarter" ones with little gift for forethought, or plain old dummies who just blow it altogether. But the end result is that technological intelligences should avoid far more pitfalls (on average) than they create and fall into.
Oh and IAAMOAC! Bonus points to anyone who knows what that one means!