Friday, September 16, 2011

Ethnic cleansing and exobiology


As a starting point, it is helpful to think of ETI [extraterrestrial intelligence] as trying to maximize some sort of value function [2]. Specifically, they are trying to maximize intrinsic value, which is something that is valuable for its own sake.  Intrinsic value contrasts with extrinsic value, in particular instrumental value, which is valuable because it causes additional value.  One can place intrinsic value on many different things, such as life, ecosystems, happiness, knowledge, or beauty. Human ethics is often anthropocentric in the sense that it places intrinsic value only on human phenomena, such as human life, human happiness, or other human factors.  Such anthropocentrism is selfish on a civilizational scale because it involves humans only placing intrinsic value on the interests of their own civilization.
 
We see two types of scenarios in which ETI might intentionally harm us.  The first scenario involves hostile, selfish ETI that attack us so as to maximize their own success.  This scenario suggests a standard fight-to-win conflict: a war of the worlds.  The second scenario involves ETI that are in no way selfish but instead follow some sort of universalist ethical framework.  ETI might attack us not out of selfishness but instead out of a universalist desire to make the galaxy a better place.
 
Just because an ETI civilization holds universalist ethics does not mean that it would never seek our harm.  This is because ETI may be quite different from us and could conclude that harming us would help maximize whatever they value intrinsically [34]. For example, if ETI place intrinsic value on lives, then perhaps they could bring about more lives by destroying us and using our resources more efficiently for other lives.  Other forms of intrinsic value may cause universalist ETI to seek our harm or destruction as long as more value is produced without us than with us.  Novelist Douglas Adams captures this scenario vividly in The Hitchhiker’s Guide to the Galaxy, where ETI place intrinsic value on civic infrastructure (or, more likely, on some consequence of its use) and destroy Earth to make way for a hyperspace bypass.  At the heart of these scenarios is the possibility that intrinsic value may be more efficiently produced in our absence.
 
An interesting and important case of universalist ethics in this context is when civilization itself holds intrinsic value.  ETI that support this ethical framework would seek to maximize the total number of civilizations, the diversity of civilizations, or some other property of civilizations.  All else equal, such ETI would specifically wish for our civilization to remain intact.  But all else may not be equal.  It is plausible that such ETI might try to harm or even destroy us in order to maximize the number/diversity/etc. of civilizations.  This could occur if our resources could be used more efficiently to generate or retain other civilizations, though this possibility seems highly remote given how efficiently tuned humanity is to its environment.  Alternatively, such ETI could seek our harm if they believe that we are a threat to other civilizations... if ETI doubt that our course can be changed, then they may seek to preemptively destroy our civilization in order to protect other civilizations from us.  A preemptive strike would be particularly likely in the early phases of our expansion because a civilization may become increasingly difficult to destroy as it continues to expand.
 
Another recommendation is that humanity should avoid giving off the appearance of being a rapidly expansive civilization.  If an ETI perceives humanity as such, then it may be inclined to attempt a preemptive strike against us so as to prevent us from growing into a threat to the ETI or others in the galaxy.  Similarly, ecosystem-valuing universalist ETI may observe humanity’s ecologically destructive tendencies and wipe humanity out in order to preserve the Earth system as a whole.
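Before turning to my own angle, notice how mechanical the logic of the quoted analysis is. The universalist scenarios reduce to a bare comparison: the ETI estimate the total intrinsic value of the galaxy with us in it and without us in it, and act against us whenever the second number comes out larger. Here is a minimal toy sketch of that decision rule in Python; the function name and the numbers are made up purely for illustration and are not taken from the quoted analysis.

# Toy sketch of the decision rule implicit in the quoted scenarios: a
# universalist ETI acts against a civilization whenever it estimates that
# total intrinsic value is higher without that civilization than with it.
# Every name and number below is hypothetical, chosen only for illustration.

def would_strike(value_with_us: float, value_without_us: float) -> bool:
    """Return True if the ETI's estimate of total value is higher without us."""
    return value_without_us > value_with_us

# Hypothetical estimates for an ETI that places intrinsic value on civilizations:
# letting humanity expand is judged to endanger other civilizations, while
# removing humanity is judged to protect them.
value_with_us = 100.0     # assumed value of the galaxy with an expanding humanity
value_without_us = 250.0  # assumed value with humanity preemptively removed

print(would_strike(value_with_us, value_without_us))  # prints True

On this toy accounting the "ethical" ETI strike first. That is the point: the outcome turns entirely on what the aliens happen to value, not on anything we would recognize as human rights.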


This “scenario analysis” was greeted with much derision when it became public. However, I’d like to examine the analysis from a different angle.

You have infidels like Richard Dawkins and Christopher Hitchens who inveigh against the “genocidal” passages of the OT, reaching for loaded words like “ethnic cleansing.” You also have wolfish sheep like Thom Stark and Randal Rauser who copycat the same objections. They assure us that the “genocidal” passages conflict with our fundamental moral intuitions.

However, the “scenario analysis” I just quoted was written by card-carrying secular scientists. They share the same worldview as Hitchens, Dawkins, et al.

And they’re discussing human rights from an exobiological standpoint. In these scenarios, hostile aliens would have no moral compunction about committing “genocide” or “ethnic cleansing” against the human species. They don’t share our moral intuitions about human rights. And that’s because they aren’t human. Our moral intuitions are anthropocentric or speciesist. The very concept of human rights is anthropocentric or speciesist.

Put another way, a superior alien species would take a godlike view of lower animals like human beings. It would view us the way we view chickens.

On this analogy, the Israelites were simply making Canaan a better place. The conquest was a preemptive strike to protect Israel from her enemies. Moreover, the Israelites were making more eco-friendly use of the natural resources, viz. crop rotation.
