Sunday 3 October 2010

Rational Definition of Morality


THE BEHAVIORAL EQUATION:

Specific_Action_Output(specific_person) = Benefit_Bias(specific_person) * Objective_Benefit(specific_person) - Cost_Bias(specific_person) * Objective_Cost(specific_person)

// Objective_Benefit and Objective_Cost take into account factors like probability, necessity, affordability, danger, time, energy, resources, etc. Our brain is relatively bad at computing these, but I will skip this topic here; it is covered in the “intellect VS emotions” article.

// Benefit_Bias and Cost_Bias take only personal preference into account. Note that we also have biases for ALL factors considered in Objective_Benefit and Objective_Cost. I simplified this, as that level of detail is not needed for the purposes of this article.

General_Action_Output = SUM of Specific_Action_Outputs for EVERYBODY involved (self included)

Example:
General_Output_for_Loan_You_Money =
[Benefit_Bias(me) * Objective_Benefit(me) - Cost_Bias(me) * Objective_Cost(me)] + [Benefit_Bias(you) * Objective_Benefit(you) - Cost_Bias(you) * Objective_Cost(you)]

We take the action with the greatest General_Action_Output and avoid actions with a negative General_Action_Output. Decision making is an optimization process over ALL parties affected by the decision.
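To make the bookkeeping concrete, here is a minimal Python sketch of the two formulas above (my own illustration, not code from the article; the function names and the numbers in the loan example are made up):

# Specific_Action_Output for one affected person
def specific_action_output(benefit_bias, objective_benefit, cost_bias, objective_cost):
    return benefit_bias * objective_benefit - cost_bias * objective_cost

# General_Action_Output: the sum over everybody involved (self included)
def general_action_output(parties):
    return sum(specific_action_output(**p) for p in parties)

# "Loan you money" example with hypothetical objective values and all biases set to 1:
loan = [
    dict(benefit_bias=1, objective_benefit=2, cost_bias=1, objective_cost=5),   # me: small gain, risk of not being repaid
    dict(benefit_bias=1, objective_benefit=10, cost_bias=1, objective_cost=1),  # you: the loan solves a real problem
]
print(general_action_output(loan))   # 6 > 0, so the loan is worth making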

Lazy Example:
Benefit_Bias(self) = 1
Cost_Bias(self) = 5
I do something only if it costs me very little

Greed Example:
Benefit_Bias(self) = 5
Cost_Bias(self) = 1
I do anything that I gain from, regardless of the costs

Egoism Example:
Benefit_Bias(self) > Benefit_Bias(ally)
Cost_Bias(self) > Cost_Bias(ally)
I take actions that benefit me more than they benefit you, and disregard the fact that they might be more harmful to you than to me

Martyrdom Example:
Benefit_Bias(self) << Benefit_Bias(ally)
Cost_Bias(self) << Cost_Bias(ally)
I ignore the value of the self

All four of the above examples are considered generally harmful.

DEFINITIONS:

Definition of Rational Morality:
Benefit_Bias(self)
Benefit_Bias(other human)
Cost_Bias(self)
Cost_Bias(other human)
The above parameters are assigned values that ensure the maximum increase in gene pool entropy: the optimal balance between quantity (count of gene carriers) and quality (diversity of abilities).

Rational morality values are those that (considering circumstances) give the whole species (not just the individual) the greatest chance of survival and improvement.

It is generally extremely difficult to calculate these values. You can write a simplified simulation that brute-forces combinations of bias values (for example with a genetic algorithm), but such results are not definitive.
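To show what such a brute force might look like (and how crude it has to be), here is a toy Python sketch; the helping game, the payoff numbers and the bias grid are all my own assumptions, not the author's model:

import itertools

B, C = 3.0, 1.0   # objective benefit to the helped party, objective cost to the helper

def population_payoff(benefit_bias_other, cost_bias_self):
    # A whole population shares the same biases. An agent helps a partner
    # only if its biased output for "help" comes out positive.
    helps = benefit_bias_other * B - cost_bias_self * C > 0
    # Per interaction the helper pays C and the partner gains B (nothing happens otherwise).
    return (B - C) if helps else 0.0

grid = [0.5, 1.0, 2.0, 5.0]
best = max(itertools.product(grid, grid), key=lambda bc: population_payoff(*bc))
print(best)   # any pair with benefit_bias_other * B > cost_bias_self * C ties for the maximum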

We know, however, that:
1) the four examples above (laziness, greed, egoism, martyrdom) are NOT optimal; game theory describes many situations that demonstrate their inefficiency (see the sketch after this list), meaning math has actually shown that egoism is harmful (ain't that cool, economists)
2) nature can give us a decent approximation
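As a concrete instance of point 1), here is a tiny iterated prisoner's dilemma in Python (my own illustration, using the standard payoff values): against a partner who retaliates, the egoist strategy earns less for itself and far less for the pair than mutual cooperation does.

# Payoffs per round: (my score, your score)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opp: "D"                        # pure egoism
tit_for_tat = lambda opp: opp[-1] if opp else "C"      # cooperate first, then mirror

print(play(tit_for_tat, tit_for_tat))    # (30, 30): group total 60
print(play(always_defect, tit_for_tat))  # (14, 9): group total 23, egoism hurts everyone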

Morality as defined by our emotions (“equality” condition):
Benefit_Bias(self) = Benefit_Bias(other human)
Cost_Bias(self) = Cost_Bias(other human)
I treat others as equals. Lies, murder, theft, etc. are bad; justice, help and communication are good. There is an evolutionary benefit for us when we feel this way. Such behavior promotes mutual help and cooperation, which results in exponential entropic growth.
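A small observation worth writing down (my own note, not from the article): under the equality condition the biases become a common constant factor, so ranking actions by General_Action_Output is the same as ranking them by the plain total of benefits minus costs across everyone involved.

# Under the "equality" condition all biases equal some constant k, so
# General_Action_Output = k * SUM(Objective_Benefit - Objective_Cost). Hypothetical numbers:
k = 2.0
parties = [(2.0, 5.0), (10.0, 1.0)]   # (Objective_Benefit, Objective_Cost) per person
biased = sum(k * b - k * c for b, c in parties)
plain = sum(b - c for b, c in parties)
assert biased == k * plain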

Problem 1:
Values change with time.
The “equality” condition is not always optimal: initially, mutually unknown organisms are in some sort of “prisoner's dilemma” / “ultimatum game”, but that is easily solvable with any kind of assurance game. It happens in nature and in human relations all the time. So the problem of “how to reach the condition” is easily solvable.
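A quick way to see why an assurance game fixes this (my own sketch, with made-up payoffs): in a stag hunt, mutual cooperation is a Nash equilibrium, so once both parties expect each other to cooperate, nobody profits from abandoning the “equality” condition.

# Assurance game (stag hunt) payoffs; cooperating together pays 4 each, acting alone guarantees 3.
STAG_HUNT = {("C", "C"): (4, 4), ("C", "D"): (0, 3),
             ("D", "C"): (3, 0), ("D", "D"): (3, 3)}

def is_nash(profile, payoffs):
    # A profile is a Nash equilibrium if no single player gains by switching its own move.
    for player in (0, 1):
        for alternative in ("C", "D"):
            deviated = list(profile)
            deviated[player] = alternative
            if payoffs[tuple(deviated)][player] > payoffs[profile][player]:
                return False
    return True

print(is_nash(("C", "C"), STAG_HUNT))   # True: mutual cooperation sustains itself
print(is_nash(("D", "D"), STAG_HUNT))   # also True, but it pays 3 instead of 4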

Problem 2:
Values change from person to person.
The “equality” condition is usable, but it is still an approximation to the rational morality values. People are not really equal. Some are quick thinkers, some are deep thinkers, some are workaholics and achievers, some are researchers. Diversity is important, because the most valuable skills vary with the conditions. So the objective value of people is a function of circumstances too, and that needs to be taken into account. If an aggressor invades your country, who will be more needed: sharpshooters or pianists?
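In code, this simply means the objective values feeding the equation are functions of circumstances rather than constants (a trivial illustration with made-up weights):

# Hypothetical weights: which skills matter depends entirely on the situation.
SKILL_VALUE = {
    "peace": {"pianist": 5, "sharpshooter": 1},
    "war":   {"pianist": 1, "sharpshooter": 5},
}

def objective_benefit(skill, circumstances):
    return SKILL_VALUE[circumstances][skill]

print(objective_benefit("pianist", "peace"))      # 5
print(objective_benefit("sharpshooter", "war"))   # 5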

OTHER SMALL PROBLEMS AND EXAMPLES:

Why we need rational morality:
Imagine you are a European private company. You have plans for a revolutionary new tech, but you are still in the development stage and that is expensive. So you outsource to China: human labor is extremely cheap there, people work overtime non-stop, there are no unions or human rights bullshit, and if somebody starts complaining or works less, you go to the local communist party leader and the next day the problematic workers are replaced. China does not give you that for free, though. For every factory you build there, you must build another for the government, as a gift. You give them your know-how and your research for their own use within their own market. The rule is: you do not sell within China, they do not compete with you outside it. The actual financial benefit is tenfold the risks, so you take the deal. The Chinese government then builds another ten factories like yours, their market is flooded with your cloned and underpriced product, but you are still ahead of your actual competitors, so all is well. You prepare your marketing and are about to go public in Europe and America the next year. Then some tourists visiting China buy your renamed tech, return home and start a rumor: “Hey, look at this awesome thing I bought. And it was really cheap. Why do the Chinese get such good stuff and we don't? Maybe Communism rules!”

Is there a moral problem here? What is it? How do you solve it? You have so many parties involved: different markets of consumers, the workers who lost their jobs when you outsourced, the Chinese labor force, the Communist party, your competitors, yourself. We are used to a binary “righteous VS evil” kind of emotional thinking, but that does not work in such complex situations. You need morality described mathematically.

Self-sacrifice is possible with the “equality” condition:
Imagine you must choose between:
- risking your life to save 3 friends
- running away and saving yourself, while the friends burn
3 deaths are much more costly than 1 death, so even with everyone weighted equally, the risky rescue has the higher General_Action_Output.
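With purely hypothetical numbers (mine, not the author's), the arithmetic looks like this:

death_cost = 1.0
p_i_die_during_rescue = 0.5   # assumed risk to my own life

output_run_away = -3 * death_cost                      # the three friends burn for certain
output_rescue = -p_i_die_during_rescue * death_cost    # expected cost of the risk to me

print(output_rescue > output_run_away)   # True: equal biases already favor the rescue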

Morality is beneficial even post-mortem:
Imagine that the example above is not about friends, but about your own children. Dying for them still lets more copies of your genes survive: each child carries roughly half of your genes, so three children carry roughly 1.5 times as many as you do yourself. Biologically ALL humans are very distant relatives, so helping them partially helps yourself. For more research on this, search for the “animal altruism” phenomenon.
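The standard way to state this is Hamilton's rule from kin selection theory, r * B > C: an altruistic sacrifice spreads if relatedness times benefit exceeds the cost. A quick check with the children example (the “one life” units are my simplification):

r = 0.5    # genetic relatedness of a parent to each child
B = 3.0    # benefit: three children survive
C = 1.0    # cost: one carrier of the genes (me) is lost

print(r * B > C)   # True: the sacrifice preserves more gene copies than it destroys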

Morality is not exclusively about humans:
The “relatives” idea above can be extended to animals as well. Most organisms on Earth have stunning similarities in their genetic code. We empathize with animals in proportion to how closely related we are: we like cats, dogs and chimps, less so fish and lizards, even less insects, and we barely register plants as alive at all. Morality based only on genetic similarity is not optimal; it is just a single factor worth noting. More importantly, rational morality should take actual mutual benefit into consideration (not only between humans).

Dictatorship is not optimal:
Autonomy has value, so taking someone's freedom costs them. That cost is a part of Objective_Cost(slave), even if the ruler is just and strives for the benefit of all (monarchy, communism, etc.).

Chain of cooperation:
With rational morality the parties involved are usually more than two, so it is almost never an “I help you, you help me back” situation; it is more like “A helps B, B helps C, C helps D, D helps A”. So the benefits of morality are not immediately obvious. I take care of my children so they can take care of theirs, and so on. I pay taxes, so the government finances education, so people study medicine, so they become doctors, so I live longer. Of course, you have to make sure the person you help does not break the chain (already discussed in Problem 1).
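A toy version of such a ring (my own sketch, with arbitrary benefit and cost numbers) shows both effects: when the chain is intact everyone ends up ahead, while a single defector gains personally but leaves someone else worse off and lowers the group total.

def ring_payoffs(parties, defectors=()):
    # Each party helps the next one around the ring at cost 1, delivering benefit 3.
    b, c = 3.0, 1.0
    payoff = {p: 0.0 for p in parties}
    for i, giver in enumerate(parties):
        receiver = parties[(i + 1) % len(parties)]
        if giver not in defectors:
            payoff[giver] -= c
            payoff[receiver] += b
    return payoff

print(ring_payoffs(["A", "B", "C", "D"]))                   # everyone nets +2.0
print(ring_payoffs(["A", "B", "C", "D"], defectors={"B"}))  # B nets +3.0, C is left at -1.0, group total drops from 8 to 6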

Rational Morality also allows the following definition of an enemy:
Benefit_Bias(Enemy) is negative
Cost_Bias(Enemy) is negative
Enemy: somebody who, as concluded through the assurance game, is not rationally moral and as such is harmful to my group.
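What the negative biases do to the formula (my own reading of the definition above): benefits delivered to an enemy now count against an action, while costs imposed on the enemy count in its favor.

def specific_action_output(benefit_bias, objective_benefit, cost_bias, objective_cost):
    return benefit_bias * objective_benefit - cost_bias * objective_cost

print(specific_action_output(-1, 10, -1, 0))   # -10: helping the enemy lowers the General_Action_Output
print(specific_action_output(-1, 0, -1, 10))   #  10: hurting the enemy raises it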

Conclusion:
As defined in this article, rational morality is synonymous with cooperation and symbiosis. It is an evolutionarily stable strategy and an optimal Nash equilibrium. That makes it somewhat important for our progress.