Sunday, 3 October 2010

Uncertain how to model notelist

There are a few phenomena that I am not sure this theoretical AI can model emergently (as a natural result of circumstances, instead of hardcoded explicit rules):

AI food/resource chain
Sex between AIs

Fear of change is still not accurately modeled?

How to solve ultimatum game

All basic emotions correctly modeled? Surprise and Expectation?
All basic motivators correctly modeled?

Anxious / tension - impatience
suspicious
envy (social comparison -> aggression, predator behavior)
perfectionism
attitude to rules and order (don’t harm others, don’t disobey, etc)
Diplomatic Hint
Individuality (being different from the rest, filling a niche)

responsibility (no vandalism for example) -> everything has consequences
how to model death as a loss you CANNOT recover
for economics simulation – loaning money and resources? How to simulate trading of resources, debt

ambushes and traps
befriending somebody just to gain opportunity to betray later
helping somebody ONLY if he is more useful, and leaving him otherwise

democracy where everyone has a say in the decision VS hierarchy of command
imagination and creativity and desire to show off what you made
pride / self esteem VS shame / blame / guilt / regret  - (with or without others opinion)
how to model sacrifice, taking the blame
taking revenge

How to model social comparison (men watch other men fight, etc, and compare to them)
reputation + interest in good reputation
jealousy, bribe

compassion / guarding and helping each other / gratitude
trade / good exchange
theft / lie / betrayal / cheat / fear
capture, enslave, exploit, power and domination and obedience, forbidden acts
drugs / escapism / hedonism / materialism

a secret (information as tradable resource)

hide and seek / chase / rescue / escape / excitement

AI-created:
economy
society
tools of work (technology)
Protecting and escorting friends
Patrol paths

different species
different tribes in the same species
aggression within the group
non-lethal aggression, scaring, bluffing, territoriality
cannibalism

growing and speed of growth
aging and max age

How to model attention – the more people there are, the less each individual matters
(because resources are limited)

How to model perfectionism

AT FATAL OBSTACLE - denial - anger - bargaining – acceptance

Theoretical abstract animal AI game model


- - > this symbol means behavioral change from low to high variable value

  • “Happiness” variable:
Sadness (loss) - - > Joy (gain)
- change is based on the result of actions, not the initial intention; based on that change, it weakens/reinforces the costs for behaviors
//models value of feedback, the “intent != result” principle
//models confidence, over & underestimation of the self

Affected by:
Pain
Loneliness
Boredom
etc

  • “Weight of latest events for changing costs” variable
Conservative (less affected by change) - - > liberal (heavily biased towards new data)
//models adaptation to environment changes
//models sensing (gaining new data) VS intuition & knowledge (old data)

///

  • “Intellect” variable:
Emotional heuristics - - > Rational calculation, thinking in perspective
Fast with errors decision making - - > slow but thorough decision making
//models short term pain, long term gain situations

  • “Autonomy” variable
Obey with no responsibility - - > Free with much responsibility - - > Power, take freedom from others to self
//Models responsibility
//combined with bad attitude to others models power / dominance seeking

///

  • “Value of other people” variable for each known person:
Person has negative value (parasitism, aggression, enemy) - - > Person has zero value (egoism) - - > Person has positive value, but still less than 1 (all values are normalized, relative to mine) - - > Person has value 1 and is equal to me (cooperation, morality) - - > Person is more valuable than self (martyrdom)
//models cooperation and morality, variable may be constant zero for non-living entities

  • “Danger / Benefit” evaluation:
Evaluation_for_action = My_Benefit - My_Cost +
    SUM over everybody affected of:
        Value_of_that_person * (His_Benefit - His_Cost) + (Expected_Help_from_this_person - Expected_Danger_from_this_person)

//The last line is basically a part of My_Cost, but I put it there to make it explicit                     
The action with the best evaluation is chosen. Possible actions:
Run away (fear)
Wait and do nothing (lazy, apathetic)
Help somebody
Attack Somebody
Self Defense
Call for help, attract attention
Trade
Lie / Cheat
Achieve some other goal, progress, resource

//models drive to achieve goals, to progress, to take initiative (fear, laziness, etc)
//takes into account that actions can be mutually beneficial (symbiotic) or mutually exclusive (parasitic) regardless of number of participants
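The evaluation above can be sketched in Python. This is a minimal illustration, not the notes' implementation; the `Effect` record and all the numbers are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Effect:
    """How a candidate action affects one other known person."""
    value_of_person: float  # my "Value of other people" variable for them
    benefit: float          # their benefit from the action
    cost: float             # their cost from the action
    expected_help: float    # help I expect back from this person
    expected_danger: float  # danger I expect from this person

def evaluate_action(my_benefit, my_cost, effects):
    """Evaluation_for_action, term by term as in the formula above."""
    score = my_benefit - my_cost
    for e in effects:
        score += e.value_of_person * (e.benefit - e.cost)
        score += e.expected_help - e.expected_danger
    return score

# Helping an ally I value at 0.8: costs me 2, gains them 5,
# and I expect a little help back later.
print(evaluate_action(0, 2, [Effect(0.8, 5, 0, 1, 0)]))  # 3.0
```

The agent would compute this score for every candidate action (run away, help, attack, trade, …) and pick the maximum.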

  • “Default Cost for unknown inanimate entity” variable
Low (avoid unknowns) - - > high (exploration, curiosity)
//Models drive to understand the world

  • “Default Cost of unknown person” variable
Low (avoid unknowns) - - > high (social, charismatic)
//Models drive to socialize

//Both variables above are forms of learning, and learning is a form of self-improvement.
//Variables are used instead of constants to model the risks in that learning (attitude towards the unknown)

///

  • Physical variables:
Health
Life Span
Speed
Damage
Cost (affects numbers)
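Taken together, the variables above can be collected into a single agent record. A minimal sketch, assuming the ranges and defaults (all field names and values here are my own illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Emotional state
    happiness: float = 0.0       # sadness (loss) - - > joy (gain)
    recency_weight: float = 0.5  # conservative - - > liberal
    # Cognition
    intellect: float = 0.5       # emotional heuristics - - > rational calculation
    autonomy: float = 0.5        # obey - - > free - - > dominate
    # Social
    person_values: dict = field(default_factory=dict)  # person -> value (negative .. above 1)
    default_cost_inanimate: float = 0.5  # low (avoid unknowns) - - > high (curiosity)
    default_cost_person: float = 0.5     # low (avoid unknowns) - - > high (social)
    # Physical
    health: float = 100.0
    life_span: float = 80.0
    speed: float = 1.0
    damage: float = 1.0
    cost: float = 1.0            # affects numbers

a = Agent()
a.person_values["ally"] = 0.8
print(a.happiness, a.health)  # 0.0 100.0
```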

Potential Problems:

Rational Definition of Morality


THE BEHAVIORAL EQUATION:

Specific_Action_Output (specific_person) = Benefit_Bias(specific_person) * Objective_Benefit (specific_person) - Cost_Bias(specific_person) * Objective_Cost (specific_person)

// Objective_Benefit and Objective_Cost take into account factors like – probability, necessity, affordability, dangers, time, energy and resources, etc. Our brain is relatively bad at computing this, but I will skip this topic here – it is covered in the “intellect VS emotions” article.

// Benefit_Bias and Cost_Bias take only personal preference into account. Note that we also have biases for ALL factors considered in Objective_Benefit and Objective_Cost. I simplified this, as it is not useful for the purposes of this article

General_Action_Output = SUM of Specific_Action_Output-s for EVERYBODY involved (self included)

Example:
General_Output_for_Loan_You_Money =
[Benefit_Bias(me) * Objective_Benefit (me) - Cost_Bias(me) * Objective_Cost (me) ] + [Benefit_Bias(you) * Objective_Benefit (you) - Cost_Bias(you) * Objective_Cost (you)]

We take the action with greatest General_Action_Output and avoid actions with negative General_Action_Output. Decision making is optimization process for ALL parties affected by the decision.
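The behavioral equation translates almost directly into code. A sketch of the loan example under neutral biases; the concrete loan numbers are made up for illustration:

```python
def specific_output(benefit_bias, objective_benefit, cost_bias, objective_cost):
    """Specific_Action_Output for one person."""
    return benefit_bias * objective_benefit - cost_bias * objective_cost

def general_output(parties):
    """Sum of Specific_Action_Output-s for everybody involved (self included)."""
    return sum(specific_output(*p) for p in parties)

# Loan_You_Money: lending costs me 10 now, benefits you 15.
# Tuples are (Benefit_Bias, Objective_Benefit, Cost_Bias, Objective_Cost).
me  = (1.0, 0.0, 1.0, 10.0)
you = (1.0, 15.0, 1.0, 0.0)
print(general_output([me, you]))  # 5.0 -> positive, so the loan is made
```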

Lazy Example:
Benefit_Bias(self) = 1
Cost_Bias(self) = 5
I do something only if it costs me very little

Greed Example:
Benefit_Bias(self) = 5
Cost_Bias(self) = 1
I do anything that I gain from, regardless of the costs

Egoism Example:
Benefit_Bias(self) > Benefit_Bias(ally)
Cost_Bias(self) > Cost_Bias(ally)
I do actions that benefit me more than you, and disregard the fact that they might be more harmful for you than for me

Martyrdom Example:
Benefit_Bias(self) << Benefit_Bias(ally)
Cost_Bias(self) << Cost_Bias(ally)
I ignore the value of the self

All four examples of the above are considered generally harmful.
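The harm in these profiles can be seen numerically with a single-party version of the equation. The payoffs below are made-up illustrations:

```python
def acts(benefit_bias, cost_bias, benefit, cost):
    """An agent takes an action only if its biased output is positive."""
    return benefit_bias * benefit - cost_bias * cost > 0

# An objectively losing action: benefit 1, cost 4.
print(acts(5, 1, 1.0, 4.0))  # True  - greed takes it anyway
print(acts(1, 1, 1.0, 4.0))  # False - an unbiased agent refuses
# An objectively winning action: benefit 4, cost 3.
print(acts(1, 5, 4.0, 3.0))  # False - laziness refuses even this
```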

DEFINITIONS:

Definition of Rational Morality:
Benefit_Bias(self)
Benefit_Bias(other human)
Cost_Bias(self)
Cost_Bias(other human)
The above parameters are assigned values that ensure maximum increase in gene pool entropy - the optimal balance between quantity (gene carrier count) and quality (diversity of abilities)

Rational morality values are those that (considering circumstances) give the whole species (not just the individual) greatest chance of survival and improvement.

It is generally extremely difficult to calculate those values. You can write a simplified simulation that brute-forces combinations (for example a genetic algorithm), but those results are not definitive.

We know however that:
1)      the four examples above (lazy, greed, egoism, martyrdom) are NOT optimal; game theory provides many situations that demonstrate their inefficiency … meaning math has actually proved that egoism is harmful (ain’t that cool, economists)
2)      nature can give us a decent approximation

Morality as defined by our emotions (“equality” condition):
Benefit_Bias(self) = Benefit_Bias(other human)
Cost_Bias(self) = Cost_Bias(other human)
I treat others as equals. Lies, murder, theft, etc. are bad; justice, help and communication are good. There is an evolutionary benefit for us when we feel this way. Such behavior promotes mutual help and cooperation, which results in exponential entropic growth.

Problem 1:
Values change with time.
The “equality” condition is not always optimal – initially, mutually unknown organisms are in some sort of “prisoner’s dilemma” / “ultimatum game”, but that is easily solvable with any kind of assurance game. It happens in nature and in human relations all the time. So the problem of “how to reach the condition” is easily solvable.
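How repeated play lets two mutually unknown agents reach the “equality” condition can be sketched with the textbook iterated prisoner's dilemma. The payoff matrix and strategies below are the standard ones, not something from these notes:

```python
# One round of a prisoner's-dilemma-like encounter: C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Open with cooperation, then mirror the partner's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        pa, pb = PAYOFF[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30) - cooperation locks in
print(play(tit_for_tat, always_defect))  # (9, 14)  - defection caps both low
```

A reciprocal strategy punishes defectors after one round, so sustained cooperation outscores sustained exploitation over time.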

Problem 2:
Values change from person to person
The “equality” condition is usable, but still an approximation of the rational morality values. People are not really equal. Some are quick thinkers, some are deep thinkers, some are workaholics and achievers, some are researchers. Diversity is important, as the most valuable skills vary with conditions. So the objective value of people is a function of circumstances, too. That needs to be taken into account. If an aggressor invades your country, who will be more needed – sharpshooters or pianists?

OTHER SMALL PROBLEMS AND EXAMPLES:

Why we need rational morality:
Imagine you are a European private company. You have plans for a revolutionary new tech, but you are still in the development stage and that is expensive. So you outsource to China - human labor is extremely cheap there, people work overtime non-stop, no unions or human rights bullshit; if somebody starts complaining or works less, you go to the local communist party leader and the next day the problematic workers are replaced. China doesn’t give you that for free, though. For every factory you build there, you must build another for the government, as a gift. You give them your know-how and your research for their own use within their own market. The rule is: you do not sell within China, they do not compete with you outside it. The actual financial benefit is tenfold the risks, so you take the deal. The Chinese government then builds another ten factories like yours, their market is flooded with your cloned and underpriced product, but you are still ahead of your actual competitors, so all is well. You prepare your marketing and are about to go public in Europe and America the next year. Then some tourists visiting China buy your renamed tech, return home and start a rumor: “Hey, look at this awesome thing I bought. And it was really cheap. Why do the Chinese get such good stuff and we don’t? Maybe Communism rules!”. Is there a moral problem here? What is it? How do you solve it? You have so many parties involved – different markets of consumers, the workers who lost their jobs when you outsourced, Chinese labor, the Communist party, competitors, yourself. We are used to binary “righteous VS evil” emotional thinking, but that doesn’t work in such complex situations. You need morality described mathematically.

Self-sacrifice is possible with “equality” condition:
Imagine you must choose between:
-         risking your life to save 3 friends
-         running away and saving yourself, but the friends burn
3 deaths are much more costly than 1 death.
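Under the “equality” condition this choice reduces to arithmetic. A tiny sketch; the cost of a death and the risk probability are my assumptions, chosen only to show the shape of the comparison:

```python
DEATH = 100.0   # cost of one death (illustrative unit)
p_die = 0.5     # my chance of dying during the rescue attempt

# Equality condition: all biases are 1, so outputs are plain sums of costs.
rescue   = -(p_die * DEATH)   # expected cost: maybe I die, friends live
run_away = -(3 * DEATH)       # certain cost: three friends die

print(rescue > run_away)  # True - attempting the rescue wins
```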

Morality is beneficial even post-mortem:
Imagine that the example above is not about friends, but about your own children. Dying for them allows 3 times more of your own genes to live. Biologically, ALL humans are very distant relatives, so helping them partially helps yourself. For more research on this, search for the “animal altruism” phenomenon.

Morality is not exclusively about humans:
The “relatives” idea above can be extended to animals as well. Most organisms on Earth have stunning similarities in genetic code. We empathize with animals in proportion to how closely related we are - we like cats, dogs and chimps, less so fish and lizards, even less insects, and we don’t even consider plants alive. Morality based only on genetic similarities is not optimal; it is just a single factor worth noting. More importantly, rational morality should take actual mutual benefit into consideration (not only for humans).

Dictatorship is not optimal:
Autonomy has value, so taking someone’s freedom costs him. That cost is a part of Objective_Cost (slave), even if the ruler is just and strives for the benefit of all (monarchy, communism, etc.)

Chain of cooperation:
With rational morality parties are usually more than two, so it is almost never “I help you, you help me back” situation, it is more like “A helps B helps C helps D helps A”. So morality benefits are not immediately obvious. I take care of my children, so they can take care of theirs so … I pay taxes, so government finances education, so people study medicine, so they become doctors so I live longer. Of course you have to make sure the person you help does not stop the chain (already discussed in Problem 1)

Rational Morality also allows the following definition of enemy:
Benefit_Bias(Enemy) is negative
Cost_Bias(Enemy) is negative
Enemy – somebody concluded (through the assurance game) not to be rationally moral, and as such harmful to my group

Conclusion:
As defined in this article, rational morality is synonymous with cooperation and symbiosis. It is an evolutionarily stable strategy, an optimal Nash equilibrium. That makes it somewhat important for our progress.

Intellect VS Emotions


Intellect/Rationality:
-         goal is objectivity and accuracy and precision in judgment
-         constant learning (gain new knowledge)
-         use experience (knowledge and past events) to optimize result
-         long term perspective thinking, not only the immediate future
-         complex learning and pattern searching brain engine
-         complexity increases accuracy and decreases speed - reasoning is relatively slow process
-         sometimes that complexity and slowness are costly - wasting valuable time over-analyzing and missing an opportunity
-         relatively new product of evolution, far from optimized yet – mistakes that find false patterns (conspiracy theories, prejudices, illusions, speculations, superstitions) or no patterns at all (noise)
-         useful for complicated decisions

Emotions:
-         mathematical heuristics (APPROXIMATIONS) that solve real life problems
-         encoded on the genetic level, still influenced by learning new knowledge, but less than rationality
-         simpler chemical algorithms (look link again)
-         approximated solutions so relatively low accuracy of judgment
-         extremely fast, so emotions are perfect for simple circumstances
-         speed and simplicity make emotions useful – morality is a value that is far more difficult to model with rationality (I wrote a separate article on this) than with simple emotions; also, reactions to immediate dangers need to be fast
-         relatively old product of evolution – optimized for problems that are already out of date, giving even lower accuracy of judgment. Example – we LOVE sweets, because the benefit of sugar is tenfold the harm in an environment with little of it, but now obesity is the leading killer in any modern country with abundant sugar. That low accuracy causes bad decisions that actually kill people.
-         useful for simple decisions

Conclusion:
Both are valuable and have uses so decision making is always a combination of the two. That combination is often in favor of a wrong approach, and that is harmful too.

Examples:
  • People suck at probability estimation
  • People suck at value estimation (we compare to things that don’t matter)
  • We compare ourselves to others to determine how happy we are

  • Source Amnesia phenomenon
  • “But wait! There’s more!” advertising tricks, where a sum of small benefits feels greater than the same total cost: $5 + $5 + $5 + $5 feels like more than $20
  • Anchoring phenomenon
  • People are risk averse - a loss feels roughly twice as powerful as an equivalent gain
  • People rationalize "If I spent 3 years playing WoW, this must be a good game"
  • People believe in what is more beneficial for them, in what they like most (god, reincarnation, immortality), rather than in what has the greatest chance of being true.
  • What we think generally differs from objective reality:
  • http://en.wikipedia.org/wiki/List_of_cognitive_biases

  • How we optimize gain:
- now is better than later
- more is better than less
Problem: these two rules often conflict, and we are unable to effectively solve the “pain now, gain later” problem
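One standard way to make this conflict concrete is hyperbolic discounting - the usual model (not from these notes) of why humans mishandle “pain now, gain later”. Amounts and delays below are illustrative:

```python
def perceived_value(amount, delay, k=1.0):
    """Hyperbolic discounting: rewards shrink with delay as 1/(1 + k*delay)."""
    return amount / (1 + k * delay)

# Choice: $50 sooner vs $100 two time units later.
# Seen from far away, "more is better" wins:
print(perceived_value(50, 10) < perceived_value(100, 12))  # True  - wait for $100
# But when the small reward is imminent, "now is better" wins - we flip:
print(perceived_value(50, 0) < perceived_value(100, 2))    # False - grab the $50
```

This preference reversal is exactly the failure mode: both rules are individually sensible, yet together they make us abandon the larger delayed gain once the smaller one is within reach.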

Other references:

Definitions


Life:
  • Result of the rules of the inanimate world (dead, indifferent)
  • Von Neumann self-optimizing and result-optimizing calculating machines – necessary as the world constantly changes and poses new challenges (problems) that emerge from its rules. By solving them, life survives, adapts and improves, so it can continue to live. No reason to do that if there are no problems :)
  • Life has grown to great complexity - we do not know how we think, but we also do not know the vast complexity of organisms (1/3 known) and their importance and place in ecosystems
  • The meaning of life is to continue

Free will:
The human mind is entirely physical, but so complex and granular (it considers many parameters, slightly different from human to human) that we DO NOT understand how it works, so we attribute that chemical decision-making process to an abstract idea called “free will”

Culture:
The combined product of learning algorithms and environment.

Mathematics:
Finds relations (patterns) between entities, but not necessarily:
  1. explaining why those relations exist
  2. searching in the real world

Science:
The combined knowledge of humans

Communication – mutual exchange of ideas in search of synergy
Morality – mutual exchange of support in search of synergy
Trade – mutual exchange of resources in search of synergy
Sex – mutual exchange of genes in search of synergy
Cooperation – mutual exchange of something in search of synergy

Evolution:
  • Endless sequence of challenges, failure is death, success is sex + slight genetic change (usually NOT beneficial)
  • Dangerous and harmful events can be beneficial - killing weak genes, allowing multiplication of those successful enough to survive
  • Evolution is not deliberate, it doesn’t see the future. It is a simple aimless bruteforce. Maybe this is one of the reasons genetic algorithms kinda suck for computing concrete problems?
  • Evolution is relatively slow, compared to organisms’ life span
  • Evolution is a continuous process that can be represented mathematically. It is a product of physical laws and chemical reactions. It is not an entity, not guided by some mysterious power and not sentient.
  • Pyramid of evolutionary dependencies - the simplest organisms are the supports, the complex achievements are at the top. Kill the supports and the structure will collapse, kill the top and the structure will just continue to grow.
  • Evolution is a great sum of events, there is never a single reason for a result, so no explanation of a result is entirely complete
  • Evolution does not improve single organisms but the whole gene pool. You, the individual, are not important, just a gene carrier. If your death is beneficial for the group, you will die – animal altruism phenomenon

Toy:
A game without specified goal, useful as an exploratory tool instead.

Evil:
- egoistic / amoral
OR
- does not / cannot think - stupid
OR
- not initiative, passive, lazy

Society:
Peacefully dealing with people you cannot stand :D

Transcendence


What is the definition of “Transcendent”?

I was told:
“Something no person understands rationally, so there is no point in trying” … like:
  • human soul
  • God
  • Eternity
  • Love

Let’s travel back in time to ancient Egypt and look at the worshipers of the sun god Ra. Let’s ask those people about:
  • their god, the burning ball of gas in the sky
  • DNA structure
  • Complex numbers
They lack the tools to observe and the background to understand those things (knowledge is difficult to consume in one bite, so careful explanation will not do) … so those things are transcendent to them … and even more so to our oldest mammalian ancestors … but not to us.

Let’s assume that X is ANY idea without rational explanation. Experience shows that X can gain a rational explanation with time (brain evolution, curiosity, new ideas, new discoveries, new observation tools – this happens all the time in science: old theorems collapse, new theorems emerge, complex theories are immensely simplified by new knowledge) … so our definition of Transcendence is no longer applicable to X. And we do NOT want to put the Human Soul and complex numbers in one group, that is too cruel, right? So we need a better definition:

Transcendence = “Something no person understands rationally and will ever be able to understand rationally, no matter the tools, knowledge or evolution provided … so there is no point in trying”

Now we include the future too, and the definition is rock solid.
And it is possible for such entities to exist.

Now that you know what transcendent means, how do you find the transcendent ideas? How do you know that something will never ever be understood by evolutionary or scientific progress of our species in a billion years?
  • Today you cannot see into the future and say that.
  • Judging by us is also not an option, as the perceptual limitations of the next generations are different from ours.
  • You cannot use probability. Zero entities have turned out to be truly transcendent by definition in our recorded history, while more than zero turned out not to be transcendent. So it is more probable for something NOT to be transcendent.

Conclusion:
Every time you claim something is transcendent, you just put up a “there is no point in trying to study here” sign, with no way for you to know that. This is called a lie.

Using the word “Transcendent” is a lie. 

What indies can do in order to survive


Maintain ownership

Connect with individual customers

Remain emotionally involved

Multiplayer for longer life

Online distribution

Frequent free expansions & improvements

Procedural content

User-generated content

Small budget to keep the risk low, so the game can explore themes freely and be competitive through innovation

More games for lower price