Tuesday, September 21, 2010

The Myth of Enlightened Self Interest, Part Three

In Part One I formulated the hypothesis of self interest, and in Part Two I showed how it can be mathematically disproved.  As expected, this series has generated a bit of discussion, and I am glad to say it has been almost entirely constructive.

To add another disclaimer, I am not into disproving self interest because I advocate a particular political or ideological position.  If I have an agenda, hopefully it comes out as the "agenda of being adaptable."  I consider political parties to be little more than sports teams and look at positions based on ethical merits--and as political platforms shift, so does my ability to stay aligned with them.  My own views also shift, but only as new information becomes available--thus I am human, and consistent in my inconsistency, but adaptive in nature.

Now, to return to the subject at hand: what to make of the disproved hypothesis.  As human beings we tend to find that the conclusions generated by game theory don't sit quite right with us.  When I first encountered it, I have to admit that I had a hard time accepting that the solution strategy to the Prisoner's dilemma was the "correct" one mathematically--and yet, there it is, unequivocally so.  The Tragedy of the Commons is a very real phenomenon despite what I would like to believe.  To paraphrase Al Gore, it is an "inconvenient truth," only in our case a mathematical one.
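
To make the arithmetic concrete, here is a minimal Python sketch of the dilemma (the payoff numbers are the usual textbook values, chosen purely for illustration):

    # Classic Prisoner's dilemma payoffs, written as
    # (my payoff, opponent's payoff); higher is better.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def best_response(opponent_move):
        """My best move against a fixed move by the opponent."""
        return max(("cooperate", "defect"),
                   key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

    # Defect wins against either move, so (defect, defect) is the
    # equilibrium -- even though mutual cooperation pays both players more.
    for opp in ("cooperate", "defect"):
        print(f"if the opponent plays {opp}, I should {best_response(opp)}")

Defection is the better move no matter what the other player does, which is exactly why the mathematics is so unequivocal.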

Let's take a look at another important game called the Centipede game.  By backward-induction analysis, the "correct" solution is to defect (take the pot) on the first turn.  And yet, as humans we recognize that our payoff is much higher if we wait a number of rounds.  It is as if the hypothesis of self interest flies in the face of our own (human) common sense!  As humans, we "recognize" that something is missing--a "lack of information," as my friend Taliver Heath put it.  And yet:
  • The system is mathematically sound and contains no contradictions.
  • It assumes perfect information by all players.
  • It incorporates strategies, not just short term responses to goals.
The mathematician Kurt Gödel once famously showed that any sufficiently powerful mathematical system must either contain contradictions (which ours does not) or rely on information that exists outside of the system itself.
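
To see how inexorably the Centipede analysis unravels, here is a small backward-induction sketch in Python (the pile sizes and the doubling rule are my own illustrative choices, loosely following Rosenthal's formulation):

    def solve_centipede(rounds=6, big=4.0, small=1.0):
        """Backward induction on a Rosenthal-style Centipede game.

        At each node the mover may 'take' (mover keeps the big pile, the
        other player gets the small one) or 'pass' (both piles double and
        the other player moves).  Passing at the final node ends the game
        with doubled piles and the passer receiving the small pile.
        Returns (mover_payoff, other_payoff, move) at the first node.
        """
        def value(node, big, small):
            if node == rounds - 1:
                pass_mover, pass_other = small * 2, big * 2
            else:
                nxt_mover, nxt_other, _ = value(node + 1, big * 2, small * 2)
                # After a pass the roles swap: today's mover is tomorrow's other.
                pass_mover, pass_other = nxt_other, nxt_mover
            if big >= pass_mover:
                return big, small, "take"
            return pass_mover, pass_other, "pass"

        return value(0, big, small)

    # Rational play takes 4 on the very first move, even though six mutual
    # passes would have grown the piles to 256 and 64.
    print(solve_centipede())   # -> (4.0, 1.0, 'take')

The mathematics is airtight; whatever makes us want to keep passing lives outside of it.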

This "lost information," in our case, is that bridge between the sheer mathematical element and what our human responses are.  Once we recognize a Prisoner's dilemma, for instance, our human inclination is to want to fix it.  Unfortunately, the language of single payoff game theory does not allow us to do that.

What we are really itching to do requires us to value something differently.  For instance, the Prisoner's dilemma does not have any language to incorporate "best result for both players" as an outcome.  The Centipede game does not have the language that gives us the patience or hope to wait for a bigger payoff later--because it might not come.  The Tragedy of the Commons does not have the language to allow "long term survival" as a different, achievable goal.  In order for us to value something differently, what we need to add to the language is multiple payoffs.

The idea of multiple payoffs isn't a new one.  It was first examined by Blackwell in 1956 and Contini in 1966.  In a May 1974 paper by M. Zeleny, the language of vector payoffs (including randomness) was described in terms of matrices, and the hypothetical optimal solution (lambda) was shown to be achievable through "linear multiobjective programming."  In that same paper, Zeleny showed how cooperation between players can, in fact, achieve better utilitarian results than traditional competitive game theory strategies (though, of course, this depends on the nature of the game).
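
As a rough illustration of the vector-payoff idea (the numbers and the weight vectors are mine, not Zeleny's), each outcome carries a vector of payoffs, and a weight vector lambda scalarizes them so that ordinary best-response analysis can proceed:

    import numpy as np

    # Prisoner's dilemma with vector payoffs for the row player:
    # axis 0 = my move, axis 1 = opponent's move, axis 2 = payoff
    # dimensions (personal gain, joint welfare).  Illustrative numbers.
    C, D = 0, 1
    payoffs = np.array([
        [[3.0, 6.0], [0.0, 5.0]],   # I cooperate, opponent plays (C, D)
        [[5.0, 5.0], [1.0, 2.0]],   # I defect,    opponent plays (C, D)
    ])

    def best_move(lam, opponent):
        """Best response once the payoff vector is scalarized by lam."""
        scores = payoffs[:, opponent, :] @ lam
        return "cooperate" if scores[C] >= scores[D] else "defect"

    # All weight on personal gain recovers the classic result: always defect.
    print([best_move(np.array([1.0, 0.0]), o) for o in (C, D)])
    # Enough weight on joint welfare makes cooperation the best response.
    print([best_move(np.array([0.3, 0.7]), o) for o in (C, D)])

Shift enough of lambda onto the second payoff dimension and "best result for both players" suddenly becomes expressible--and reachable.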

The bridge between multiple payoffs and the "human element" is explored thoroughly in a July 2000 paper [PDF] by Sudipta Sarangi.  The conclusions show an inherent lack of predictive power, and that "experimentation" is required to arrive at better games.

If I am going anywhere in this series of posts, it is to show that self interest alone isn't enough for an ethical ideology.  In fact, it is in many cases either counterproductive (as in the Centipede game) or self destructive (as in the Prisoner's dilemma).  Any practical set of human ethics requires an adaptive set of both competitive and cooperative strategies, an awareness of information and a capacity to learn.

9 comments:

  1. Well, what I really think you've done is show that gaming a complex model with simple systems isn't easy...which we all knew already!

    ReplyDelete
  2. Is there a debate that enlightened self-interest results in the perfect use of anything? I think it's only preferred when compared against the other options.

    In your first installment, you started discussing political ideologies, and this is why I keep returning to them as well, since it seems to be a point you're trying to make.

    Now, as a side note, humans are not rational actors, and this actually helps keep things fair. One famous game (the Ultimatum game) is simply the following.

    Two people play. Alice and Bob cannot see each other. Alice is given $100 to split between herself and Bob. If Bob agrees to the split, each keeps their share of the money. If he doesn't agree, both players get nothing.

    Now, Game Theory says that the winning move is for Alice to give one penny to Bob and keep the rest. After all, Bob will come out ahead, and so will Alice. However, when done experimentally, people tend to choose fairly -- because when unfair splits are made, Bob will often veto, even though it hurts him.

    So, the presence of this irrational actor is what makes people often choose "fairly," even though game-theoretic self interest implies that such fairness shouldn't exist.

    ReplyDelete
  3. Mordicai: I've done more than show that it is complex. I've quantified exactly *how* it is complex and the tools that can be used to model it. I was hoping to model some real world game systems, but don't have the bandwidth for the project at the moment.

    ReplyDelete
  4. Tal: "Is there a debate that enlightened self-interest results in the perfect use of anything?"

    Actually, yes. Many people, in fact, do use this as the basis for ethical systems. That is why I started out with political ideologies as examples: assumptions built on a faulty premise.

    "I think it's only preferred when compared against the other options."

    I don't think we've exhausted all the other possible options. In fact, I think our development was cut short here. The only other option I've seen as a counterexample is communism, which, as we've seen, is even more disastrous.

    In fact, I think that new options can and should be explored using multiple payoff models. The example you give is one such model, and it fits human behavior better: the invisible payoff of "fairness," to which you can easily assign a value inversely proportional to the difference between the actual payoff and 50%. Your game suddenly looks quite different, with the optimal solution now falling precisely in the middle--exactly where human behavior says it should go.
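
    Here is a minimal sketch of what I mean (the weight of 2 on the fairness term is an illustrative choice, not a derived one):

        TOTAL = 100

        def combined(money, offer, w=2.0):
            """Money payoff plus a fairness payoff that falls off linearly
            with the distance of the split from 50/50 (w is illustrative)."""
            return money - w * abs(offer - TOTAL / 2)

        def bob_accepts(offer, w=2.0):
            # Vetoing leaves Bob with nothing, so he accepts only if the
            # combined payoff of the offer beats zero.
            return combined(offer, offer, w) > 0

        def alice_payoff(offer, w=2.0):
            if not bob_accepts(offer, w):
                return 0.0
            return combined(TOTAL - offer, offer, w)

        best_offer = max(range(TOTAL + 1), key=alice_payoff)
        print(best_offer)   # -> 50: the optimum lands exactly in the middle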

    ReplyDelete
  5. Hmm, there was another post from Tal, but somehow it hasn't appeared here yet, or got deleted.

    I want to say, there is a difference between challenging the approach of using and modeling multiple variable payoffs, and questioning the wisdom of the payoff structure of a specific model. I believe Tal is doing the latter rather than the former, but I'll wait until his comment posts (if it does) to get a better idea--or Tal, maybe you can tell me?

    ReplyDelete
  6. Hmm, my last comment posted. Either Tal deleted what he was going to say or there are Blogger gremlins at work. Well, I thought what he had to say was cool and contributed to the discussion. But I don't want to assume he didn't delete it on purpose.

    There is a scenario where, if you run the Alice and Bob game iteratively, Bob isn't going to turn down any amount of free money, but the lack of fairness will build up and become resentment. I don't think the example conflicts with the model, since we still have two variables: money and fairness/resentment. My model might be a bit juvenile, since I haven't introduced any scaling factors, but I still think it works. What it requires is empirical data, Bayesian analysis, and correlation. Eventually you can frame hostility as an index in terms of the other variables in the model. You might say, "the probability of a class struggle is X, given N iterations of the model and the following correlated variables."
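
    Something like this toy index is what I have in mind (the linear accumulation and the logistic squash are placeholders for the scaling factors that empirical data would have to supply):

        import math

        def hostility_index(offers, total=100.0):
            """Accumulate resentment over iterated splits that Bob accepts
            anyway, then squash it into a 0-1 hostility index."""
            resentment = 0.0
            for offer in offers:
                # 0 for an even split, 1 for taking everything.
                unfairness = abs(offer - total / 2) / (total / 2)
                resentment += unfairness
            return 1.0 / (1.0 + math.exp(-(resentment - 10.0) / 2.0))

        # Bob never turns down the free money, but 50 rounds of $10 offers
        # leave him far more hostile than 50 rounds of $45 offers.
        print(hostility_index([10.0] * 50))   # ~1.0
        print(hostility_index([45.0] * 50))   # ~0.08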

    This is no different than what we already do in social sciences. What the multiple payoff game model might give us that we don't already have is an accurate picture of what is going on, how it could be fixed, and what the tradeoffs are.

    ReplyDelete
  7. I believe a key feature of human experience is being left out of the discussion here: emotion. Humans are not rational actors; if we were, I have no doubt we could apply Game Theory as you suggested and get closer to maximizing the lifetime of the Commons for all.

    There are many factors that lead to us acting irrationally, but I believe *fear of scarcity* is one of the emotional states most likely to lead to the Tragedy of the Commons and to irrationality in Games.

    I am interested in the study of how to alleviate this problem. One approach I have been exploring looks at how interpersonal relationships can be changed to help people fear scarcity less through new communications methodologies.

    No amount of speculation on the right mix of regulation vs. free market is going to make our society function better until people *actually believe* it's better for all of us. But to believe that, everyone has to stop being paranoid about everyone else.

    I am working on a paper on this topic and will post a link when I have one.

    ReplyDelete
  8. That's what I get for trying to write a large post from an iPhone, I guess. Hmm.

    Anyway, it is an interesting idea to work "unfairness" into the equation at a larger level. Perhaps that would even be something measurable -- in a video game, for example.

    And, to traumentwerfer's comment -- that is human nature. I believe distrust of strangers is steeped in our genetics. There's some interesting work resulting from this as well. The Nordic countries have very large social safety nets. However, as those societies have gained larger immigrant populations, calls have begun to change how those safety nets work.

    And the Tragedy of the Commons isn't about fear of scarcity -- it is exactly due to a scarce resource.

    ReplyDelete
  9. Right, Dave, like Tal suggests: the Tragedy of the Commons is about actual scarcity. The actual solution to the Prisoner's dilemma is the "rational" one, even though we don't want it to be.

    But fear is indeed something that causes irrational behavior, and that can easily lead to bad choices. For example, "Orange Alert" was raised every time John Kerry was ahead in the polls, and stopped happening altogether after the 2004 election.

    The effect of fear is something that has been studied and could probably be shown in the hypothetical multiple payoff model. Now I've decided to name the model Gort and its hypothetical programmer Klaatu!

    ReplyDelete