
User:Volunteer Marek/gt


Game theoretic models of Wikipedia behavior.

Analysis of dispute and conflict on Wikipedia.

Definitions

  • "Dispute" - simply a disagreement between two editors. A dispute can be resolved through compromise and discussion or it can be resolved through "conflict".
  • "Conflict" - a particular way of settling a dispute. Generally involves one or more of the following: edit warring, personal attacks, derailment of talk page discussions, misrepresentation of sources, block shopping for one's content opponents.

Topic areas with persistent and multiple conflicts become "battlegrounds".

Generally we consider topic areas where disputes are settled through compromise to be unproblematic; hence the focus here is on the problematic topic areas (battlegrounds) and to what extent, and how, they can be turned into "compromise-grounds".

Strategies (in the game theoretic sense, i.e. "actions"):

{C} - compromise

{F} - fight

Outcomes:

{C,C} - compromise outcome

{C,F} and {F,C} - one party "wins" the dispute and the other (compromising) party gets screwed, but outright conflict is avoided

{F,F} - both editors fight each other

Payoff mappings (generic, adjusted as needed below)

{C,C}->(a,a)

{C,F}->(b,c)

{F,C}->(c,b)

{F,F}->(d,d)

Explanation of payoffs: "a" is the payoff a person gets when the result is compromise. "d" is the payoff a person gets when the result is an outright conflict between the two editors. "b" is the payoff a person gets when they "lose", in the sense that they chose to "compromise" or accommodate the other player, who then chose to fight and insisted on implementing their own POV. "c" is the opposite case: it is the payoff a person gets when they "win", in the sense that they chose to "fight" and push their POV while the other player relented (by choosing to "compromise").

Player types

Type 1's have the following preference ordering:

They prefer "compromise" to "winning" (a>c) but don't want to be taken advantage of (d>b) (ordering between c and d and a and b doesn't matter in the static game, but it does matter in the evolutionary version of the game)

Type 2's have the following preference ordering:

They prefer "winning" to "compromise" (c>a) and of course they also prefer "conflict" to being taken advantage of (d>b).

Static


General assumptions on parameters


d>b

This is just the "nobody likes to get screwed" assumption

a>d

This one is less innocuous: it says that even Type 2's prefer {C,C} to {F,F}. Of course there may be (are) individuals who prefer d>a (i.e. ones who enjoy conflict for conflict's own sake). The question then becomes one of the appropriate welfare measure: should "purely selfish" preferences be given the same weight in the Social welfare function as "enlightened self-interest" preferences? This is a normative rather than a positive (ugh, another crap economics article) issue. It has implications for appropriate policies but not for the analysis of the behavior itself.

2 types, 3 different possibilities


The resulting Nash equilibria are noted under each payoff matrix below.

Case 1 - Two Type 1 players

Preferences for both players: a>c, that is both editors put compromise (outcome a) over getting their own way at all cost (outcome c)

      C      F
C    a,a    b,c
F    c,b    d,d
Fig. 1 (Nash equilibria: {C,C} and {F,F})

A standard multi-equilibrium coordination game. Since a>d, {C,C} is a focal equilibrium, and coordinating on it requires little effort - for example, simply announcing one's type, per the classic "let's meet in Paris" focal-point example. WP:AGF is a policy/institutional feature which is meant to enable coordination on the "good equilibrium".
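
Continuing the sketch above (payoff and the illustrative TYPE1 numbers as defined there), a small enumerator confirms the two pure-strategy equilibria:

    from itertools import product

    def pure_nash(t1, t2):
        """All pure-strategy profiles where neither player gains by deviating."""
        eqs = []
        for s1, s2 in product("CF", repeat=2):
            ok1 = all(payoff(t1, s1, s2) >= payoff(t1, x, s2) for x in "CF")
            ok2 = all(payoff(t2, s2, s1) >= payoff(t2, x, s1) for x in "CF")
            if ok1 and ok2:
                eqs.append((s1, s2))
        return eqs

    print(pure_nash(TYPE1, TYPE1))   # [('C', 'C'), ('F', 'F')] - the two equilibria of Fig. 1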

Case 2 - Two Type 2 players

Preferences for both players: c>a, which implies c>a>d>b; that is, both editors care more about winning disputes than achieving a (possibly imperfect) compromise solution. Note that this is different from the more extreme case of editors who not only want to "win at all cost" but also prefer conflict for its own sake. Hence, this is a "weak" assumption.

      C      F
C    a,a    b,c
F    c,b    d,d
Fig. 2 (Nash equilibrium: {F,F})

Note that because a>d, both Type 2's would actually prefer outcome {C,C} to outcome {F,F}. This is the classic Prisoner's dilemma.

Case 3 - the mixed case, one Type 1 and one Type 2 player

      C      F
C    a,a    b,c
F    c,b    d,d
Fig. 3 (Nash equilibrium: {F,F})

Preferences for Type 1: a>c and d>b - this is an editor who puts compromise over getting their own way but at the same time does not want to get taken advantage of.

Preferences for Type 2: c>a (c>a>d>b) - as in Case 2, this is an editor who wants to "win at all cost".

Note that because a>d, even the Type 2 would actually prefer outcome {C,C} to outcome {F,F}. This is also a Prisoner's dilemma.
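
The same enumerator run on Cases 2 and 3 (with the illustrative TYPE2 numbers from the sketch above) returns the identical, unique outcome - which is exactly the observational-equivalence problem discussed below:

    print(pure_nash(TYPE2, TYPE2))   # [('F', 'F')] - Case 2: F is strictly dominant for both
    print(pure_nash(TYPE1, TYPE2))   # [('F', 'F')] - Case 3: observationally identical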

Implications

  • Case 2 and Case 3 are indistinguishable to an outside observer. But in Case 3 Type 1 would like to play C; because she knows Type 2 will choose F, she chooses F as well (since d>b, i.e. the "nobody likes to get screwed" assumption). In Case 2 neither player wants to play C anyway.
  • Type 1's will play C only if they believe the other player to also be a Type 1. But that's a necessary, not a sufficient, condition. The necessary conditions (assuming, wlog, that Player 1 is the Type 1) in terms of epistemic logic are:
    • Player 1 believes that Player 2 is Type 1
    • Player 1 believes that Player 2 believes that Player 1 is Type 1
    • Player 1 believes that Player 2 believes that Player 1 believes that Player 2 is Type 1
    • This is like Common knowledge (logic) (actually like Common belief (logic), which needs an article), but I think the iterations stop here.

The important thing to note is how stringent the conditions are for even Type 1's to act as in Case 1 rather than Case 3. And even then they still have to coordinate on the focal point!

Policy (administrator) intervention


There are several possible ways that administrator intervention, in the form of blocks, bans or other sanctions, can hope to alter the situation and potentially improve upon a "no intervention" outcome. At the same time, looking at the actual analysis makes it immediately obvious that a naive belief that "any kind of intervention is better than no intervention" can do a lot more harm than good. There are two basic ways we can model policy intervention here: (1) hoping to change the proportion of "good faith-ed" editors vis-a-vis the "bad faith-ed" editors by removing apparently troublesome editors from the topic area, or (2) hoping to change the actual payoffs to compromise vs. conflict, thus altering the incentives to engage in each kind of action. Unfortunately, both kinds of intervention are constrained by the fact that Cases 2 and 3 are indistinguishable to outside observers, so any kind of sanctions/policies are just as likely to fall on "good faith-ed editors caught up in disputes" as on inherently "bad faith-ed editors". Policy change (1) is essentially dynamic in nature, so I'll leave it to the section on Evolutionary game theory below. (2) is static, so it's analyzed below.


Intervention in the static game


The stick

The easiest way to model administrative intervention is to assume that administrators punish conflict. This means that if conflict - the outcome {F,F} - occurs, both players suffer a cost, which in turn lowers the payoff they get in that case. "B" in the matrices below represents the cost of conflict (blocks, bans, sanctions, weighted by the probability of these occurring; so it's best to think of it in expected value terms).

      C        F
C    a,a      b,c
F    c,b    d-B,d-B
Fig. 4, New equilibrium with administrative action in Case 1 (Nash equilibrium: {C,C}, for B > d-b)

Assuming that B is large enough (i.e. the punishment is non-trivial, specifically B > d-b), administrative action "works" in this case: it eliminates the "bad equilibrium" from the two that exist without intervention. However, this is the one case where one would expect cooperation to emerge on its own anyway, since the "good equilibrium" {C,C} was focal even before. So it's just making a good situation... still good.
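
In the running sketch, a hypothetical sanction cost B > d-b pushes the conflict payoff below b and removes {F,F} from the equilibrium set:

    B = 2                                        # hypothetical sanction cost; B > d - b = 1
    TYPE1_STICK = dict(TYPE1, d=TYPE1["d"] - B)  # conflict now pays d - B
    print(pure_nash(TYPE1_STICK, TYPE1_STICK))   # [('C', 'C')] - only the good equilibrium survives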

      C        F
C    a,a      b,c
F    c,b    d-B,d-B
Fig. 5, New equilibrium with administrative action in Case 3 (Nash equilibrium: {C,F} - Type 1 plays C, Type 2 plays F)

In this case the threat of sanctions "solves" the conflict by facilitating a takeover by the bad faith-ed editor. If Type 2 were to cooperate, Type 1 would also choose C. But if Type 1 chooses C, Type 2 responds with F, because that way she "wins" and no conflict occurs. Type 1 could respond to F with an F of her own - indeed, without admin intervention that's precisely what she would do - but here the threat of the block deters her and she just gives up.
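
The sketch reproduces the takeover: with the hypothetical sanction in place, the unique equilibrium has the Type 1 compromising and the Type 2 "winning".

    TYPE2_STICK = dict(TYPE2, d=TYPE2["d"] - B)  # same hypothetical sanction for Type 2
    print(pure_nash(TYPE1_STICK, TYPE2_STICK))   # [('C', 'F')] - the bad faith-ed editor wins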


      C        F
C    a,a      b,c
F    c,b    d-B,d-B
Fig. 6, New equilibrium with administrative action in Case 2 (Nash equilibria: {C,F} and {F,C})

Here, since both users care more about "winning" than compromise (c>a), we have a certain symmetry, and as a result there are multiple equilibria. However, each of the equilibria has the same character: one bad faith-ed editor "wins", the other "loses". Formally, the administrative action has just turned a Prisoner's Dilemma into a game of Chicken. Again, rather than being solved through compromise, the conflict is avoided by making one party "win" the dispute unilaterally. (The difference between this and the case above is that here we cannot predict which editor will win. In the case above we KNOW it will be the bad faith-ed editor.)
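
And the Chicken structure of Fig. 6 shows up in the sketch as two asymmetric equilibria:

    print(pure_nash(TYPE2_STICK, TYPE2_STICK))   # [('C', 'F'), ('F', 'C')] - Chicken: someone "wins"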

The basic lesson here is: "Intervention has the highest chance of success in situations where it is the least needed".

The carrot

The "carrot" kind of intervention rewards good behavior, compromise, rather than punishing bad behavior, conflict. It is represented by "R" (reward) in the matrix below. The asymmetry in the effectiveness of carrot and stick arises from the players' preferences and the fact that the compromise outcome {C,C} is the Pareto optimal one.

        C        F
C    a+R,a+R    b,c
F     c,b       d,d
Fig. 7, New equilibrium with administrative action in Cases 2 and 3 (Nash equilibria: {C,C} and {F,F}, for R > c-a)

The carrot can turn the Prisoner's dilemma of Cases 2 and 3 into the coordination game of Case 1 (it doesn't alter the equilibria if you start in Case 1 to begin with - there are still two of them). Hence, this kind of intervention does in fact have the potential to solve battleground conflicts. However, it is not clear what kind of realistic "carrots" can be employed on Wikipedia to reward editors who find themselves in disputes for resolving them through compromise.
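
In the sketch, a hypothetical reward R > c-a added to the compromise outcome restores the coordination-game structure in both Prisoner's dilemma cases:

    R = 2                                         # hypothetical reward; R > c - a = 1
    TYPE1_CARROT = dict(TYPE1, a=TYPE1["a"] + R)
    TYPE2_CARROT = dict(TYPE2, a=TYPE2["a"] + R)
    print(pure_nash(TYPE2_CARROT, TYPE2_CARROT))  # [('C', 'C'), ('F', 'F')] - Case 2 becomes Case 1
    print(pure_nash(TYPE1_CARROT, TYPE2_CARROT))  # [('C', 'C'), ('F', 'F')] - and so does Case 3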

There's a kind of "Administrator's Dilemma" here. Stick policies work only where they're not needed. Carrot policies could work in problematic subjects, but they don't really exist. Additionally, in a dynamic setting, once a dispute or two has been solved through the application of carrots, the area ceases to be a battleground. At that point attention is likely to lapse, and it may be thought that the disputes have been resolved, the problem solved, and there is no more need for intervention. But the analysis shows that as soon as the carrots cease being applied, the topic is likely to return to being a battleground.

Repeated


Evolutionary


With 2 Types


Type As always play C. Type Bs always play F. Let α denote the share of the population that is of Type A.

Replicator equation (a bit fudged for now):

dα/dt = f(π_A - π_B)

(with some restrictions on f to keep α bounded between 0 and 1)

where π_i is the expected payoff to Type i. This equation just says that the share of Type As in the population, α (hence also the share of Type Bs, 1-α), changes according to (a function of) the difference in the payoffs between the two types. If "cooperating" consistently yields a higher payoff than "fighting" then the relative number of cooperators in the topic area will increase. If, however, the payoffs are higher for "fighting", then cooperative editors will become exhausted, frustrated and leave, or even turn into Type Bs themselves. The payoffs themselves are of course a function of this share; roughly speaking, the more "cooperators" are out there, the higher the average payoff to cooperating (since the probability of getting screwed by a "fighter" is lower).

This sets up the possibility of self-reinforcing dynamics such as vicious or virtuous "cycles": if there are already lots of Type Bs, cooperators will have low payoffs, which will further decrease their number, and vice versa, with good behavior automatically reinforcing more good behavior. There's also the possibility of an "internal" steady state, where you wind up with a mix of fighters and cooperators whose shares are such that their payoffs are exactly evenly matched (this is an unlikely case, however).
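
A minimal simulation sketch of this dynamic, assuming the standard replicator form dα/dt = α(1-α)(π_A - π_B) and the linear expected payoffs spelled out below (all numbers hypothetical):

    def simulate(a, b, c, d, alpha0, steps=2000, dt=0.01):
        """Euler-step the replicator dynamic: alpha' = alpha*(1-alpha)*(pi_A - pi_B)."""
        alpha = alpha0
        for _ in range(steps):
            pi_A = alpha * a + (1 - alpha) * b   # expected payoff to always-C editors
            pi_B = alpha * c + (1 - alpha) * d   # expected payoff to always-F editors
            alpha += dt * alpha * (1 - alpha) * (pi_A - pi_B)
        return alpha

    # Hypothetical payoffs with c - a > d - b and b < d (case (4) below): even a
    # topic area that starts at 90% cooperators converges to all-fighters.
    print(simulate(a=3, b=0, c=5, d=1, alpha0=0.9))   # -> close to 0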

Need to generalize payoff notation.

For Type A (again assuming Player 1 is the Type A):

  • a is the payoff when the outcome is {C,C}
  • b is the payoff when the outcome is {C,F}
  • c is not applicable, as Type As never play F
  • d is not applicable, as Type As never play F

For Type B (assuming Player 2 is the Type B; own action written first):

  • a is not applicable, as Type Bs never play C
  • b is not applicable, as Type Bs never play C
  • c is the payoff when the outcome is {F,C}
  • d is the payoff when the outcome is {F,F}

Simple linear case, where:

π_A(α) = αa + (1-α)b
π_B(α) = αc + (1-α)d
dα/dt = α(1-α)[π_A(α) - π_B(α)]

(the α(1-α) factor supplies the boundary conditions keeping α between 0 and 1)

This is actually just the generic formula for this type of evolutionary game, which has wide application in Economics, Biology and Sociology. What matters here is its interpretation in the context of Wikipedia behavior. The payoff difference π_A - π_B = (b-d) + α[(a-c) - (b-d)] is linear in α, with intercept (b-d) and slope (a-c) - (b-d), so there are four possibilities: (1) {+intercept, +slope}, (2) {+intercept, -slope}, (3) {-intercept, +slope}, (4) {-intercept, -slope}.

(1) means the eventual equilibrium is α = 1; "A's drive out the B's" - "compromise editors" drive out the edit warriors. A topic area characterized by these parameters exhibits peaceful resolution of conflicts, lots of good faith-ed discussion, and disagreements settled through compromise. It is essentially the idealized version of how Wikipedia should work. Unfortunately it is also the most unlikely of cases, as shown below (it should not come as a surprise to anyone with even a smidgen of common sense that a Utopian ideal is also the most unlikely of outcomes).

(2) means there is a single internal stable equilibrium: we get a mix of Type A's and B's in the long run. These kinds of topic areas will exhibit mild but persistent conflict. This is a very plausible outcome if a particular topic area is in some way "inherently uncontroversial", but it is unrealistic with respect to topic areas that are inherently controversial.

(3) means there is one internal but unstable equilibrium and two stable corner solutions (another crap article). So momentarily you might have a mix of types, but sooner or later (sooner!) the system will evolve to having only one type of editor. This is the kind of topic area that could have benefited from early and intelligent intervention. If it got it, the area is no longer controversial and hence outside the scope of this analysis (since the focus here is on controversial areas). If it didn't, it looks just like the dysfunctional case (4) - a parallel to the static game, where good faith alone is not enough. Path dependence.

[Graph illustrating the dynamics of the system under (3): one internal but unstable steady state and two stable corner states.]

(4) This is the opposite of case (1): the eventual equilibrium ends up being α = 0, i.e. "B's drive out the A's" in a Wikipedia version of "bad money drives out good money". Compromise-minded editors wind up in an environment where any attempt at compromise gets taken advantage of. As a result they leave the project or adopt a different strategy. Edit warriors thrive, EVEN THOUGH they themselves would prefer a compromise solution. Given plausible parameter values this is the most likely outcome for any topic area that is inherently controversial. Unfortunately.

[Graph illustrating the dynamics of the system under (4): only one stable steady state, at the left corner where α = 0. "Bad editors drive out the good editors", in a version of Copernicus' (Gresham's) law.]

Case (4) will occur if c - a > d - b (negative slope) and b < d (negative intercept). In "plain English" the first condition just says that "edit warriors care more about winning their disputes" than "compromise minded editors worry about getting taken advantage of". It is the likely assumption (hence the realistic cases are (3) and (4)).

The second part is a bit more complicated. Note that together the two conditions imply that (a+b)/2 < (c+d)/2. This says that "the average payoff to assuming good faith, trusting other editors but potentially being taken advantage of" is less than "the average payoff to being a cynical asshole". This itself sounds sort of... cynical. I prefer "realistic". A good faith editor in these circumstances undertakes a substantial risk: if she trusts the other player and they turn out to be good faith-ed, she gets the compromise solution (which is second best for her), but the worst case scenario is that she gets taken advantage of (the "nobody likes to get screwed" assumption). The bad faith-ed editor, on the other hand, is playing a version of a Min-Max strategy. If the other person is good faith-ed, they "win" and get to enforce their viewpoint. If the other person is likewise bad faith-ed, the worst thing that happens is conflict - and by assumption these editors prefer conflict (d) to being taken advantage of (b). Hence, it's a "tails I win, heads you lose" situation. Again, this is the natural assumption in areas which are inherently controversial.
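
A short classification sketch ties the sign pattern of the linear payoff gap to the four long-run outcomes. It classifies by the sign of the gap at α = 0 and α = 1 (intercept and intercept-plus-slope), which is what pins down the dynamics of the linear case; the numbers are again hypothetical, chosen to satisfy both case (4) conditions.

    def classify(a, b, c, d):
        """Sign-classify pi_A(alpha) - pi_B(alpha) = (b-d) + alpha*((a-c)-(b-d))."""
        gap_at_0 = b - d          # intercept: payoff gap when everyone else fights
        gap_at_1 = a - c          # intercept + slope: gap when everyone else cooperates
        if gap_at_0 > 0 and gap_at_1 > 0:
            return "(1) alpha -> 1: compromisers drive out the warriors"
        if gap_at_0 > 0 and gap_at_1 < 0:
            return "(2) stable internal mix of both types"
        if gap_at_0 < 0 and gap_at_1 > 0:
            return "(3) unstable internal point, two stable corners"
        return "(4) alpha -> 0: warriors drive out the compromisers"

    print(classify(a=3, b=0, c=5, d=1))   # -> "(4) ..." since c-a > d-b and b < d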

So even though this kind of analysis is extremely simplified (for example, by assuming only two kinds of editors and two kinds of unsophisticated strategies), it can already shed light on why some areas of Wikipedia are prone to persistent conflict.