
Re: towards a theory of reputation



Wei Dai <[email protected]> writes:

>The first step toward a theory of reputation is defining what reputation 
>is.  The definition should correspond closely enough to our common sense
>notion of reputation so that our intuitions about it are not completely 
>useless.  I think a good definition is this: Alice's reputation of Bob is 
>her expectation of the results of future interactions with Bob.  If 
>these interactions are mainly economic in nature, then we can represent 
>Alice's reputation of Bob by a graph with the horizontal axis labeled 
>price and the vertical axis labeled expected utility.  A point (x,y) on 
>the graph means that Alice expects to get y utils in a business transaction 
>where she pays Bob x dollars.  Given this definition, it is natural to say 
>that Bob's reputation is the set of all other people's reputations of Bob.

This is an interesting approach.  However, it seems to fold issues of
reliability in with issues of quality and value.  If I have a choice of
two vendors, one of whom produces a product which is twice as good, but
there is a 50% chance that he will abscond with my money, I am not sure
how to value him compared with the other.  It seems like the thrust of
the later analysis is to determine whether people will in fact try to
disappear.  But that is not well captured, IMO, by an analysis which
just ranks people in terms of "utility" for the price.
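To make the difficulty concrete, here is a toy calculation (every
number in it is invented):

    # Toy comparison of the two vendors above, showing how a single
    # "expected utils at price x" figure folds reliability into quality.
    # All numbers are invented for illustration.

    price = 10.0                    # dollars paid to either vendor

    v1_utils_if_delivered = 10.0    # ordinary product, always delivered
    v1_p_deliver = 1.0

    v2_utils_if_delivered = 20.0    # twice as good...
    v2_p_deliver = 0.5              # ...but a 50% chance he absconds

    def expected_utils(utils_if_delivered, p_deliver, price):
        # If the vendor absconds, the buyer is out the price with
        # nothing to show for it.
        return p_deliver * utils_if_delivered - (1.0 - p_deliver) * price

    print(expected_utils(v1_utils_if_delivered, v1_p_deliver, price))  # 10.0
    print(expected_utils(v2_utils_if_delivered, v2_p_deliver, price))  # 5.0

The expectation hides whether a low figure comes from mediocre quality
or from the risk of outright theft, yet a risk-averse buyer, or one who
can structure the deal to cap his exposure, cares a great deal which
it is.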

>A reputation system consists of a set of entities, each of whom has a 
>reputation and a method by which he changes his reputation of others.  
>I believe the most important question for a theory of reputation to answer 
>is what is a good method (reputation algorithm) by which a person changes 
>his reputation of others.  A good reputation algorithm must serve his 
>self-interest; it must not be (too) costly to evaluate; its results must 
>be stable; a reputation system where most people use the algorithm must 
>be stable (i.e., the reputation system must be an evolutionarily stable 
>system).

I am not sure about this last point.  It seems to me that a good
reputation algorithm is simply the one which is most cost-effective for
its user.  Whether it is good for social stability is not relevant to
the person who is deciding whether to use it.  ("But what if everyone
behaved that way?  How would you feel then?")  Stability may be nice
for the analyst but not for the participant.
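To fix ideas, here is one deliberately cheap candidate algorithm (my
own strawman, not something from Wei's post, and the constants are
arbitrary): Alice's reputation of Bob is just an exponentially weighted
average of the utils-per-dollar of her past transactions with him.

    # A cheap-to-evaluate candidate reputation algorithm (a strawman;
    # the decay constant and optimistic initial value are arbitrary).

    class Reputation:
        def __init__(self, initial=1.0, decay=0.8):
            self.estimate = initial   # expected utils per dollar
            self.decay = decay        # weight retained by the old estimate

        def update(self, utils, dollars):
            observed = utils / dollars
            self.estimate = (self.decay * self.estimate
                             + (1.0 - self.decay) * observed)

    rep = Reputation()
    rep.update(utils=12.0, dollars=10.0)    # a good transaction
    rep.update(utils=-10.0, dollars=10.0)   # Bob cheats; the money is lost
    print(rep.estimate)                     # the estimate drops

Whether a rule this cheap actually serves its user's self-interest, and
whether a population of such rules is stable, is exactly the open
question.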

>In a reputation based market, each entity's reputation has three values.  
>First is the present value of expected future profits, given the reputation 
>(let's call it the operating value).  Note that the entity's reputation 
>allows him to make positive economic profits, because it makes him a 
>price-maker to some extent.  Second is the profit he could make if he 
>threw away his reputation by cheating all of his customers (throw-away 
>value).  Third is the expected cost of recreating an equivalent reputation 
>if he threw away his current one (replacement cost).

I don't really know what the first one means.  There are a lot of
different ways I can behave, which will have an impact on my reputation,
but also on my productivity, income, etc.  There are other ways I can
damage my reputation than by cheating, too: I can be sloppy or careless,
or just not work very hard.  So the first two values are really points
on a continuum of strategies I may apply in life.  The second is pretty
clear, but the first seems to cover too wide a range of behaviors to be
given a single value.
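To put numbers on that continuum (all of them invented): each way of
behaving has its own discounted payoff, and the "operating value" Wei
wants is really the best of the lot.

    # Each behavioral strategy has its own present value; the "operating
    # value" is really the maximum over all of them.  Numbers invented.

    def present_value(profit_per_period, discount_rate, periods):
        # Discounted sum of a finite stream of profits.
        return sum(profit_per_period / (1.0 + discount_rate) ** t
                   for t in range(1, periods + 1))

    r = 0.05
    strategies = {
        "diligent":      present_value(100.0, r, 20),
        "coasting":      present_value(80.0,  r, 20),  # not working very hard
        "sloppy":        present_value(60.0,  r, 20),  # careless, lower margins
        "cheat at once": 1100.0,                       # the throw-away value
    }
    for name, value in sorted(strategies.items(), key=lambda kv: -kv[1]):
        print(f"{name:>14}: {value:8.2f}")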

>Now it is clear that if a reputation's throw-away value ever exceeds its 
>operating value or replacement cost, its owner will, in self-interest, 
>throw away his reputation by cheating his customers.  In a stable reputation 
>system, this should happen very infrequently.  This property may be 
>difficult to achieve, however, because only the reputation's owner knows 
>what its values are, and they may fluctuate widely.  For example the 
>operating value may suddenly decrease when his competitor announces
>a major price cut, or the replacement cost may suddenly decrease when 
>he succeeds in subverting a respected reputation agency.

It would be useful to make some of the assumptions a bit clearer here.
Is this a system in which cheating is unpunishable other than by loss of
reputation, our classic anonymous marketplace?  Even if so, there may be
other considerations.  For example, cheating has costs of its own: the
various frauds must be timed so that victims don't find out and
extricate themselves from vulnerable positions before they can be
stung.  Also, as has been suggested here in the past, people may
structure their interactions so that their vulnerability to cheating is
minimized, reducing the possible profits from that strategy.
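That said, the knife-edge Wei describes, and the effect of structuring
interactions to limit exposure, can both be seen in a few lines (the
numbers are again invented):

    # Wei's cheating condition: the owner defects as soon as the
    # throw-away value exceeds either the operating value or the
    # replacement cost (i.e., the smaller of the two).

    def will_cheat(operating_value, replacement_cost, throw_away_value):
        return throw_away_value > min(operating_value, replacement_cost)

    operating_value = 2000.0    # PV of future profits under the reputation
    replacement_cost = 1800.0   # cost of rebuilding an equivalent one
    throw_away_value = 1500.0   # outstanding prepayments he could steal

    print(will_cheat(operating_value, replacement_cost, throw_away_value))  # False

    # A competitor's price cut halves future profits...
    print(will_cheat(1000.0, replacement_cost, throw_away_value))           # True

    # ...but if customers never leave more than, say, 800 prepaid at
    # once, the throw-away value is capped and the incentive vanishes.
    print(will_cheat(1000.0, replacement_cost, 800.0))                      # False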

>One way to answer some of these questions may be to create a model of 
>a reputation system with a simple reputation algorithm and a simplified 
>market, and determine by analysis or simulation whether it has the 
>desirable properties.  I hope someone who has an economist friend can 
>persuade him to do this.

It might be interesting to do something similar to Axelrod's _The
Evolution of Cooperation_, where (human-written) programs played the
iterated Prisoner's Dilemma against each other.  In that game, programs
had reputations in a sense: each program, when it interacted with
another, remembered all of their previous interactions and chose its
behavior accordingly.  The PD is such a cut-throat game that it
apparently didn't prove useful to create an elaborate
reputation-updating model (at least in the first tournaments; I
understand that in later rounds some programs of slightly non-trivial
complexity did well).
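A bare-bones round-robin of that flavor is easy to sketch (tit-for-tat
and always-defect are just the two classic entrants; nothing here is
specific to Axelrod's actual tournament code):

    import itertools

    # A minimal Axelrod-style round-robin.  A strategy sees the full
    # history of the current pairing, which is the limited sense of
    # "reputation" described above.

    PAYOFFS = {                    # (my move, their move) -> my score
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(mine, theirs):
        return "C" if not theirs else theirs[-1]

    def always_defect(mine, theirs):
        return "D"

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strat_a(hist_a, hist_b)
            move_b = strat_b(hist_b, hist_a)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    entrants = [("tit_for_tat", tit_for_tat), ("always_defect", always_defect)]
    for (na, a), (nb, b) in itertools.combinations_with_replacement(entrants, 2):
        print(na, "vs", nb, "->", play(a, b))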

What you might want to do, for simplicity, is to have your universe
consist of just one good (or service, or whatever), with some producers
who all have the same ability, and some consumers, all with the same
needs.  Where they differ would be in their strategies for when to
cheat, when to be honest, when to trust, and when to be careful.

At any given time a consumer must choose which producer to buy from.
The details of their interaction would appear to greatly influence the
importance of reputation.  Maybe there could be a tradeoff where if the
consumer is willing to pay in advance he gets a better price than if he
will only provide cash on delivery.  (Unfortunately, it seems like the
details of this tradeoff will basically determine the outcome of the
experiment.  However, maybe some parameter values will lead to
interesting behavior.)  Producers who want to cheat could do so by
offering unusually deep discounts for payment in advance, in order to
attract as many customers as possible before disappearing.  Consumers
might rightly be suspicious of an offer that looks too good.

Maybe it could be set up so consumers could cheat, too.  No, I think
that is too complicated: then producers would have to know consumers'
reputations, and I think it gets muddy.  Probably it would be simplest
to have only producers carry reputations.
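For what it's worth, here is a skeleton of that setup; every constant
in it (the prices, the trust threshold for prepaying, the fraction of
cheaters) is an arbitrary placeholder, and as noted above those choices
will largely drive the results.

    import random

    # Skeleton of the one-good market sketched above.  All constants are
    # arbitrary placeholders.

    PREPAY_PRICE = 8.0     # discounted price for payment in advance
    COD_PRICE = 10.0       # price for cash on delivery
    GOOD_UTILITY = 12.0    # utils from one delivered good

    class Producer:
        def __init__(self, cheater):
            self.cheater = cheater
            self.haul = 0.0
            self.gone = False          # cheaters vanish after absconding

        def sell(self, prepaid):
            if self.cheater and prepaid:
                self.haul += PREPAY_PRICE
                self.gone = True       # abscond with the prepayment
                return False           # nothing is delivered
            # Under cash on delivery even a cheater must deliver to be paid.
            self.haul += PREPAY_PRICE if prepaid else COD_PRICE
            return True

    class Consumer:
        def __init__(self):
            self.reputations = {}      # producer -> remembered delivery rate
            self.utils = 0.0

        def choose(self, producers):
            # Favor the best remembered delivery record; strangers get
            # the benefit of the doubt.
            return max(producers, key=lambda p: self.reputations.get(p, 1.0))

        def buy(self, producer):
            # Prepay (for the discount) only if the record looks clean.
            prepaid = self.reputations.get(producer, 1.0) >= 0.9
            price = PREPAY_PRICE if prepaid else COD_PRICE
            delivered = producer.sell(prepaid)
            if delivered:
                self.utils += GOOD_UTILITY - price
            elif prepaid:
                self.utils -= price    # the prepayment is lost
            old = self.reputations.get(producer, 1.0)
            self.reputations[producer] = (0.8 * old
                                          + 0.2 * (1.0 if delivered else 0.0))

    random.seed(1)
    producers = [Producer(cheater=random.random() < 0.2) for _ in range(10)]
    consumers = [Consumer() for _ in range(50)]
    for _ in range(100):               # trading rounds
        for c in consumers:
            active = [p for p in producers if not p.gone]
            if not active:
                break
            c.buy(c.choose(active))

    print("total consumer utils:", sum(c.utils for c in consumers))
    print("producer hauls:", sorted(p.haul for p in producers))

Varying the prepay discount and the trust threshold would be the
obvious first experiment.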

Hal