Re: towards a theory of reputation



Hal writes:
>From: Scott Brickner <[email protected]>
>> Analytically, using an escrow agent doesn't change the utility
>> function.  It replaces the trading partner's honesty reputation
>> estimate with the escrow agent's (which is presumably higher, or why
>> use them?).  This is just a parameter substitution.
>> 
>> Whence comes the intractability?
>
>By the "utility function" I was referring to Wei's model in which each
>person has an idea of how much "utility" (a general summation of
>personal value and usefulness) they would get from another person, as a
>function of cost.  The utility function takes cost as input and returns
>"utiles" (or whatever) as output.  So, with this model, using an escrow
>agent would change the utility function; for a given cost, the utility
>of a person to me would change (say, if the person involved were
>thought to be dishonest, then the presence of escrow agents would make
>him more useful to me).  The utility function in Wei's model is a curve
>where the Y axis is utility and the X axis is cost.  Changing the
>importance of honesty will change the position and shape of this
>curve.
>
>I think it would be more tractable to have a model in which honesty
>played an explicit part.  We might even make assumptions about the
>mathematical relationship between honesty and overall utility - for
>example, that utility to me would be monotonically increasing in the
>other guy's honesty.

I had in mind that the utility function was being used by some agent to
determine its course of action.  Imagine the agent trying to determine
which of several services to use.  It may reasonably be expected to
evaluate the utility function for each one, and choose the one with the
highest utility.  "Reputation for honesty" is one parameter to the
function.  Price, turnaround, and reputation for quality are others.  A
smarter agent could consider "metaservices" which bundle the given
service with an escrow agent.  The net effect is to let the agent
substitute the escrow agent's honesty reputation for the service's in
the evaluation, regardless of the internals of the model.
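
To make this concrete, here is a toy sketch in Python.  The numbers,
the names, and the shape of the utility function are all invented for
illustration; the only point is the substitution.

    # Toy model: an agent picking the best of several services.
    # Everything here (values, fees, reputations) is made up.

    def utility(price, turnaround, quality_rep, honesty_rep):
        """Value of a completed job, discounted by the chance of
        being cheated, minus price and a per-day delay cost."""
        job_value = 100.0
        return honesty_rep * quality_rep * job_value - price - 2.0 * turnaround

    # (price, turnaround in days, quality reputation, honesty reputation)
    services = {
        "alice": (20.0, 1.0, 0.9, 0.95),
        "bob":   (10.0, 2.0, 0.9, 0.50),   # cheap, but shady
    }
    escrow_fee, escrow_honesty = 2.0, 0.99

    offers = {}
    for name, (price, days, quality, honesty) in services.items():
        offers[name] = utility(price, days, quality, honesty)
        # Metaservice: the same service bundled with an escrow agent.
        # The escrow agent's honesty reputation replaces the service's,
        # and the escrow fee is added to the price.  Nothing else changes.
        offers[name + "+escrow"] = utility(price + escrow_fee, days,
                                           quality, escrow_honesty)

    best = max(offers, key=offers.get)
    print(best, offers[best])          # -> bob+escrow 73.1

Note that bundling with escrow is a pure parameter substitution: the
utility function never hears about escrow at all, which is exactly why
the internals of the model don't matter.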

>What I mean is something like this.  Let t be the degree of trust
>necessary for a business relationship to be consummated.  For t=0, no
>trust is needed, and the relationship is such that neither party takes
>any significant risk - a cash sale, perhaps.  For t=1, in some sense
>total trust is needed, and a party can cheat the other with 100% safety.
>
>Now let h(t) be the honesty reputation of a person, so that the utility
>which people expect to receive from them gets multiplied by h(t).  For a
>person with a reputation for honesty, h(t) is close to 1 for all t.  For a
>person who seems dishonest, h(t) will go from 1 to 0 as t goes from 0 to
>1.
>
>This is all pretty hand-wavy, but the idea would be to come up with good
>strategies to estimate h(t) from a person's behavior, and good ways to
>choose what kind of behavior one should follow given the value(s) of t
>which are prevalent in the market.  This kind of analysis would lead you
>to focus on the importance of the amount of trust needed in a transaction.
>The underlying utility function is based on such traditional factors as
>productivity and reliability.  It won't change as we consider the
>variables of our analysis, because we have factored out the honesty and
>trust issues so that they are more explicit.  That's the kind of
>direction I was suggesting.

The strategy for estimating h(t) should be wholly independent of the
utility model.  Otherwise you'd be effectively unable to make efficient
use of rating services, which do such evaluations as their business.
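
For concreteness, here is the sort of evaluation a rating service
might publish, again as an invented Python sketch: a binned estimate
of h(t) from a party's transaction history, computed with no reference
to anybody's utility model.

    # Each record is (t, honest): the degree of trust the transaction
    # required, and whether the party behaved honestly.  The history
    # and the binning scheme are made up for illustration.
    history = [(0.1, True), (0.2, True), (0.3, True), (0.4, True),
               (0.6, True), (0.6, False), (0.8, False), (0.9, False)]

    def estimate_h(history, bins=4):
        """Observed rate of honest behavior among transactions whose
        trust requirement t fell in each of `bins` equal-width bins."""
        counts = [[0, 0] for _ in range(bins)]      # [honest, total] per bin
        for t, honest in history:
            i = min(int(t * bins), bins - 1)
            counts[i][0] += honest
            counts[i][1] += 1
        # Default to 1.0 in an empty bin (a choice, not a law): no data there.
        return [h / n if n else 1.0 for h, n in counts]

    print(estimate_h(history))   # -> [1.0, 1.0, 0.5, 0.0]

The service only ever sees (t, outcome) pairs; what clients do with
the resulting curve, multiplying their expected utility by h(t) as Hal
suggests, or anything else, is their own business.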