
more ideas on anonymity



The question of what specifically to prohibit from being posted
anonymously has come up, and it is one of our most serious and
sensitive considerations. Of course, the decision rests largely in the
hands of the anonymous server operator, but no generally accepted
guidelines currently exist, and we might help some people and advance
the cause by codifying `legitimate use'. First, let me assure you that
my intent and preference are that any such code be as liberal as
possible.  Let's look at some of the options (I'm walking on eggshells
here, please don't flame me too much):

1) operator makes decision for every posting brought to his attention. 

Things that would test this system: what about `revisionists' (not an
ugly enough term) who claim that the Holocaust never happened? Or
someone who posts extremely provocative but fabricated data?  (The
first case happened on Prodigy--the censors let it through at one
point, and the episode was documented in a column by Alan Dershowitz,
the famous American lawyer who has defended Mike Tyson and other major
celebrities. The second case happened with the now-infamous Challenger
transcript posting, where an anonymous user of penet posted, without
any comment, a `transcript' of the shuttle crew's dialog during the
crash.)

Here, I think one policy might be that if the poster seems to be
repeatedly and blatantly fabricating the data himself, some
restriction or warning is in order. But if the poster cites a `real
source' (no matter how trashy) from the outside world, and makes it
clear that they are not the originator, only the purveyor
(`messenger'), perhaps the offense is less serious.  (I think Mr.
Helsingius' current standards in this area should be held up as an
outstanding model of commitment to privacy and free speech.)

2) some kind of global system for keeping track of `abusive' posters.

Here are some interesting ideas--how about lists circulated among
anonymous server operators only (not public) that record barred users
by their email address or even real identity? The lists could be
categorized and tagged so that the administrator can prohibit use based
on the seriousness of the offense.  Here are some things that operators
`might' look at:

1) ad hominem attacks
2) flame baiting
3) lying outright
4) defying Usenet conventions: posting copyrighted material, binaries
to regular groups, massive amounts of data, etc.
5) number or existence of *any* complaints
6) `racist' remarks
7) terrorism
8) `harassment'
9) anything illegal in the poster's country (yes, tricky I know)

ad infinitum ad nauseam. Maybe we could try to organize the severity
of this kind of stuff, and classify servers as `type 1' or `type 2' so
we get a feel for how liberal or conservative the operator is. The
operator would say which lists he subscribes to, and which lists your
email address will go on if you abuse the site (a rough sketch of what
such a list entry and policy might look like follows below). Really
extreme operators (like Mr. Kleinpaste) might actually be interested in
`public' lists -- abusers get their email addresses, along with the
offense, posted on the public list, i.e. `outed'.  Now, I think a lot
of this is pretty unpalatable, but we have things to gain by
formalizing these mechanisms, and as long as the anonymous user is
*warned* in the server's intro-use message, possibly has avenues of
redress, and has a choice of different servers, the system could be
fairly agreeable to most.  Remember, no one is preventing operators
from being as conservative or liberal as they like; the only thing
wanted is adherence to their stated policies.
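
To make this concrete, here is a minimal sketch in Python of what a
circulated list entry and an operator's filtering policy might look
like. Everything here--the field names, the offense categories, the
severity ordering, the thresholds--is my own invention for
illustration, not anything that actually circulates:

    # Hypothetical sketch (all names and severity numbers invented)
    # of a shared `barred user' list entry and an operator's policy.

    OFFENSES = {                   # rough severity scale, 1 = mildest
        "any_complaint": 1,        # existence of *any* complaints
        "flame_bait":    2,
        "ad_hominem":    2,
        "conventions":   3,        # copyrighted material, binaries, etc.
        "lying":         4,
        "harassment":    5,
        "illegal":       6,        # illegal in the poster's country
    }

    # One record on a circulated (non-public) list.
    entry = {
        "address": "an12345@remailer.example",  # or real identity, if known
        "offense": "harassment",
        "note":    "pointer to the complaint mail, not reproduced here",
    }

    def barred(entry, threshold):
        """A liberal (`type 1') operator sets a high threshold;
        a conservative (`type 2') operator sets a low one."""
        return OFFENSES[entry["offense"]] >= threshold

    print(barred(entry, threshold=5))   # liberal operator: True
    print(barred(entry, threshold=2))   # conservative operator: True

The interesting design question, of course, is who gets to assign the
severity numbers; presumably each list would publish its own scale.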

Look what we have to gain. Currently, there is a lot of censorship
(attempts? conquests?) going on behind the scenes, as a recent episode
here attests. No one really knows how effective current efforts to
hunt down and bar `abusive' users actually are (hence a lot of
misinformation and paranoia about the effects of anonymous servers). If
we could have some *statistics* showing that x% of nonanonymous users
draw complaints and y% of anonymous ones do, this would be very useful
for gauging the social impact of our technologies.  (There could be
some very surprising results---I get the impression that many very
responsible people *prefer* anonymity, and conceivably the overall
complaint rate under anonymity could even be *lower*.)
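
As a toy illustration (all of the numbers below are made up), the
comparison itself is trivial arithmetic once servers keep counts of
users and complaints:

    # Toy comparison with made-up numbers: complaint rates for
    # non-anonymous vs. anonymous posters at a hypothetical site.
    nonanon_users, nonanon_complaints = 10000, 150
    anon_users,    anon_complaints    =  2000,  24

    x = 100.0 * nonanon_complaints / nonanon_users   # 1.5%
    y = 100.0 * anon_complaints    / anon_users      # 1.2%

    print("non-anonymous: %.1f%% draw complaints" % x)
    print("anonymous:     %.1f%% draw complaints" % y)
    # With these fictitious numbers the anonymous users draw *fewer*
    # complaints per capita -- exactly the kind of surprise real
    # statistics might turn up.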

3) Possibility of net.trials

Ok, so don't flame me too much on this one. But if `abusers' (this
could start with anonymity on a local server, but eventually extend to
other realms) were subject to a `trial by peers', imagine what this
could do to enhance the legitimate reputation of networks. Suddenly, a
judicial system. I certainly don't want to be known as an advocate of
bringing in lawyers and bureaucrats. Actually, that's precisely why I'm
proposing this: to prevent that scenario. Imagine that the net
establishes these formal self-regulating mechanisms.  People in
real-world law enforcement would be much less likely to become enraged
by perceived abuses when they realize that there are intrinsic
mechanisms for quelling the psychopaths (uhm, maybe, anyway).  Also, if
people weren't added to blacklists just by the caprice of one operator,
but only after a `trial' perceived as fair, people at other sites would
be much more willing to enforce the sentences of suspension, expulsion,
or whatever.  An electronic trial by peers? (with voting at the end?)
A very interesting idea.  Each server might develop a kind of peer or
family structure, keeping kin in line.

Maybe everyone who replies to an anonymous message could vote in their
header on whether to get rid of the user, with the default being `one
vote of approval' (and with limits on how often any one person can
vote). Approvals add, complaints subtract. The user starts with some
initial balance; if it ever reaches zero, *poof*.  Lots of `approval'?
No problemo. Post something really outrageous? You might get enough
zaps to lose it all.  Imagine, this could improve the accountability
of users in *general* (the mechanisms could be applied to new Usenet
groups, for example, or, if proven trustworthy and fair, even to
logins themselves).
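
Here is a minimal sketch of that balance mechanism. The initial
balance of 10, the one-vote-per-replier limit, and removal at zero are
all arbitrary choices on my part, just to pin the idea down:

    # Minimal sketch of the approval-balance scheme described above.
    # Initial balance, vote limit, and cutoff are arbitrary choices.

    class AnonUser:
        def __init__(self, initial_balance=10):
            self.balance = initial_balance
            self.voters  = set()     # each replier may vote only once

        def vote(self, voter, approve=True):
            """Default vote, per the header convention, is approval."""
            if voter in self.voters:
                return               # limit voting regularity
            self.voters.add(voter)
            self.balance += 1 if approve else -1

        def active(self):
            return self.balance > 0  # balance hits zero -> *poof*

    user = AnonUser()
    user.vote("alice")                  # approval:  balance 10 -> 11
    user.vote("bob", approve=False)     # complaint: balance 11 -> 10
    print(user.balance, user.active())  # 10 True

The initial balance and the vote limit are the obvious knobs for
tuning how forgiving a given server wants to be.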

I've been a bit vague and ambiguous in some of these statements. This
is because, as I hope has become clear, the kinds of things that start
out on anonymous servers could eventually have a much greater scope,
so that it `behooves' us to develop effective and dynamic mechanisms
for self-regulation.  Keep in mind that a lot of these things are
happening already, albeit in much less formal ways. For example, the
convention is to send complaints about a site's users to its system
administrator, who acts as judge and jury (or follows whatever other
local procedures are in place).  The user may or may not be able to
justify their actions (redress).  There is already loose cooperation
among administrators, especially over extremely abusive posters.  We
already get somewhat public `trials' of extremists, where people put
forward all the evidence on Usenet and argue both sides.
`Enforcement' and `punishment' sometimes consist of revoking logins,
feeds, or whatever.  I think we ultimately stand to gain by
`formalizing' a lot of the currently informal mechanisms in place.

My feeling is that if we don't head off these issues at the pass, so
to speak, Real World (tm) courts will start deciding them for us.
Let's develop something we can be proud of, something that will be a
model of excellence for the future, not something frail and unstable.


Perhaps our anonymous motto:

``I disagree with what you say but will defend to my death your right
to say it.''
--attributed to Voltaire (actually written pseudonymously, by his
biographer Evelyn Beatrice Hall as `S. G. Tallentyre')