Re: RemailerNet



I think Jim Dixon has some interesting ideas in the RemailerNet.  But I have
a philosophical difference.  I dislike solutions where the users have
to put too much trust in the remailer operators.  IMO, as much control
as possible should be left in the hands of the users.  To make the system
easier to use, mail agents should be enhanced to be more powerful, rather
than moving more power and control into the remailer network.  Trusting
a remailer to choose your path through the network is like trusting the
sysop at your BBS to create your PGP key for you.  Maybe it's OK a lot
of the time, but isn't it better to do it yourself?

Jim Dixon <[email protected]> writes:

>Generally I would expect gateways to introduce themselves to one
>another privately and negotiate an understanding.  Part of this will
>normally take place off the Net.  This is an infrequent event, and
>so can be time-consuming and expensive.  The basic web of trust is
>that between gateways.	Once gateways had entered into a relationship,
>there would be frequent encrypted private traffic between them
>which would maintain the trust.

This is just the opposite of what I would like to see.  I don't want the
remailer operators getting too friendly.  That makes it all the easier for
them to conspire to track messages through the net.  I'd much rather choose
far-flung remailers whose operators have never heard of each other.  Get
one from Helsinki and the next from Timbuktu.  Choose a path which will
minimize the chances of all the remailers being corrupted.

>Gateways could be started up by anyone and some postings to
>alt.RemailerNet would be spurious.  The "gateway" could be a sink,
>just tossing traffic sent to it, or it could copy all messages to a
>TLA before forwarding them.  The user-gateway web of trust would
>therefore be far more problematical. I think that this would function
>as a market, and unreliable and untrustworthy gateways would be driven
>out over time.

I think this is right, although as I posted elsewhere I don't think Usenet
is the best structure for announcing remailer availability.  (As I said,
I'd rather see a few sites volunteer to do pings and publish the results,
or even better would be widely used software packages which let people
do their own pings.)  But the question of remailer reliability is hard.
What is the giveaway if a remailer is secretly archiving messages while
claiming not to do so?  How could you ever tell if the NSA infiltrated
your favorite remailer?
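
Just to make the do-it-yourself ping idea concrete, here is a rough sketch
of what such a package might do: mail a tagged test message to a remailer
with a request to send it back, then check later whether (and how quickly)
it arrived.  The addresses and mail host below are made up, and I am
assuming the usual cypherpunks-style "Request-Remailing-To:" convention:

    # remailer_ping.py: a sketch only.  REMAILER, REPLY_ADDR, and SMTP_HOST
    # are hypothetical, and the "::" / "Request-Remailing-To:" body format
    # is the cypherpunks-remailer convention assumed here.
    import smtplib
    import uuid
    from email.message import EmailMessage

    SMTP_HOST  = "localhost"                # your own mail relay
    REMAILER   = "[email protected]"   # the remailer being tested
    REPLY_ADDR = "[email protected]"        # where the ping should come back

    def send_ping():
        tag = uuid.uuid4().hex              # unique tag to spot the reply later
        msg = EmailMessage()
        msg["From"] = REPLY_ADDR
        msg["To"] = REMAILER
        msg["Subject"] = "ping"
        # The remailer strips the headers and forwards the body to the
        # address named in the remailing request.
        msg.set_content("::\nRequest-Remailing-To: " + REPLY_ADDR
                        + "\n\nping " + tag + "\n")
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)
        return tag                          # later: search your mailbox for it

    if __name__ == "__main__":
        print("sent ping, tag =", send_ping())

A site that publishes results would just run something like this on a
schedule for every known remailer; with the same software in everyone's
hands, nobody has to take the publisher's word for it.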

One possibility would be occasional physical audits, in which a remailer
reviewer visited the site, looked at the software, checked the system for
security holes, etc.  This would be quite expensive, obviously, but perhaps
eventually the remailer infrastructure would be extensive enough that this
kind of checking could be done.  Think of it as "Consumer Reports" for
remailers.  (Similar privacy audits might be de rigueur in the future for
other net resources, such as file banks or compute servers.)

>Compiling a list of remailers, sure.  But if you let the user control
>how messages are chained, you are inviting real traffic analysis.  The
>user should only be able to specify his destination and the level of
>security desired.

What?  Again I would reverse this.  The user should have maximum control
over his path.  It's up to him to choose a random one.  Random number
generators are widely available.  (I can get you a bargain on a used
Blum-Blum-Shub.)  If he has to trust the first remailer on his path, then
the subversion of that one remailer costs him all his privacy.  By choosing
his own path, he ensures that no one remailer knows both the source and
the destination of any message.  That is the key.  No single party should
ever hold both of those pieces of information.  Giving it all away to the
first remailer means giving away all your security.
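
To be concrete, the choosing is trivial for the user's own mail agent to
do.  The remailer list below is invented, and the stock generator here is
only standing in for whatever source of randomness you prefer (that used
Blum-Blum-Shub included):

    # choose_path.py: the user's own agent picks the chain, not the network.
    # The remailer addresses are hypothetical placeholders.
    import secrets

    REMAILERS = [
        "[email protected]",      # far-flung operators who have
        "[email protected]",     # never heard of one another
        "[email protected]",
        "[email protected]",
        "[email protected]",
    ]

    def choose_path(hops=3):
        """Pick `hops` distinct remailers in a random order.

        Only the first hop can see the true sender, and only the last hop
        sees the true destination; no single remailer sees both.
        """
        pool = list(REMAILERS)
        path = []
        for _ in range(hops):
            path.append(pool.pop(secrets.randbelow(len(pool))))
        return path

    if __name__ == "__main__":
        print(" -> ".join(choose_path()))

The list of remailers can be public knowledge; the particular permutation
lives only in the user's mail agent, so there is nothing for the network
itself to leak.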

>> The fact is, that even remailers exchanging mail _can_ be spoofed, if not
>> quite as easily as the newsgroup idea. It seems to be a premise of cryptographic
>> protocols and schemes, that you've got to assume a worst case and get a system
>> working where even under the worst case, everything works.

>Well ... if you follow this line of reasoning too far, you are just
>saying 'nothing can be trusted, so don't bother being careful'.

The point, though, is that with Chaum's scheme you have security as long
as even one remailer in your chain is honest.  The chain becomes as strong
as its strongest link.  Systems which put more responsibility and power
into the remailer network often can't achieve this.  They have single
points of failure, where one compromised system can defeat the efforts of
all the others.
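
The reason one honest remailer is enough is the nested encryption: the
sender wraps the message once for each hop, so each remailer strips only
its own layer and learns nothing but the address of the next one.  Here is
a toy illustration of the wrapping; the encrypt()/decrypt() pair is only a
stand-in for real public-key encryption such as PGP, and the addresses and
keys are invented:

    # chaum_chain.py: toy sketch of Chaum-style nested wrapping.
    import base64

    def encrypt(key, plaintext):
        # Placeholder only; a real system uses the remailer's public key.
        return base64.b64encode((key + "|" + plaintext).encode()).decode()

    def decrypt(key, ciphertext):
        data = base64.b64decode(ciphertext).decode()
        used_key, _, plaintext = data.partition("|")
        assert used_key == key, "wrong key for this layer"
        return plaintext

    # (address, key) for each hop, chosen by the sender alone
    PATH = [("[email protected]", "KEY-A"),
            ("[email protected]", "KEY-B"),
            ("[email protected]", "KEY-C")]

    def wrap(message, destination, path):
        """Build the layers inside out; only the sender knows them all."""
        next_addr, payload = destination, message
        for addr, key in reversed(path):
            payload = encrypt(key, "To: " + next_addr + "\n\n" + payload)
            next_addr = addr
        return next_addr, payload           # outer blob goes to the first hop

    def unwrap_at_hop(key, payload):
        """One remailer's job: strip one layer, learn only the next hop."""
        inner = decrypt(key, payload)
        header, _, rest = inner.partition("\n\n")
        return header[len("To: "):], rest

    if __name__ == "__main__":
        keys = dict(PATH)
        hop, blob = wrap("meet at noon", "[email protected]", PATH)
        while hop in keys:                  # simulate each remailer in turn
            hop, blob = unwrap_at_hop(keys[hop], blob)
            print("this hop learned only the next address:", hop)
        print("final recipient", hop, "receives:", blob)

Even if every hop but one is corrupt, the single honest hop still breaks
the link: the corrupt remailers on the sender's side never learn the
destination, and those on the recipient's side never learn the sender.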

>If I
>were running a remailer and someone posted his address in a public
>newsgroup and said "hey, here I am, and I run a really good remailer"
>I wouldn't trust him just because he signed it.  I would get in touch
>with him, ask around about him, maybe run some low-security traffic
>through his remailer for a while.  Then after some time I would raise
>my estimate of his trustworthiness.  If he dropped traffic, if someone
>reported that something that they had sent privately had been
>compromised, I would drop him.

Yes, I think this is a reasonable and cautious attitude, but instead of
saying "If I were running a remailer..." I'd say it should apply "if I
were _using_ a remailer".  There may be rating services and other sources
of information to help users, but ultimately the decision should be theirs.
One of the lessons of cryptography, IMO, is that you don't get security
by farming out the hard work to others.  The user should take responsibility
for his own security.
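
And that sort of cautious bookkeeping is exactly what the user's own mail
agent could keep for him.  A crude sketch, with a scoring rule I just made
up rather than anything a real rating service uses:

    # reliability.py: the user's agent keeps its own score for each remailer
    # based on pings that did or did not come back.  The threshold and the
    # names are invented for illustration.
    from collections import defaultdict

    class Scorecard:
        def __init__(self, drop_below=0.9, min_trials=10):
            self.sent = defaultdict(int)
            self.returned = defaultdict(int)
            self.drop_below = drop_below
            self.min_trials = min_trials

        def record(self, remailer, came_back):
            self.sent[remailer] += 1
            if came_back:
                self.returned[remailer] += 1

        def usable(self, remailer):
            """Keep a remailer until it has proven itself unreliable."""
            if self.sent[remailer] < self.min_trials:
                return True                 # too little data to condemn it yet
            rate = self.returned[remailer] / self.sent[remailer]
            return rate >= self.drop_below

    scores = Scorecard()
    scores.record("[email protected]", came_back=True)
    scores.record("[email protected]", came_back=False)
    print(scores.usable("[email protected]"))

Rating services could publish their own scorecards too, but these numbers
come from traffic the user himself generates and controls.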

I'm getting too tired to reply to the rest.  I think Jim has a lot of
creative ideas and energy but I'd like to see it directed more towards
empowering end users rather than putting so much reliance on trustworthy
remailer operators.

Hal