
Re: distributed data store, a la eternity

Ryan Lackey <[email protected]> writes:
> I've been looking at the current eternity server, and it's a nice
> thing, but I'm kind of convinced no system like this which does not
> allow for electronic payments for those taking the risks will be
> successful.  Perhaps I'm wrong, but I don't think so.  It takes
> money to run a secure data store.  (my current project depends upon
> secure distribued available data storage, so I've been looking into
> this for a while)

The eternity servers announced recently are experimental.  That is to
say, they are designed as a vehicle for improvement; they are not
completed systems yet.  I am most keen myself to see eternity servers
which are truly resilient: the implementation we have is a first cut,
and the whole idea is for it to be improved.

As to the risks issue (being an eternity server operator might lead to
legal hassles, and therefore people need to make money to compensate),
I have three main comments on that:

1. Accepting money for these activities might worsen your legal
   position, as you are then personally offering a service.  This issue
   has been argued about in the past in relation to the idea of having
   for-pay remailers.  I'm not legally qualified to comment on this.

2. With my design the document store is an existing distributed system,
   USENET, which is already part of the landscape and well accepted as
   a home for all kinds of dodgy materials and discussions.  People are
   used to the fact that some alt USENET newsgroups are full of material
   which is illegal in some parts of the world.  If you start from
   scratch with a new document distribution method, your first few
   servers are easy targets, due to small numbers.  People will be
   intolerant of them, and the Feds will probably go for RICO conspiracy
   and lock all six of them up.  Game over before you start.  I agree
   that you can do much better theoretically with this model, but
   you've got to consider existing network social norms: USENET is part
   of the landscape.  Nobody will manage to pressure all the news admins
   in the world into closing down the alt groups simultaneously; they
   are in too many countries, admins will rebel, and there will be
   plenty of groups left.

   So using USENET allows you to point the finger and say "it's not my
   material, it's just USENET posts".  Much as altavista doesn't issue
   cancel messages for USENET posts: it's not their problem, they're
   just providing a view onto a distributed database.  Or much as
   thousands of ISPs the world over provide open access NNTP servers,
   or NNTP access restricted to their own users, without ill effect,
   modulo localised attempts to force them not to carry some groups.

   Another less openly stated opinion of mine is that the way to do it
   is to start as I have, and add other document sharing channels
   later.  Keep some confusion for the observer as to where the
   material is coming from.  As long as it _might_ have come out of
   USENET news on the fly, you can use that as a cover argument.  If
   the shit hits the fan you can fall back to that.

   Probably long term I think you would want to keep USENET as a
   fall-back system, but definitely keep it as the distribution
   channel (it is _the_ most secure way of delivering anonymous
   messages where you want to protect the recipient's identity:
   broadcast them).  Much better than sending mail via anonymous
   remailers.  Mixmaster is good, but not that good in the face of one
   individual repeatedly posting materials which seriously annoy a
   major government.
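
   To make the broadcast point concrete, here is a minimal sketch in
   Python of the trial-decryption idea (a toy cipher built from SHA-256
   and HMAC, purely for illustration, not real crypto, and not how any
   existing remailer or eternity code works): everyone pulls every post
   from the newsgroup and tries to open each one; only the intended
   recipient recovers a valid message, and a watcher of the feed learns
   nothing about who that recipient is.

      import hashlib, hmac, os

      def keystream(key, nonce, n):
          # toy stream cipher: SHA-256 in counter mode (illustration only)
          out, i = b"", 0
          while len(out) < n:
              out += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
              i += 1
          return out[:n]

      def seal(key, msg):
          nonce = os.urandom(16)
          ct = bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))
          tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
          return nonce + ct + tag

      def try_open(key, post):
          nonce, ct, tag = post[:16], post[16:-32], post[-32:]
          good = hmac.new(key, nonce + ct, hashlib.sha256).digest()
          if not hmac.compare_digest(tag, good):
              return None               # not ours (or mangled in transit)
          return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

      # the "newsgroup" is seen by everyone, so delivery reveals nothing
      # about who the recipient is; Bob finds his message by trial decryption
      bob_key = os.urandom(32)
      newsgroup = [os.urandom(80), seal(bob_key, b"meet at dawn")]
      print([m for m in (try_open(bob_key, p) for p in newsgroup) if m is not None])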

3. The last argument is the "good enough" argument.  If the above
   tactic of using USENET as the distribution mechanism, with caching,
   optional charging for storage, and exchange of articles between
   servers, survives censorship in practice, why bother with anything
   more complex?

   For example, Ross talks about arranging it so that documents are
   reconstructed from secret shares, so that individual servers don't
   know what they've served.  Well, that's sexy stuff, and clearly more
   secure if you absolutely must conceal which server served what
   material, in the case where the server has definitely got the
   material from a database which it owns.
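
   For concreteness, a minimal sketch in Python of the simplest possible
   split (an n-of-n XOR split of my own choosing, not Ross's actual
   scheme, which would want a proper threshold construction such as
   Shamir's): no single share reveals anything about the document, and
   only someone holding all of them can rebuild it.

      import functools, os

      def split(document, n):
          # n-of-n XOR split: any n-1 of the shares look like pure noise
          pads = [os.urandom(len(document)) for _ in range(n - 1)]
          last = bytes(functools.reduce(lambda a, b: a ^ b, piece)
                       for piece in zip(document, *pads))
          return pads + [last]

      def join(shares):
          return bytes(functools.reduce(lambda a, b: a ^ b, piece)
                       for piece in zip(*shares))

      shares = split(b"the document", 3)     # one share per eternity server
      assert join(shares) == b"the document"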

   The basic threat model of my eternity server design is disposable
   nodes.  That is, it's like the new low orbit satellite systems: the
   satellites fall in and burn up after a while, but it doesn't matter,
   because it's part of the design to keep throwing up new ones often
   enough to ensure continuous cover.  Same for eternity servers.
   Individual operators will cave, but when they do, other enraged
   netters will start some more in protest.

   Just look at the Scientology battles.  The bravery of the
   Scientologists' detractors in the face of the harassment thrown at
   them is simply amazing.  Despite that harassment the Scientologists
   are losing, in that their materials are all over the place.  This
   suggests to me that USENET could work just fine.

   Also there is the possibility of private eternity servers.  That is,
   you have an eternity server for your own use only, or only for a
   group of friends, or only for users of your ISP.  Such a server won't
   come under attack, and provided USENET doesn't get censored you'll
   be OK.

   Perhaps pointing the finger at USENET will be an effective
   strategy.  Perhaps we'll find out one way or another in the next
   few months :-)  Empirical cryptography.


Now, on moving towards a real Anderson Eternity Service (in small
stages; the first steps are far removed from it), I have several ideas
for where to go next:

a) Cache replacement policy based on number of accesses

   There is currently a cache of eternity documents; caching is
   configurable per document and per server as on, off, or encrypted.
   One solution to the question of what happens when you fill up that
   100Mb cache is to discard the least accessed pages.  Popular stuff
   stays, and you have fixed storage requirements.  The mantra is "disk
   is cheap": it wouldn't kill someone these days to dedicate a 6Gb
   disk to an eternity cache.
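
   A rough sketch in Python of that replacement policy (hypothetical
   class and field names, nothing to do with the eternity-0.08 code):
   count accesses per document and, when the cache goes over budget,
   evict the least-accessed entries first.

      class EternityCache:
          # evict least-accessed documents first; popular material survives
          def __init__(self, max_bytes):
              self.max_bytes = max_bytes
              self.docs = {}              # url -> document body
              self.hits = {}              # url -> access count

          def get(self, url):
              if url in self.docs:
                  self.hits[url] += 1
              return self.docs.get(url)

          def put(self, url, body):
              self.docs[url] = body
              self.hits.setdefault(url, 0)
              while sum(len(b) for b in self.docs.values()) > self.max_bytes:
                  victim = min(self.docs, key=lambda u: self.hits[u])
                  del self.docs[victim], self.hits[victim]

      cache = EternityCache(100 * 1024 * 1024)    # the 100Mb cache case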

b) Augment the above cache policy with payment for disk space: an ecash
   storage fee included with the post.  Realistically, for either scheme
   you need a way for eternity servers to ask each other for articles.
   They can do that easily enough via http, but you still need to know
   which servers are available.

c) Use news archival services (e.g. altavista, dejanews) as another
   database to query.  I am starting to think that this is not such a
   hot idea, because it's too centralised, and the operators are already
   hostile to the bloat of their archive space, and already do things
   like truncate UUencoded documents and PGP ASCII-armoured documents.
   Sure, we could run an arms race of crude stego improvements being
   countered by more advanced eternity article detectors, up until we
   get to a natural-language-looking stego encoder.  Texto is a simple
   example of one of these.
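
   To make that principle concrete, a toy sketch in Python of hiding
   data bits in choices between near-synonyms (a real encoder such as
   Texto generates whole English-looking sentences from a grammar; this
   only shows the bit-per-word-choice idea, with a made-up codebook):

      import itertools

      # each data bit selects one word from a pair of near-synonyms
      PAIRS = [("big", "large"), ("fast", "quick"), ("said", "stated"),
               ("start", "begin"), ("buy", "purchase"), ("maybe", "perhaps")]

      def hide(bits):
          return " ".join(pair[b] for b, pair in zip(bits, itertools.cycle(PAIRS)))

      def reveal(text):
          return [pair.index(w)
                  for w, pair in zip(text.split(), itertools.cycle(PAIRS))]

      cover = hide([1, 0, 1, 1, 0, 0, 1, 0])
      assert reveal(cover) == [1, 0, 1, 1, 0, 0, 1, 0]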

> Would anyone else be interested in working on a real reliable
> distributed data store, more like what Anderson described than the
> current Eternity server?  Specifically, something with payment
> systems built in, 

Payment is definitely going to go in.  It will be necessary to control
volume somehow if the system gets popular, and it allows providers to
cover their expenses.  Charging for document download might also be an
idea, to cover bandwidth.

> a multiplicity of ways to pass messages from server to server, 

That has to come also, because USENET articles expire, and because
systems which rely on the news archival services (dejanews etc) are
not distributed.

Initial ideas are:

- have a list of eternity servers, and query them randomly until you
find one which has the document (a rough sketch of this is given after
this list).  Pro: simple solution, fast lookup.  Con: there is an
easily accessible, comprehensive list of who's running eternity
servers.

- build something on top of remailers to allow eternity servers to
query each other anonymously.  Con: _slow_ lookup.

- build a specialised DC-net or PipeNet setup with the eternity servers
on it.  Pro: fast lookup, harder to track who's serving what.  Con: no
implementations yet; it also provides a fat target for disruption: take
out the DC-net/PipeNet and the document exchange connectivity fails.

- secret sharing documents over eternity servers
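
Here is the rough sketch of the first option promised above (the server
list, URL scheme and query parameter are made up for illustration;
nothing in eternity-0.08 works this way): shuffle the known servers and
ask each over http until one returns the document.

    import random
    import urllib.parse, urllib.request

    # hypothetical server list and URL scheme, purely for illustration
    SERVERS = ["http://eternity.example.com", "http://mirror.example.org"]

    def fetch(document_name):
        servers = SERVERS[:]
        random.shuffle(servers)         # spread load, leak less about habits
        for server in servers:
            url = server + "/eternity?doc=" + urllib.parse.quote(document_name)
            try:
                with urllib.request.urlopen(url, timeout=30) as reply:
                    return reply.read()
            except OSError:
                continue                # down, unreachable or 404: try the next
        return None                     # nobody had it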

> and a multi-level design in which a maximum amount of anonymity is
> preserved?

One approach would be to treat anonymity as a separable issue from the
issue of hiding which server served which document (and the issues of
secret sharing etc).

That is, couldn't the viewer use Crowds, or, less securely,
www.anonymizer.com?

However, I think there is an opportunity to have the document
constructed by the eternity server nodes in such a way that only the
requester can reconstruct it.  E.g. each server encrypts the share it
is serving to the requester's public key; then only the requester can
recover the shares, and hence the document.  You should make sure it's
forward secret too.
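
A toy sketch in Python of that flow (deliberately tiny Diffie-Hellman
parameters and a throwaway SHA-256 keystream, so this is only the shape
of the protocol, not usable crypto, and no authentication is shown):
the requester makes a fresh keypair per request, each server wraps its
share to that key, and throwing the ephemeral keys away afterwards is
what buys the forward secrecy.

    import hashlib, secrets

    P = 2**127 - 1          # toy prime, far too small for real use
    G = 5

    def keypair():
        priv = secrets.randbelow(P - 2) + 1
        return priv, pow(G, priv, P)

    def stream(shared, n):
        # throwaway KDF/keystream: SHA-256 of the shared secret, counter mode
        out, i = b"", 0
        while len(out) < n:
            out += hashlib.sha256(str(shared).encode() + i.to_bytes(4, "big")).digest()
            i += 1
        return out[:n]

    def server_reply(share, requester_pub):
        # each server wraps its share so that only this requester can read it
        srv_priv, srv_pub = keypair()
        k = pow(requester_pub, srv_priv, P)
        return srv_pub, bytes(a ^ b for a, b in zip(share, stream(k, len(share))))

    def requester_open(reply, req_priv):
        srv_pub, ct = reply
        k = pow(srv_pub, req_priv, P)
        return bytes(a ^ b for a, b in zip(ct, stream(k, len(ct))))

    # fresh (ephemeral) requester keypair per request: discard it afterwards
    # and recorded traffic can't be decrypted later, i.e. forward secrecy
    req_priv, req_pub = keypair()
    reply = server_reply(b"share #1 of the document", req_pub)
    assert requester_open(reply, req_priv) == b"share #1 of the document"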

> I'm not saying I don't like the current eternity server, just that
> I'd like to work on one with payment systems built in, 

I'm all for payment methods.  Perhaps hashcash
(http://www.dcs.ex.ac.uk/~aba/hashcash/) would be a good experimental
currency to get it working with first, and then we could switch to real
anonymous ecash a bit later.
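
For anyone who hasn't seen it, a stripped-down illustration in Python of
the principle behind hashcash (this is not the real stamp format, just
the partial-hash-collision idea): minting a stamp costs the poster
around 2^k hash operations, while checking it costs the server one.

    import hashlib
    from itertools import count

    def leading_zero_bits(digest):
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                return bits + 8 - byte.bit_length()
        return bits

    def mint(resource, k=20):
        # costly for the poster: roughly 2^k hash trials on average
        for counter in count():
            stamp = "%s:%d" % (resource, counter)
            if leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= k:
                return stamp

    def check(stamp, resource, k=20):
        # cheap for the eternity server: one hash and a prefix check
        return (stamp.startswith(resource + ":") and
                leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= k)

    stamp = mint("eternity:my-document")
    assert check(stamp, "eternity:my-document")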

> and more closely following the original idea for an eternity server
> (which I thought was a great one).

I'm all for making things as resilient and cryptographically strong as
you can.  However, I think part of the reason for the different
opinions is that you are addressing three issues at once:

  - non-censorable document store
  - preventing people correlating material with the server which served it
  - making the users' accesses anonymous

The design of my eternity server as it stands is trying to address
just:

  - non-censorable document store

by relying on the low orbit satellite model for public access servers,
and on the difficulty of censoring USENET news.

Mild attempts can be made to protect against correlation of users with
the material they access:

  - enable SSL on the server

(A cheap solution, I know: it relies on the trustworthiness of the
server operators, and on the system not having been compromised by
hackers, and there is the possibility of a MITM attack if the server's
certificate isn't authenticated, or even if it is, given a more
determined long-term MITM attacker.)

No attempt is made to prevent:

  - people correlating material with the server which served it

because that is outside the design criteria.  Who cares what they
correlate for the server?  The server is just forwarding you USENET
news articles, just like an open access NNTP server, with a bit of
reformatting.

If someone takes enough exception to that there will be plenty of
other eternity servers to take up the slack.

Please feel free to take and hack the eternity-0.08 source code, and
contribute, or hack it for whatever purposes you like.

I'd be interested in any comments on this somewhat long post, from
yourself and from other readers.

Adam
-- 
Have *you* exported RSA today? --> http://www.dcs.ex.ac.uk/~aba/rsa/

print pack"C*",split/\D+/,`echo "16iII*o\U@{$/=$z;[(pop,pop,unpack"H*",<>
)]}\EsMsKsN0[lN*1lK[d2%Sa2/d0<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<J]dsJxp"|dc`