
Eternity (was Re: CPAC, XtatiX, and the Censor-State)





Mike Duvos <[email protected]> writes:
> In alt.cypherpunks, Censorship Suck <[email protected]> writes:
> 
>  > My previous suggestion was to create "virtual" WWW sites
>  > (when necessary, possibly as a stop-gap measure when there
>  > is a censorship attack).
> 
> I read your earlier post, and find the notion of hosting Web
> sites on Usenet intriguing.  [...]
> 
>  > By virtual, I mean by posting to USENET, probably encrypted,
>  > with the decryption key following a couple days later.
> 
>  > It involves considerable development for either a newsreader
>  > or preprocessor, but then, the work is implied. A
>  > "gathering" design is necessary.
> 
> Conceptually, seamlessly extending the Web over Usenet is not
> complicated.  Each document could be posted as a single PGP
> conventionally encrypted article, with some reasonable convention
> for encryption keys and message ids that would enable links to be
> followed. Browsers could be modified to maintain an open
> connection to an NNTP server, and to fetch and decrypt articles
> specified by an nntp:<message-id> link in lieu of using http.
> 
> Suppose I post my favorite banned Web pages to
> alt.anonymous.messages on a weekly basis, and distribute browser
> plug-ins and patches to enable Usenet to host Web content.

You two must've been snoozing when I posted the Eternity service
announce back in April.  I implemented something which does pretty
much exactly what you're discussing.  I'll include a repost below.

> I foresee the following glitches.

I must've been doing something right: I read your practical glitches
and went yup x 4 :-)

So beyond implementing it, which I've done a first cut of, the next
problem is getting people to use it: advertising it so that people
know it exists, and getting people to put lots of "interesting" (=
otherwise censored, so possibly pretty interesting) material on it,
so that people find it worth browsing.

I'm working on an article for Phrack 51 on it, as the Phrack audience
seemed a suitable one for folks interested in publishing interesting
materials (warez, hacking and cracking hints and tips, liberated
information, and perhaps some things on the less savoury side: CC
numbers, calling card numbers, etc).

A real boon would be a working demo system, and I still haven't set
one up myself.  As I said back in April: any takers?

My implementation doesn't require browser plug-ins.  It would require
that, or a local proxy, for better security, but quick 'n easy is
probably a better starting point.  (Much like web based remailer
interfaces are an OK way to introduce people to remailers: almost zero
set up, easy to use, etc.)

One thing I didn't see you mention was the idea of using the web based
news archival services to get around the problem of news expiry.  I
haven't implemented those parts, as I was kind of hoping the idea
would take off and someone would thereby be motivated to contribute
the code.

One remaining problem is that the URL hashes which I was using for a
subject line (read the reposted announce for why I do this) don't
seem to be indexed by the search engines (neither dejanews nor
altavista).  An idea I had to work around this was to use something
like texto, where you encode the number as a group of words.  Or I
think you can get away with plain numbers under a certain length.  So
perhaps, instead of encoding the URL hash as hex, e.g. 22596363
B3DE40B0 6F981FB8 5D82312E 8C0ED511, you could convert it to base-10
and group it into chunks of a suitable digit length:

Subject: 576283491 3017687216 1872240568 1568813358 2349782289

Then do a web search for:

576283491+ 3017687216+ 1872240568+ 1568813358+ 2349782289+

(in altavista syntax, + means the word is required).
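
For what it's worth, a few lines of perl do the hex to base-10
conversion (a rough sketch; the split into 8-hex-digit groups just
follows the example above):

#!/usr/bin/perl
# rough sketch: turn an 8-hex-digit-grouped SHA1 subject into base-10
# "words" that the search engines should index like ordinary text
use strict;

my $hex_subject = "22596363 B3DE40B0 6F981FB8 5D82312E 8C0ED511";
my @decimal = map { hex } split ' ', $hex_subject;
print "Subject: @decimal\n";
# prints: Subject: 576283491 3017687216 1872240568 1568813358 2349782289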

Realtime uncensorable web courtesy of altavista :-)

Adam


reposted eternity announce:
======================================================================
To: [email protected]
From: Adam Back <[email protected]>
Subject: [ANNOUNCE] Eternity server 1.01 alpha release

Announcing the `eternity server'.

Alpha testers wanted.  See source code at:

	http://www.dcs.ex.ac.uk/~aba/eternity/

System requirements are:

  unix
  perl5
  news source (local news spool or access to nntp server)
  pgp (2.6.x)
  web space
  ability to install cgi-binaries
  ability to schedule cron jobs a plus but not essential

Unfortunately I have not got a service up and running for you to play
with, because I don't meet those requirements myself (I lack cgi
capability on the server I would otherwise like to use).

If someone wants to set one up as a demo and post the URL, that would
be useful.

The local news spool stuff is better tested than the nntp server
stuff, but it is all there and the nntp stuff was working at one
point.  Remember this is alpha code.  If you can hack perl and want to
stress test the nntp server stuff, or the rest of it, that would be
helpful.

What is an Eternity Service?

(Some of you may recognize the name `Eternity Service' from Ross
Anderson's paper of that name.  What I have implemented is a related
type of service which shares Anderson's design goals, but has a
simpler design.  Rather than invent a new name, I just borrowed
Anderson's, unless anyone can think of a better one.  Anderson has the
paper available on his web pages (look for `eternity') at
http://www.cl.cam.ac.uk/users/rja14/.  Also I heard that Eric Hughes
gave a talk on UPS, a `Universal Piracy Service'; I presume this has
similar design goals, though I have not heard further details.)

I have implemented this to give something concrete to think about, in
terms of creating something closer in robustness to Anderson's real
eternity service.

The eternity server code above provides an interface to a distributed
document store which allows you to publish web pages in such a way
that the attacker (the attacker here is a censor, such as perhaps a
government, perhaps your government) cannot force you to unpublish
the document.

Its architecture is quite simple.  We already have an extremely
robust distributed document store: USENET news.  We already have the
ability to post messages to the document store anonymously: type I and
type II (mixmaster) remailers.  So theoretically we can publish
whatever we want, just by posting it to USENET.  However, retrieving
the data is cumbersome.  We must search USENET with a search engine or
look for the document manually, and the document will expire after
some short time.  You may still be able to find the document in a
search engine which archives news for longer periods, such as dejanews
or altavista.

However, it is not as user friendly as a web based interface: there
are no hypertext links between related documents, there are no inline
graphics, etc.

So my `eternity server' is a kind of specialised web search engine
which acts as a web proxy and provides a persistent web based
interface for reading web pages stored as encrypted documents posted
to different USENET groups.  The search engine provides persistent
virtual URLs for documents even if they move around between different
newsgroups, being posted via different exit remailers.  Signatures on
documents allow the author to update a document.  The web proxy has
options to cache documents, and otherwise fetches them from a USENET
news spool or nntp news server.

Technical Description

The system offers different levels of security to users.  All
documents must be encrypted.  The subject field of the document should
include the SHA1 hash of the document's virtual URL.  (All eternity
virtual urls are of the form:

	http://[virtualsite].eternity/[documentpath]

that is, they would not function as normal urls because they use the
non-existent TLD "eternity".  You may be able to use this facility to
point your browser at a particular eternity server to serve documents
of this form.)

The lowest level of security is to encrypt the document with PGP -c
(IDEA encryption) using the passphrase "eternity".  This is just
obfuscation, to make the documents marginally harder to examine on
USENET without using the eternity service to automate the process.

The next level of security is to encrypt the document with the SHA1
of the URL with a "1" prepended, ie

	passphrase = SHA1( "1" . "http://test.eternity/doc/" )

This means that you can not read documents unless you know the URL.
Of course, if you can guess the URL then you can read them.
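
To make that concrete, here is a rough sketch of both derivations in
perl (using the Digest::SHA module for SHA1 is an assumption; the
server may compute it differently):

#!/usr/bin/perl
# sketch of the two SHA1 derivations: the subject-line hash of the
# virtual URL, and the level-two encryption passphrase
use strict;
use Digest::SHA qw(sha1_hex);

my $url = "http://test.eternity/doc/";

my $subject_hash = sha1_hex($url);        # goes in the Subject: field
my $passphrase   = sha1_hex("1" . $url);  # level-two PGP -c passphrase

print "Subject hash: $subject_hash\n";
print "Passphrase:   $passphrase\n";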

The third level of security is, in addition to one of the above two
options, to encrypt the document with another passphrase of your
choosing.  Then you can give the passphrase only to those amongst your
circle of friends.

Additionally, support is given for signed documents.  This is
important because once you've published a web page you might wish to
update it.  And yet you don't want other people updating it with a
blank page (or otherwise changing your page), as this would be as good
as the ability to remove the page.

If you include a PGP public key with a document, the eternity server
will associate that PGP key with the hash of your document's URL.  Any
attempt to update that document will only succeed if it is signed by
the same PGP key.
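
The bookkeeping for that check could look something like the sketch
below; the DBM file name and layout are only illustrative (the real
server's database differs), and the PGP signature verification itself
is left out:

#!/usr/bin/perl
# illustrative sketch of the "first key wins" update rule: map URL hash
# to the SHA1 of the armored public key, and refuse updates made with a
# different key.  Signature verification is not shown.
use strict;
use Digest::SHA qw(sha1_hex);
use Fcntl;
use SDBM_File;

tie my %keydb, 'SDBM_File', 'keys', O_RDWR | O_CREAT, 0600
    or die "can't open key database: $!";

sub update_allowed {
    my ($url_hash, $armored_key) = @_;
    my $key_hash = sha1_hex($armored_key);
    if (!exists $keydb{$url_hash}) {
        $keydb{$url_hash} = $key_hash;      # first post: remember the key
        return 1;
    }
    return $keydb{$url_hash} eq $key_hash;  # later updates need the same key
}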

The document server has the option to cache documents.  Caching can
be turned off (in which case documents are always fetched from the
news spool; this is not that inefficient, as the news spool may be
local, and the server keeps a database of USENET group/article number
against document hash, so it can fetch the document directly), caching
can be turned on, or caching can be set to encrypted, in which case
documents in the cache are encrypted with a passphrase derived from
the SHA1 of the document's URL with a 1 prepended.  In the encrypted
mode, if the server operator is not keeping logs, he can't decrypt the
documents in his own cache.  If caching is off there are no documents
in the cache, only in the USENET news spool.

In addition, some users will have a password which is pasted into an
extra password field on the form.

Users can set their own per-document caching options (they can set
caching off to override a server setting of on or encrypted, or they
can set encrypted to override caching turned on, etc).

There is a document index at the eternity root ( "http://eternity/" ),
and users may optionally request that documents be included in the
directory of eternity documents and virtual hosts.  It makes sense to
include the root document of your set of web pages in the index, and
to leave the rest out of the index.  Then you can link to your own
pages from your original page.

Relative URLs work, inline graphics work, site relative URLs work,
and there is crude support for mime types, so most normal web data
types will work as normal.
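
The mime type support is nothing fancier than looking at the file
extension, roughly along these lines (the table here is illustrative,
not the server's actual list):

# crude extension-based mime type guessing
my %mime = (
    html => 'text/html',  htm  => 'text/html',
    txt  => 'text/plain', gif  => 'image/gif',
    jpg  => 'image/jpeg', jpeg => 'image/jpeg',
);

sub mime_type {
    my ($path) = @_;
    my ($ext) = $path =~ /\.(\w+)$/;
    return $mime{ lc($ext || '') } || 'application/octet-stream';
}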

The document format is:

[flags]
[pgp key]
[eternity document]

The flags can be any of:

URL: http://test.eternity/example1/
Options: directory or exdirectory
Cache: on/yes/off/no/encrypt/encrypted
Description: An example eternity document

The pgp key is an ascii armored PGP key, and the eternity document
must also be ascii armored and may optionally be signed (with the
accompanying pgp key).  The eternity document can also be encrypted
again inside that layer with a user key which need never be revealed
to the server.

The overall document is encrypted with PGP -c using the password
"eternity", or with a password obtained by prepending a 1 to the URL
and hashing it, like this:

% echo -n "1http://test.eternity/example1/" | sha1

The flags, pgp key and eternity document can be arbitrarily jumbled
up in order.
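
A rough sketch of how the parts can be picked out regardless of order
(the armor headers and regexes here are assumptions about the format
rather than the server's exact parser):

# sketch: pull the flags, optional public key, and eternity document
# out of a decrypted article, whatever order they appear in
use strict;

sub parse_article {
    my ($text) = @_;
    my %parts;

    if ($text =~ /(-----BEGIN PGP PUBLIC KEY BLOCK-----.*?-----END PGP PUBLIC KEY BLOCK-----)/s) {
        $parts{key} = $1;
    }
    if ($text =~ /(-----BEGIN PGP MESSAGE-----.*?-----END PGP MESSAGE-----)/s) {
        $parts{document} = $1;
    }
    while ($text =~ /^(URL|Options|Cache|Description):\s*(.*)$/mg) {
        $parts{flags}{$1} = $2;   # flag lines, e.g. URL: and Cache:
    }
    return \%parts;
}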

The eternity server source code includes an examples directory.

The eternity server has a configuration file which is heavily
commented: eternity.conf.


Threat Models

Think of this as a specialised search engine.  If you don't like the
documents you find, then change your search terms; don't complain to
one of the many search engine maintainers!  The Eternity Service is
designed to facilitate publishing, in web based form, of non
government approved reading or viewing material, the publishing of
non government approved software, or other "liberated" documents,
programs, etc.  Each user can easily install their own search engine
in a shell account for their own use.

Newsgroups are very robust against attack because they are part of a
distributed system: USENET news.  It is difficult to remove a USENET
newsgroup (the Scientologists know: they tried and failed, and they
have seriously deep pockets, and attitude to match).  For the eternity
service it probably doesn't matter that much if you do get rid of a
particular newsgroup; eternity documents can be in any newsgroup.
Somehow I don't think you'll succeed in getting rid of all USENET
newsgroups, ie in shutting USENET news down.  I'm not sure that even a
major government could succeed in that.

Information wants to be free. Long live USENET news.

Improving robustness

The main problem I see with this system is that documents don't live
long in USENET news spools, as expiry dates are often 1 week or less.
So to use the service you'd have to set up a cron job to repost your
web documents every week or so.  Some documents remain in the cache
and so survive as long as there is interest in them.
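
The repost could be as simple as a short script run weekly from cron.
The sketch below assumes INN's inews is available and posts directly
for brevity, with placeholder paths and a placeholder subject hash;
for real use you would send the article through a chain of mixmaster
remailers instead:

#!/usr/bin/perl
# repost.pl -- hypothetical weekly repost of an already-encrypted
# eternity document; run from cron.  Posting directly via inews is
# shown for brevity, a real poster would go through remailers.
use strict;

my $doc   = "$ENV{HOME}/eternity/example1.asc";  # pre-encrypted document
my $group = "alt.anonymous.messages";
my $subj  = "22596363 B3DE40B0 6F981FB8 5D82312E 8C0ED511";  # SHA1 of the URL

open my $news, '|-', 'inews', '-h' or die "can't run inews: $!";
print $news "Newsgroups: $group\nSubject: $subj\n\n";
open my $in, '<', $doc or die "can't read $doc: $!";
print $news $_ while <$in>;
close $in;
close $news or die "inews exited non-zero";

A crontab entry along the lines of  0 4 * * 1 $HOME/eternity/repost.pl
would then repost every Monday.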

An immediately easy thing to do is to add support for fetching and
looking documents up in altavista's news archive and in the dejanews
archive.  This shouldn't be that difficult.  The eternity server
article database already includes newsgroup and message id against
document hash; all that is required is to formulate the requests
correctly.  If anyone wants to add this, feel free.

A mechanism for eternity servers to query each other's caches would
be another thing to think about.

Another would be to charge anonymous digicash rates to keep documents
in the cache, a couple of $ per Mb/year.

Also you could perhaps think about charging for accesses to the
service and passing a royalty back to the document author.

There is no `ownership' of eternity virtual domains, or of document
paths.  If Alice puts a document at http://books.eternity/, there is
nothing to stop Bob putting other files under that virtual domain,
such as http://books.eternity/crypto/.  However, in general Bob can't
make Alice's document point to his URLs.  Bob can't modify or remove
Alice's pointer from the eternity directory http://eternity/.  But he
can add his own entry.

There can however be race conditions.  If Alice puts up a document on
http://books.eternity/ which points to an inline gif, but hasn't yet
put up that inline gif, Bob can get there first and put up a picture
of Barney, until Alice notices and changes her pointer to point to the
picture she wants.

So one solution would be to only accept documents which are signed by
the signer of the next domain up.  First come, first served for the
virtual domains; the key signing the root directory of the virtual
domain is checked for each file or directory added under it.

Note that even if the URLs are exdirectory, this need not be a
problem, as you can just try the path with one directory stripped off.
Could be done.  Is it worth it though?  It would add overhead to the
server, and there is minimal disruption someone can cause by sharing
domains.
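
Stripping a directory off the path is a one-liner, for what it's
worth (this assumes directory URLs carry a trailing slash):

# sketch: parent of a virtual URL by dropping the last path component
my $url = "http://books.eternity/crypto/";
(my $parent = $url) =~ s{[^/]+/$}{};   # now "http://books.eternity/"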

Perhaps it is desirable for someone to own a domain by virtue of being
the first person to submit a document with the hash of that domain
together with a key.

Comments?



The rest of this document is things on the to do list, if you feel
like helping out, or discover a limitation or bug.

To do list

It could do with some documentation.  And some stress testing.  The
file locking hasn't really been tested much.  NNTP may be flaky
as I made some mods since last debugging.

It would be nice to have a CGI based form for submitting documents:
paste your signed ascii armored document here, paste your public key
here, click radio buttons to choose options, enter the URL here, type
in the newsgroup to send to, click here to submit through a chain of
mixmaster remailers.

There are a few bugs in the above system which ought to be fixed, but
this is alpha code, and they are on the TODO list.  One problem is
that the flags aren't signed.  A change of message format will fix
that in a later version.  The basic problem is that you can't check
the signature until you've decrypted and obtained the key, at which
point you have to decrypt again.  A simple fix would perhaps be to put
the public key outside the outer encryption envelope, but that is
unattractive in terms of allowing author correlation even by people
who don't have passphrases or don't know URLs.  Better would be to
just decrypt once, obtain the key if there is one, and then check the
sig.  The server already keeps a database of PGP keys (not using
PGP's keyring); it stores keys by the SHA1 hash of the armored public
key and indexes them against document URL hashes.  In this way public
keys do not need to be included in later document updates.  This could
be used to check a signature on the outer encryption layer.  Then only
the first post needs to be decrypted twice (once without the key
available to check the sig, and then again with the key available).
As a nice side effect this protects against the PGP 0xdeadbeef attack
(or 0xc001d00d, as Gary Howland has on his public key).

Another problem: when scanning a newsgroup, if a scanned message
cannot be read it is just stored for later, so that when someone who
knows the URL for that document accesses the server, it can be
decrypted, scanned for a good signature, and the document updated
then.  However the code only has a place holder for one such message
at the moment.  That should be changed to a list, with appropriate
code to scan the list when the document is accessed.  Once the newest
document is found, the list of documents to be scanned can be
discarded and the newest article stored against the hash in
article.db.

Another, more subtle, problem is that the code may update documents
in the wrong order if multiple versions are posted to different
newsgroups.  The articles are scanned one group at a time.  So if the
groups are scanned in the order alt.anonymous.messages followed by
alt.cypherpunks, and version 2 of the document is in
alt.anonymous.messages and version 1 in alt.cypherpunks, then it will
update the document with the older version.

Another to do list entry is the cache replacement policy.  There
isn't one; your cache will grow indefinitely as documents are never
removed.

Another thing to do is the altavista and dejanews lookup via HTTP by
message-id.  All the place holders are there.

If anyone wants to tackle any of the above tasks feel free.

The program dbmdump prints out .db files in ascii so you can see what
is there.

eternity -u is suitable to run as a cron job, or just at the console.

eternity -r resets everything, clears out the cache, empties database
etc.

eternity?url=http://blah.eternity/blah

is the form to use as a cgi.  It handles both GET and POST methods.

There is a log file for debugging; see eternity (the perl program)
for how to set different logging options.  You can log most aspects
of the program for debugging purposes.

Adam

-- 
Have *you* exported RSA today? --> http://www.dcs.ex.ac.uk/~aba/rsa/

print pack"C*",split/\D+/,`echo "16iII*o\U@{$/=$z;[(pop,pop,unpack"H*",<>
)]}\EsMsKsN0[lN*1lK[d2%Sa2/d0<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<J]dsJxp"|dc`