
Re: netscape's response




Christian (that's me) writes:
| I think it is important to bring together factors of the user _and_
| the environment, preferably an environment that reaches as far from
| the local site as possible. This makes "jamming" of the random seed
| selection process harder.
| 
| The other problem in gathering random bits for a seed is that most
| bits are visible to someone else close enough within your environment.
| Interarrival times of packets are fine, but anyone can observe them
| with quite good accuracy. How do you escape the "local environment
| problem"?
| 
|                               . - .
| 
| One wild idea that I just got was to have servers and clients exchange
| random numbers (not seeds of course), in a kind of chaining way. Since
| most viewers connect to a number of servers, and all servers are
| connected to by many clients, they would mix "randomness sources" with
| each other, making it impossible to observe the local environment
| only. And the random values would of course be encrypted under the
| session key, making it impossible to "watch the wire".
| 
| Problems:
| * watch out for "multiply by zero" attacks by a rogue server/client.
| * watch out for "almost singular values" in the same way.
| * only let one source contribute a certain amount of randomness, like
|   (key length)/(aver # of peers).
| * never reveal your current seed, only a non-trivially derived random 
|   value from it. (of course)
| * make sure your initial seed is good enough, or the whole thing is
|   broken.
| * perhaps save part of the previous session state into a protected
|   file, to be able to keep up the quality of the initial seed.
| 
| I think I like it, perhaps not so much from a practical point of view
| as for the 'non-attackability' of it. It's quite cypher-a. 
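
To make the mixing idea concrete, here is a rough sketch in Python.
The names (EntropyPool, mix_in) and the choice of SHA-256 are mine,
purely for illustration. The point is that contributions are folded
in with a one-way hash, never multiplied or trusted on their own, so
a rogue peer sending zeros or "almost singular" values can at worst
fail to add entropy; it cannot cancel what is already in the pool.

import hashlib
import os

class EntropyPool:
    def __init__(self):
        # start from local sources; the initial seed must already be
        # good, or the whole thing is broken (see the problem list)
        self.state = hashlib.sha256(os.urandom(32)).digest()

    def mix_in(self, contribution, source):
        # fold the peer's value into the pool together with the old
        # state and a source tag; hashing means a hostile contribution
        # can only fail to help, it can never wipe the existing state
        h = hashlib.sha256()
        h.update(self.state)
        h.update(source.encode())
        h.update(contribution)
        self.state = h.digest()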

Bill Stewart answered:
| 
| Be _very_ careful with this approach - it's the kind of thing that a
| rogue server or client might abuse to find out randomness or other state
| information about the clients or servers connecting to it.

Of course you have to be very careful, as you say. Did you see the
problem section in my original letter? I included it above. Since
then I have realized that the 

   | * only let one source contribute a certain amount of randomness, like
   |   (key length)/(aver # of peers).

really should be

   | * only let one source contribute a certain amount of randomness, like
   |   (large entropy buffer)/(aver # of peers).

and that you should only give out approximately the same amount of 
randomness to the neighbour, as you point out below.
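
As a sketch (Python again; the numbers and names are only
placeholders), the per-peer limit could be kept like this: each peer
is credited with at most (large entropy buffer)/(average # of peers)
bits, no matter how much randomness it claims to hand over.

POOL_BITS = 2048        # size of the "large entropy buffer" (placeholder)
AVERAGE_PEERS = 16      # rough number of peers we expect to mix with
PER_PEER_LIMIT = POOL_BITS // AVERAGE_PEERS   # bits credited per peer

credited = {}           # peer id -> entropy bits credited so far

def credit(peer, claimed_bits):
    # grant at most what is left of this peer's quota; anything beyond
    # the limit is still mixed in but not counted as new entropy
    used = credited.get(peer, 0)
    grant = max(0, min(claimed_bits, PER_PEER_LIMIT - used))
    credited[peer] = used + grant
    return grant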

| At minimum, only give out some of your randomness, XORed with some
| arbitrary value to scramble the range and then hashed before sending,
| so that the recipient can't find out the values you're using.
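
Concretely (again just a Python sketch with names of my own), what
leaves the machine would be something like this: the pool state XORed
with a fresh arbitrary value and then hashed, so the recipient gets
randomness but learns nothing about our internal state.

import hashlib
import os

def share_value(pool_state):
    # pool_state is the 32-byte pool digest from the sketch above
    scrambler = os.urandom(32)               # arbitrary value, new each time
    mixed = bytes(a ^ b for a, b in zip(pool_state, scrambler))
    return hashlib.sha256(mixed).digest()    # only this is ever sent out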

My approach solves part of the "observable local environment" problem.

Jeff's reply to this suggestion might be somewhat dangerous if the
exchanged 'randomness bits' are the challenge/responses in the
exchange. (Based on his remark about not needing to change the protocol.)
You would arguably not want to have the loop

         RNG --> "unguessable chall/resp" ---+
          /\                                 |
           +---------------------------------+

I would say that the only acceptable solution would be to have


(viewer)consumer <-------------------->consumer (srv)
          /\                             /\
           |                              |
   --->  RNG1 <---------------------->  RNG2 <----- RNGn
          /\                             /\
           |                              |
         RNGx                           RNGy

separating the "building up" of randomness from the consumption of
that built-up randomness, which is the part that actually has to be
totally unpredictable.
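
A sketch of that separation (Python, names mine): the pools (RNG1,
RNG2, ...) are only ever fed, and the consumer draws its unguessable
challenge/response values from a generator that is reseeded from the
pool but never fed back from the protocol side, so the loop above
cannot form.

import hashlib

class OutputGenerator:
    def __init__(self, pool):
        # one-way reseed from the mixing pool (EntropyPool above)
        self.key = hashlib.sha256(b"reseed" + pool.state).digest()
        self.counter = 0

    def next_value(self):
        # forward-only derivation: an observed output reveals neither
        # the pool nor earlier outputs
        self.counter += 1
        out = hashlib.sha256(self.key +
                             self.counter.to_bytes(8, "big")).digest()
        # ratchet the key so past outputs cannot be reconstructed later
        self.key = hashlib.sha256(b"ratchet" + self.key).digest()
        return out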

/Christian