
Re: SSL trouble

>From: Piete Brooks <[email protected]>

>We were using ALPHA code when we started ....

I didn't realise that.

>(4) is still applicable isn't it ?
>What tells people to stop, or do they go on for ever ?

A message in a newsgroup, a mailing list, or a web page.  Even if you
can mount a denial-of-service attack against this, it will only make
people continue the search uselessly.  It won't prevent you from
finding the key.

>>The main drawback of the random search is that the expected running "time"

>where "expected" is some loose average .....

Nope.  It's what I get when I do the math (basic probability theory)
to find the expected running time.  But I could be wrong.  I'll try to
write it in TeX and put it on my web page.
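As a rough illustration of the probability argument (this is my own toy
sketch, not the TeX write-up, and the names and keyspace size are made
up): with a single searcher over N keys, random search with replacement
is a geometric distribution with success probability 1/N, so its
expected number of trials is N, while sequential search from a random
starting point covers each key exactly once and averages (N+1)/2 trials.
A small Monte Carlo run shows the two means, and the factor of two
between them:

```python
import random

N = 1000          # toy keyspace size (made up; the real search space is far larger)
TRIALS = 20000    # Monte Carlo repetitions

def random_search(key):
    """Draw keys uniformly with replacement until we hit `key`."""
    count = 0
    while True:
        count += 1
        if random.randrange(N) == key:
            return count

def sequential_search(key):
    """Scan the keyspace in order from a uniformly random starting point."""
    start = random.randrange(N)
    return (key - start) % N + 1

rand_avg = sum(random_search(0) for _ in range(TRIALS)) / TRIALS
seq_avg = sum(sequential_search(0) for _ in range(TRIALS)) / TRIALS

print(rand_avg)   # close to N      (geometric mean 1/p = N)
print(seq_avg)    # close to N/2    (uniform over 1..N, mean (N+1)/2)
```

This is the "factor of two" in favour of sequential searching; the
catch, with many independent searchers, is coordinating the starting
points so they do not overlap.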

>>I suspect that sequential searching from a random starting point would be
>>much worse in the case of many independent searchers.

>Convince me (please) ....

That would be hard because I've been thinking about it, and I'm less
and less convinced myself.

>> In conclusion, I think random searching is the way to go.
>It has its advantages -- yes. Did you use it for Hal1 ?  :-))

No, but I had few machines and fast connections (and even then, I did
have some network problems).  But if you think sequential searching
can work, let's do it.  I don't think we have to worry about
deliberate attacks for the moment, and the factor of two is
significant.  My previous message was based on the assumption that it
would be hard to get rid of the server overload.

Maybe we should use random searching as a fallback mode in case of
network problems.  It cannot hurt, except that it makes the programs
more complex.

-- Damien