Re: [DES] Random vs Linear Keysearch.
Peter Trei wrote:
>
> Before I start, let me say that the software I'm writing will search a
> 'chunk' of keyspace linearly. The chunk will be somewhere around
> 31 bits long, which is what I think my index low-end system
> (486, 33 MHz) could check overnight. I do intend to make the
> code interruptible, so that it can suspend and restart work on a
> given chunk. It's just that no result will be forthcoming until the
> chunk is finished (unless it finds the key). BTW, while it's *extremely*
> CPU-intensive, it's not intensive in memory, disk, or communications
> usage.
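
For concreteness, the quoted scheme amounts to roughly the C sketch below.
This is not Peter's actual code; trial_decrypt() is only a placeholder for a
real DES trial decryption against known plaintext/ciphertext, and the sketch
just illustrates a linearly searched, resumable 31-bit chunk.

#include <stdio.h>
#include <stdint.h>

#define CHUNK_BITS 31

/* Placeholder: return 1 if this 56-bit key decrypts the known
 * ciphertext to the known plaintext, 0 otherwise.  A real client
 * would do an actual DES trial decryption here. */
static int trial_decrypt(uint64_t key56)
{
    (void)key56;
    return 0;               /* stub so the sketch compiles and runs */
}

/* Search one chunk linearly, starting at resume_offset so an
 * interrupted run can suspend and restart.  Returns 1 and fills
 * *found if the key is in this chunk, otherwise 0. */
static int search_chunk(uint64_t chunk_base, uint64_t resume_offset,
                        uint64_t *found)
{
    uint64_t span = (uint64_t)1 << CHUNK_BITS;
    for (uint64_t off = resume_offset; off < span; off++) {
        uint64_t key = chunk_base + off;
        if (trial_decrypt(key)) {
            *found = key;
            return 1;
        }
        /* A real client would periodically checkpoint 'off' to disk
         * here, so the chunk can be resumed after a shutdown; no
         * result is reported until the chunk is exhausted. */
    }
    return 0;               /* chunk finished, key not present */
}

int main(void)
{
    uint64_t key;
    if (search_chunk(0, 0, &key))
        printf("key found: %014llx\n", (unsigned long long)key);
    else
        printf("chunk finished, key not present\n");
    return 0;
}
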
I still maintain (and I'm working on some proposals; it just takes more
time than I have to form them well) that there are strategies for
partitioning the keyspace that are likely to be more efficient, given
how likely it is that the key was constructed or derived using certain
common techniques.
Now, if there is **NO** information as to how the key was constructed or
derived, then the usual assumption of a uniformly distributed keyspace leads
to the conclusions we are all familiar with. But you know the adage: work
smarter, not harder. There WILL (almost always) be some knowledge of the
key-generation technique; it is unlikely that the banks are using Geiger
counters. **IFF** there is a priori knowledge of the key-generation
technique, then it should be used if possible.
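
To make that concrete, here is a purely hypothetical C sketch of one kind of
biased partitioning. It assumes, only for illustration (this is NOT the
specific technique I mention below), that the key may have been typed as
eight printable ASCII characters, and it orders the same chunk so that
ASCII-looking keys are tried first. trial_decrypt() is the same placeholder
as in the earlier sketch.

#include <stdio.h>
#include <stdint.h>

/* Same placeholder as before: a real DES trial decryption goes here. */
static int trial_decrypt(uint64_t key56)
{
    (void)key56;
    return 0;
}

/* Treat the 56-bit key as eight 7-bit groups (DES parity bits ignored)
 * and ask whether every group falls in the printable-ASCII range. */
static int looks_ascii_derived(uint64_t key56)
{
    for (int i = 0; i < 8; i++) {
        unsigned c = (unsigned)((key56 >> (7 * i)) & 0x7f);
        if (c < 0x20 || c > 0x7e)
            return 0;
    }
    return 1;
}

/* Two-pass ordering of the same 31-bit chunk: keys that match the
 * assumed generation technique first, the remainder afterwards. */
static int search_chunk_biased(uint64_t chunk_base, uint64_t *found)
{
    uint64_t span = (uint64_t)1 << 31;
    for (int pass = 0; pass < 2; pass++) {
        for (uint64_t off = 0; off < span; off++) {
            uint64_t key = chunk_base + off;
            if (looks_ascii_derived(key) != (pass == 0))
                continue;   /* key belongs to the other pass */
            if (trial_decrypt(key)) {
                *found = key;
                return 1;
            }
        }
    }
    return 0;
}

int main(void)
{
    uint64_t key;
    if (search_chunk_biased(0, &key))
        printf("key found: %014llx\n", (unsigned long long)key);
    else
        printf("chunk finished, key not present\n");
    return 0;
}

If the assumption holds, the interesting subspace is about 95^8, roughly
2^52.6 keys out of 2^56, so the likelier candidates are reached much sooner;
if it doesn't hold, nothing is lost except the ordering. In practice one
would probably bias which chunks get handed out first rather than re-order
within a chunk, but the two-pass version keeps the sketch compatible with
the fixed-chunk scheme quoted above.
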
Now, I happen to be working on ONE SPECIFIC and common generation technique,
and there is some reason to believe that when it is used there is a more
efficient approach to partitioning.
There are a lot of caveats here ... and don't overgeneralize what I'm
after: I'm only espousing these ideas to stimulate other thought along
the same lines.
Regardez.