Re: coding and nnet's
> [email protected] (Timothy C. May) writes:
> At 7:52 PM 11/10/95, Bill Stewart wrote:
>>Schneier's 2nd edition says "Neural nets aren't terribly useful for
>>cryptography, primarily because of the shape of the solution space.
>>Neural nets work best for problems that have a continuity of solutions,
>>some better than others. This allows a neural net to learn, proposing
>>better and better solutions as it does. Breaking an algorithm provides
>>for very little in the way of learning opportunities: You either recover
>>the key or you don't. (At least this is true if the algorithm is any
>>good.) Neural nets work well in structured environments when there is
>>something to learn, but not in the high-entropy, seemingly random world of
>>cryptography."
>>And he doesn't give any references.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> This paragraph sounds a _lot_ like what I wrote in sci.crypt a while back
> on the usefulness of AI and neural nets for crypto. It sounds almost
> exactly like the paragraph I wrote, in fact.
As it happens, I saved that one, because I thought the "needle standing up
from a flat plain" metaphor was so apt; I've used it in a couple of talks
without remembering where I got it. Here's your message to sci.crypt:
From: [email protected] (Timothy C. May)
Subject: Re: Neural nets & Crypto
Message-Id: <[email protected]>
Date: Wed, 10 Aug 1994 17:53:13 GMT
Neural nets are not likely to do well with modern ciphers (e.g., RSA,
IDEA, DES, etc.), mainly because of the shape of the solution space.
Instead of the "rolling hills and valleys" that neural nets (and
related methods, such as genetic algorithms, simulated annealing,
etc.) do well in, the solution space for modern ciphers offers very
little in the way of "learning" opportunities: you either have the
solution (the key), or you don't.
Think of a needle standing up from a flat plain... an NN or any other
hill-climber could wander for years and never find it.
I suspect there are uses in peripheral aspects, such as guessing
passwords (when people have not picked high-entropy passwords, but
have instead used familiar names). Or in traffic analysis.
But the move in modern cryptology is definitely away from using
anything with "structure" that can be learned. Put another way, neural
nets and such work well in structured environments, where there's
something to _learn_, but not in the high-entropy, seemingly random
world of encrypted data.
--Tim May
A subsequent message (which I also saved) dealt with genetic programming
and was also interesting.
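The solution-space point is easy to see in a toy experiment. Here's a quick
sketch (Python; my own toy, not anything from Tim's post or from Schneier)
that runs the same single-bit-flip hill-climber against two fitness functions
over a 32-bit "key": one that rewards partial matches (rolling hills), and one
that only rewards an exact match (the needle on the plain). The first recovers
the key in a few hundred steps; the second just wanders.

    import random

    KEY_BITS = 32
    SECRET = random.getrandbits(KEY_BITS)      # the key we're "attacking"

    def hamming_fitness(guess):
        # Rolling hills: number of correct bits.  No real cipher leaks this.
        return KEY_BITS - bin(guess ^ SECRET).count("1")

    def needle_fitness(guess):
        # Needle on a flat plain: all or nothing, like keying a good cipher.
        return 1 if guess == SECRET else 0

    def hill_climb(fitness, steps=100000):
        guess = random.getrandbits(KEY_BITS)
        best = fitness(guess)
        for _ in range(steps):
            neighbor = guess ^ (1 << random.randrange(KEY_BITS))  # flip one bit
            score = fitness(neighbor)
            if score >= best:                   # keep anything that isn't worse
                guess, best = neighbor, score
        return guess == SECRET

    print("rolling hills, key recovered:", hill_climb(hamming_fitness))
    print("flat plain,    key recovered:", hill_climb(needle_fitness))

With hamming_fitness every bit flip is visibly better or worse, so the climber
homes in almost immediately; with needle_fitness every guess scores the same
and the search is just a random walk through 2^32 keys. A real cipher, of
course, gives you nothing like hamming_fitness; that's the whole point.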
Jim Gillogly
21 Blotmath S.R. 1995, 01:22