
Entropy vs Random Bits

I've been watching the debate and discussion unfold on usable sources of 
random data from environments, user actions, etc.  I have a vocabulary 
question (and something of a bone to pick as a mathematician and physicist). 


Usually, the term "entropy" is used to characterize one of two different 
things: (i) random data itself, as in "300 bits of entropy," and (ii) the 
"randomness" of data (i.e. a high degree of variance in a statistic drawn 
from it), as in "you can find a lot of entropy in the low order bits of a 
timed interval between keystrokes."  I suspect that other shades of meaning 
are intended in other uses as well.
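
To make usage (ii) concrete, here is a minimal sketch of the kind of 
measurement the posters seem to have in mind (the interval values are made 
up, and Python is used purely for illustration): take the low order bit of 
each inter-keystroke interval and estimate the empirical Shannon entropy, 
in bits per symbol, of the resulting bit stream.

    import math
    from collections import Counter

    def bits_per_symbol(symbols):
        # Empirical Shannon entropy of a sequence, in bits per symbol.
        counts = Counter(symbols)
        total = len(symbols)
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    # Hypothetical inter-keystroke intervals, in milliseconds (made up).
    intervals = [183, 142, 251, 97, 164, 208, 133, 176, 149, 201]

    # Usage (ii): how "random" are the low order bits of those intervals?
    low_bits = [t & 1 for t in intervals]
    print(bits_per_symbol(low_bits))   # near 1.0 if the bits look unbiased

A value near 1.0 bit per symbol is what posters appear to mean by saying 
there is "a lot of entropy" in those bits.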

This is odd.  The term entropy describes an aspect of thermodynamic 
equilibrium in physical systems.  Although sometimes used as a synonym for 
"random," that definition is vernacular, not technical.  In fact, there is 
no meaningful relationship between "entropy" and random data of the type 
described in the postings related to seed values.  In the presence of a 
perfectly suitable and precise mathematical term (i.e. random), why invent 
new terms?  And why use them to mean at least two different things?

dvw