Apologies and Clarifications -
First of all, I would like to sincerely apologize for my personal
attacks on Perry. It is the first time that I have ever done that, and I
apologize for wasting your time and mine in that regard. He is obviously
going to continue to pontificate and rehash old arguments forever.
Accordingly, to paraphrase Chief Joseph, "from where the sun now sits, I
will fight Perry Metzger no more, forever." I further apologize for any
offense that I may have caused.
As to other matters:
Instead of replying to the attacks being fabricated by Perry and others,
I have opted to recapitulate the matter at hand as perspicaciously as my
limited abilities permit.
As we know, Shannon was not a prolific writer. We also know that
information theory has not advanced very far in the 50 years since
Shannon's groundbreaking work. Why? Because of the very issues that are
at the heart of this matter: the differences between black, white, and
the multitude of grey hues in between. I am extremely skeptical, and
believe many readers may be too, of people who go around flaunting their
understanding of Shannon, entropy, and its application in absolute,
unequivocal mathematical terms. I hold that such a position, except in
terms of delimiters, does not reflect an understanding of Shannon's
impartations but is, au contraire, evidential proof that they have no
real appreciation of Shannon at all. That is in no way intended to
denigrate Shannon's very important contribution to our endeavors.
I embrace the predicate that Shannon, in terms of mathematical
certainty, has only defined the boundary conditions of the playing field
within which we must play our games. If we stray outside that field, we
are errant in the absolute sense. If we instead play within those
mathematically certain boundaries, we may be in error for other reasons,
but Shannon cannot be cited as the causal agent.
The recondite, "parti pris" hijacking of the definition of the term OTP
on the basis of conflict with Shannon is ludicrous when such usage is
apodictically within Shannon's delimiters. I have witnessed several
definitions of an OTP which I maintain are entirely consistent with
Shannon. Among them are: an encryption where any ciphertext may in fact
be an encrypted expression of any plaintext of the same or lesser
length; or an encryption where the only information obtainable from the
ciphertext is the maximum plaintext length; and so forth. All of these
definitions share the essential quality that, at the bit level, they can
be reduced to the equation:
Pb xor Ub = Cb
where Pb is a plaintext bit, Ub is an unknown bit, and Cb is the
resultant ciphertext bit. Of course, the combining form is normally XOR,
but it could be table lookup, modular addition, or some other invertible
operation. The important thing is that so long as only Cb is known, it
is impossible to determine either Pb or Ub.
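To make the point concrete, here is a minimal sketch (a few lines of
Python of my own devising, not anything from the IPG code) showing that
an observed ciphertext bit is consistent with either plaintext bit value
so long as Ub remains unknown:

    # For any observed ciphertext bit Cb, both Pb = 0 and Pb = 1 remain
    # possible: each is explained by a different, equally likely, Ub.
    for Cb in (0, 1):
        for Pb in (0, 1):
            Ub = Pb ^ Cb        # the unknown bit that would yield Cb
            assert Pb ^ Ub == Cb
            print(f"Cb={Cb}: Pb={Pb} is explained by Ub={Ub}")

Knowing Cb alone narrows nothing down; every value of Pb remains equally
possible until something is learned about Ub.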
If somehow the Ubs can be subtracted or cancelled out, for instance
through any discoverable reuse of Ub, even in a derivative form, then
you may be able to recover some or all of the Pbs.
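The classic illustration of such cancellation, again a sketch of my own
and not IPG material, is pad reuse: XORing two ciphertexts made with the
same Ub stream cancels the Ubs entirely and leaves the XOR of the two
plaintexts:

    # Two messages encrypted with the SAME unknown stream: the unknowns
    # cancel, and the attacker is left with P1 xor P2, which leaks
    # structure from both plaintexts.
    pad = bytes([0x5A, 0xC3, 0x17, 0x88, 0x2E])   # reused Ub stream
    p1, p2 = b"HELLO", b"WORLD"
    c1 = bytes(a ^ b for a, b in zip(p1, pad))
    c2 = bytes(a ^ b for a, b in zip(p2, pad))
    leaked = bytes(a ^ b for a, b in zip(c1, c2))
    assert leaked == bytes(a ^ b for a, b in zip(p1, p2))  # pad cancelled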
The assertion that a Ub must be an Rb, a truly random bit, is irrelevant
ideology unless there exists a means to convert the unknown bit, Ub,
into a known bit, Kb.
That is the sublime basis of the IPG algorithm: to generate a stream of
"unknown bits" which cannot be analytically reconstituted in the absence
of the OTP generator key, and possibly other related key-like
parameters. The additional caveat, of course, is that the key cannot be
guessed, nor can it be derived by brute force methodologies, both of
which are patently impossible with the IPG algorithm. You do not have to
be a Stephen Hawking to comprehend why that is a fact; each of you, with
possible minor exceptions, will be able to discern it, because it
quickly becomes self evident as you ply the algorithm.
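I will not reproduce the IPG generator here, so as a purely illustrative
stand-in, assume a hypothetical keyed generator built from a hash of the
key and a counter; it shows the general shape of the scheme, a secret
key driving a deterministic stream that is XORed with the plaintext, but
it is emphatically NOT the IPG algorithm:

    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        # Hypothetical stand-in generator: hash(key || counter).
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def encrypt(key: bytes, data: bytes) -> bytes:
        ks = keystream(key, len(data))
        return bytes(p ^ u for p, u in zip(data, ks))

    # XOR is self-inverse, so the same call decrypts.
    key = b"a secret generator key"
    ct = encrypt(key, b"attack at dawn")
    assert encrypt(key, ct) == b"attack at dawn"

Without the key, the generator's output bytes play the role of the Ubs
above.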
There are those who will advance the contention that the ciphertext
cannot in fact represent any possible plaintext. That is incontestably
true. In fact, with reasonably long ciphertexts, say for example an
encrypted FAX, and unlike a hardware sourced OTP, there will be only one
meaningful plaintext that can be derived from the resultant ciphertext.
Does that mean that a given ciphertext might not represent any plaintext
of that same, or lesser, length? One would think so, since it obviously
cannot. The problem, though, is exactly how you go about proving that
any given plaintext cannot be represented by a known ciphertext. There
are two prongs to such a proof, both of which must be substantiated in
any specific case at hand:
1. Inclusionary Proof - the one most frequently advanced alone, which,
   simply stated, says that since each encryptor stream may represent
   any possible encryptor stream of that same length, the ciphertext may
   be the XOR product of any plaintext of that same length (see the
   pad-recovery sketch after this list). Obviously, a hardware sourced
   OTP DOES meet that proof. Also, it is patently evident that the IPG
   software OTP DOES NOT meet the inclusionary proof test.
2. Exclusionary Proof - which is equally valid and, simply stated,
   consists of two propositions:
   A. Can any plaintext be encrypted with the OTP? Obviously the answer
      is YES for both hardware sourced OTPs and the IPG software sourced
      OTPs.
   B. If 2.A is true, is it possible to exclude any plaintext
      possibility? Obviously, in the case of hardware sourced OTPs, no
      plaintext can possibly be excluded. What about the IPG software
      OTPs?
   - One method that could theoretically be used to exclude a possible
     plaintext for the software OTP would be to try all possible keys by
     brute force until the correct one is obtained. That, however, would
     be difficult to say the least; as a matter of fact, it is
     impossible:
     If every subatomic particle in our known universe were a Googol (1
     with 100 zeroes after it) of Cray T3E computers, and each of them
     had been brute forcing a partition of the possibilities at the rate
     of one Googol of them per picosecond (which could not be done, but
     assume that it could), and they had been doing so since the Big
     Bang, say 20 billion years ago, and they were to continue for the
     next 20 billion years (beyond the life expectancy of our solar
     system/galaxy, and perhaps the universe), then approximately 10^322
     of the possible OTPs would have been tried.
     Thus it would still take 10^34000 times as long as that to try all
     of the possibilities; that is, they would have tried only one part
     in 1 with 34000 zeroes after it. Not theoretical infinity by any
     means, but a very, very large extra-astronomical number. Brute
     force is patently impossible (see the arithmetic check after this
     list).
   - Another possibility might be to guess it. Anyone recognizes that
     would not be practical, and how would you know that you were right?
     All of the IPG OTP generator, aka encryption, keys are themselves
     OTPs.
   - Another possibility would be to analytically exclude the invalid
     possibilities. As stated previously, and as you can discover for
     yourself, this reduces back down to the exact equivalent of the
     brute force method set out above. With the algorithm, there is no
     difference between analytic exclusion and brute forcing; they are
     one and the same.
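For the inclusionary proof referenced in item 1, a minimal sketch (my
own illustration) shows why a hardware sourced OTP meets it: for any
ciphertext and any candidate plaintext of the same length, one can
always exhibit a pad connecting the two, so no plaintext can be ruled
out from the ciphertext alone:

    # Given any ciphertext, ANY plaintext of the same length could be
    # the message: the pad c xor p is always a valid pad "explaining" it.
    ciphertext = bytes([0x1F, 0x9A, 0x43, 0x6B, 0x77])   # intercepted
    for candidate in (b"HELLO", b"DENYI"):               # any guesses
        pad = bytes(c ^ p for c, p in zip(ciphertext, candidate))
        # Re-encrypting the candidate with that pad reproduces the
        # ciphertext exactly, so the candidate cannot be excluded.
        assert bytes(p ^ u for p, u in zip(candidate, pad)) == ciphertext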
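As for the brute force arithmetic in item 2, the figures can be checked
with a few lines of integer arithmetic. The particle count of 10^80 is
my own assumed estimate, not a figure from the argument above; with it
the tally lands around 10^310, in the same extra-astronomical
neighborhood as the 10^322 cited, and the conclusion is unchanged:

    # Rough check of the brute force claim, in powers of ten.
    particles   = 10**80                    # assumed estimate
    computers   = particles * 10**100       # a Googol per particle
    keys_per_ps = 10**100                   # a Googol tried per picosecond
    ps_per_year = int(3.156e7) * 10**12     # seconds/year * ps/second
    years       = 40 * 10**9                # 20 billion past + 20 future
    tried = computers * keys_per_ps * ps_per_year * years
    print(len(str(tried)) - 1)              # ~310: magnitude of keys tried
    # Against a keyspace ~10^34000 times larger still, the fraction
    # searched remains utterly negligible.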
So in fact, any known ciphertext may indeed be an encryption
of any plaintext of that same, or lesser, length, because you
cannot prove that it is not a specific plaintext.
Some will contentiously assert that the key and other parameters might
be hacked. While we include within the system facilitators that reduce
that risk to a minimum, and to near zero if you use simple precautions,
it is true that the hacking possibility exists. However, that
possibility exists for RSA, PGP, hardware sourced OTPs, or any other
form of security or encryption. We are better positioned in that regard
than others, but in terms of the human element, as we all know, there is
no such thing as perfection.
With warmest regards,
Don Wood