
NAS Crypto study



	Last week, the National Research Council posted a question to
cypherpunks, asking for opinions.  Here's mine, in draft form; I invite
comments before I send it in.

Adam


NAS crypto question 1

?  How, if at all, do capabilities enabled by new and emerging
?  technology in telecommunications (e.g., key-escrow
?  encryption technologies, digital telephony) and electronic
?  networking make it _easier_ for those who control that
?  technology to compromise and/or protect the interests of
?  individual end users?  Please use as the standard of
?  comparison the ease _today_ of compromising or
?  protecting these interests.  We are interested in
?  scenarios in which these interests might be compromised
?  or protected both individually and on a large scale.  Please
?  be sure to tell us the interests you believe are at stake.


	There are several areas in which the privacy of users is being
reshaped by new technologies.  Who controls those technologies is
fundamental to the privacy question.  When control is held by service
providers, the interests of end users fall by the wayside.  When that
control is distributed, end users naturally have the ability to
protect their own interests.

	Control of technology does not need to be held by service
providers, the government, or any other centralized entity.  It can be
taken, today, by individuals who are concerned enough to do so.  I
will use as my basis for comparison the ease of compromising the
interests of an individual who chooses to protect their communications
with the tools available to them, mainly PGP and the remailer network.
These tools are not yet trivially easy to use, but they are out there
and they are being improved.  Since those tools are available today to
those who are interested, I will use them as a baseline against which
centralized 'security' can be compared.

	The FBI wiretapping bill creates a new power for government: a
guaranteed ability to tap phones.  The change is a subtle one with
large implications.  It creates an additional array of points of
failure in what could otherwise be a secure network.  Law enforcement
agents today have the ability, acquired yesterday through an accident
of technology, to tap phones.  That does not mean the ability should
be preserved.  It is widely known that this ability has been, and
probably still is, abused.[1]  (What do you call an illegal wiretap?
An anonymous informant.)  GAK (Government Access to Keys) codifies a
similar accident: that networks are insecure becomes a design feature.

[1] I'll be adding references to Bamford & Kahn.

	In such a centrally controlled system, there will be points
where the entire system can fail.  Those points of failure could
expose an entire population of users to information leaks.  They may
be well protected, but even the NSA has had agents defect.

	This model is in stark contrast to the situation today, where
individuals can take responsibility for their own encryption.  If
there is no centralized back door, no database of keys, no Law
Enforcement Access Fields, and the like, then the security of each
key must be breached where it is likely to be best protected, namely
in the possession of its user.  I understand the value of my private
keys, and would not disclose them.  A centralized system, by
contrast, makes it substantially easier to damage the interests of
end users, while adding nothing to their protection.
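
	To make the point concrete, here is a minimal sketch of the
decentralized model, written in Python with the present-day
'cryptography' package purely as an illustration (it is not a tool
of the PGP era, and the message text is invented): the private key
is generated and held on the user's own machine, no escrow copy ever
exists, and an attacker must therefore reach each user individually.

# Sketch only: local key generation and end-to-end encryption with no
# escrowed copy of the private key.  Uses the modern Python
# 'cryptography' package for illustration.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Each user generates a keypair on their own machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # only this half is ever shared

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# A correspondent encrypts to the public key...
ciphertext = public_key.encrypt(b"meet at the usual place", oaep)

# ...and only the holder of the private key can read it.  There is no
# central database of keys to subvert; the key must be taken from the
# machine of the user who made it.
assert private_key.decrypt(ciphertext, oaep) == b"meet at the usual place"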

	You could argue that the government has an excellent track
record in protecting information.  This is only partly true.  The
government did an excellent job of covering up radiation tests on the
mentally ill; it has done a poor job of concealing Social Security
numbers, which the IRS prints on the outside of tax documents,
claiming the US mail is secure[2].  Only when there are institutional
interests at stake does the government show any interest in protecting
information about citizens.  Doubtless, any accidental or illegal
revelation of keys would be carefully classified, along with the names
of the affected individuals.

[2] I'll be adding a reference to RISKS digests.

	The bureaucrat, not having a personal stake in the security of
the keys, will be more lax than an individual.  No one believes that
agents of the government will look out for them as well as they look
out for themselves.  If they did, perhaps we'd all be happy to let the
IRS compute our taxes.  It would sure make life easier.  But we don't.
The individual is always the best protector of their own interests.

	To hammer on the point: there have been repeated cases of INS
employees selling green cards, of FBI agents creating rules of
engagement later found unconstitutional, and of agents from every
three-letter agency in Washington selling out to the Russians.  To
quote an NSA historian with whom I spoke about Aldrich Ames at the NSA
museum, "It's amazing how cheaply someone will betray their country."

	If we mandate backdoors in a system, they will be found and
exploited.  Give end users control of the technology, including source
code and access to algorithms, and they are empowered to choose a
level of security that is appropriate.  The government cannot make
that choice for them, and should not try.

	A few scenarios to illustrate my points:

*******************

	Postulate the existence of a rich and powerful drug lord.  He
has millions of dollars with which to protect his large shipments.
Let's call him Pablo.  Pablo decides he needs to listen in on DEA
conversations.

	Plot A: A system of GAK (government access to keys) is put in
place.  Let's call it Clipper, for convenience.  Let's also say that
the DEA is using Clipper to protect its phone conversations about
Pablo.

  	Pablo finds a low-level employee of some key escrow agency.
Let's call him Aldrich.  Aldrich likes fast cars.  Pablo buys Aldrich
a fast car in exchange for 8 or 10 keys, easily smuggled out on a
floppy disk.  Aldrich has just broken the law, and will doubtless be
providing keys to Pablo for a very long time.  Pablo, meanwhile, is
laughing at the DEA agents, to whose daily phone meeting he listens.

	Plot B: There is no GAK.  The DEA uses PGP (having obtained
copies from European FTP sites, so as to avoid exporting it to its
agents in South America).  The DEA agents hunting Pablo are the only
ones with their keys.  They know what Pablo does to DEA agents.  Pablo
can't get their keys, so our heroic agents catch him and throw him in
jail forever.

	(Naturally, we can substitute any well-funded enemy of law
enforcement for Pablo.  The KGB works well.)



*******************

	Second scenario: a group of terrorists plans to blow up the
World Trade Center.

	Plot 1: Our terrorists are smart, and don't call attention to
themselves.  Despite the FBI's ability to tap their communications,
there is no reason to be watching the soon-to-be terrorists, and they
set off a bomb.

	Plot 2: For some reason there is probable cause, leading to
the issuance of a warrant.  The FBI taps the communication lines and
discovers that the terrorists are using VoicePGP.  They obtain a
further warrant and, through the use of an ELINT monitoring device
placed near the computer in question, get all the information they
need.

	This scenario is different in that the terrorists are in
locations known to the FBI, whereas Pablo does not know where the DEA
agents are.  If the location of the terrorists is not known, it is
difficult to tap into their communications links.


	In closing, only by allowing end users to continue controlling
their own security technology can you avoid creating a system in which
the interests of large blocks of users can be easily compromised.



Adam Shostack


-- 
"It is seldom that liberty of any kind is lost all at once."
						       -Hume