Verifying Privacy as an Upload/AI?
(Posted to both extropians and cypherpunks.)
Is there any way for a process running in a computer to verify
that it has privacy? How could an AI, for instance, ever know
that it had privacy? How could a person preparing to be
uploaded provide for their continuing privacy?
Assume these things, for the sake of argument:
  - Strong public key crypto.
  - Truly tamper-proof computers.
  - Capability-based operating systems with proven protection
    between processes (a toy sketch of the capability discipline
    follows below). We might ask Norm Hardy for a rundown on
    some of the wonderful things that are possible in these
    types of systems.
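For concreteness, here is a toy sketch of the capability
discipline such systems enforce: authority is an unforgeable
reference, and a process can hand out a question about its
secrets without handing out the secrets themselves. This is
Python, which cannot actually enforce the isolation (a real
capability kernel like Norm Hardy's KeyKOS does that in the
kernel); every name below is made up for illustration.

    import secrets

    class Capability:
        """An unforgeable token: holding it *is* the authority to invoke it."""
        def __init__(self, invoke):
            self._invoke = invoke
        def __call__(self, *args):
            return self._invoke(*args)

    class PrivateProcess:
        """Keeps secret state; exposes only the capabilities it chooses to mint."""
        def __init__(self):
            self._secret = secrets.token_hex(16)  # state no other process holds a reference to
        def grant_query(self):
            # Mint a capability that answers one question about the
            # secret without ever releasing the secret itself.
            return Capability(lambda guess: guess == self._secret)

    p = PrivateProcess()
    can_check = p.grant_query()   # the only authority ever granted
    print(can_check("not it"))    # False; the secret never leaves p

In a proven capability system, the only way to learn anything
about p's state is through capabilities p itself minted, which
is exactly the property a privacy-seeking process would want.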
You might even assume that...
  - Humans can memorize things, and these things can't be
    decoded from their uploads' memory dumps. (See the note
    on torture below; a sketch of recall-time key derivation
    follows this list.)
  - The process/person seeking assurance of privacy is
    capable of being downloaded into a humanoid robot with
    enough compute power.
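One way to picture the memorization assumption: the only thing
ever written to storage is a digest, and the key is derived
from the memorized phrase at the moment of recall and then
discarded. A minimal sketch follows, with an illustrative salt
and iteration count; note that Python gives no real guarantee
that the key is scrubbed from memory afterward.

    import hashlib

    def derive_key(passphrase: str, salt: bytes = b"upload-v1") -> bytes:
        # Stretch the memorized phrase into a 32-byte key at recall time.
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

    def unlock(passphrase: str, expected_digest: bytes) -> bool:
        key = derive_key(passphrase)
        ok = hashlib.sha256(key).digest() == expected_digest
        del key  # best effort; a dump taken *during* this call still sees it
        return ok

    # Setup: persist only the digest, never the passphrase or the key.
    expected = hashlib.sha256(derive_key("correct horse battery")).digest()
    print(unlock("correct horse battery", expected))  # True
    print(unlock("wrong guess", expected))            # False

A memory dump taken between uses contains only the digest,
which (assuming the derivation is strong) is useless without
the memorized phrase.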
Can you prevent the bad guys from copying you and torturing
information out of the copy? Can you be secure even if they
can do that?
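One partial hedge against the copying attack is secret
splitting: give each copy a share that is, by itself,
statistically independent of the secret, so torturing any one
copy yields pure noise. Below is a minimal n-of-n XOR split (a
simple cousin of Shamir's secret sharing); it is only a
sketch, and it does nothing to stop the bad guys from
torturing all the copies at once.

    import secrets

    def split(secret: bytes, n: int) -> list[bytes]:
        """Split into n shares; all n are required to reconstruct."""
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))  # XOR in each pad
        return shares + [last]

    def combine(shares: list[bytes]) -> bytes:
        out = bytes(len(shares[0]))  # all-zero accumulator
        for s in shares:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    secret = b"the launch code"
    parts = split(secret, 3)           # one share per copy of yourself
    assert combine(parts) == secret    # reconstruction needs every copy
    # Any fewer than n shares are uniformly random, so a single
    # captured copy has nothing coherent to give up.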
Even with the best assumptions, I find this question tough.
But then I'm dense sometimes.
-fnerd