
Re: GUT and P=NP



James A. Donald writes:
> Ray writes
> > 1) By definition, if something can be computed by a Turing machine,
> > then it is an algorithm (Lewis and Papadimitriou).
> > 2) A quantum computer can be simulated by a TM with exponential
> > slowdown. (Claimed by you on the Extropians list, but also claimed
> > by Feynman, I believe, not about QM computers but about QM systems
> > in general.)
> 
> True.

  Therefore it is an algorithm.
 
> > then by (1) and (2), it follows that
> > 3) quantum computers are algorithmic (if not, it would contradict
> > (2) and possibly (1)).
> 
> Suppose our quantum system has thirty-two bytes.
> 
> Then a classical simulation of our quantum system would require
> 2^257 words of memory.
> 
> The computer would require more matter than exists in the universe.
> 
> Each step of the simulation would require 2^514 steps by the computer,
> which even for a computer constructed of very tiny components out
> of all the matter in the universe would still require vastly longer
> than the entire lifetime of the universe.

   We are not talking about physical computers; we are talking about
Turing machines. If there is some *finite* deterministic process to
get from the initial data to the final result, no matter how long it
takes, it is an algorithm. I'm sure I could hand you a composite
number that would require a computer larger and older than the
universe to factor. Does that prove that none of our current factoring
algorithms are actually algorithms, or that brute force isn't an
algorithm?
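
   For concreteness, here is what I mean by brute force: a minimal
sketch of trial division in Python. It is hopeless at the scale of
the number I have in mind, but it is plainly a finite, deterministic
process, and therefore an algorithm.

    # Trial division: the crudest factoring method there is.
    # Finite and deterministic for every composite n, however slow.
    def factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d      # smallest nontrivial factor of n
            d += 1
        return n              # no divisor found: n is prime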

   If you have a different definition of "algorithm", then perhaps
your argument is right, but to me an algorithm is a process to get
from A to B, regardless of how long it takes.
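
   To put numbers on the example quoted above, here is the
back-of-envelope arithmetic in Python (a sketch; the exact constants
depend on how you encode the amplitudes, but the figures quoted fall
out):

    # Naive classical simulation of 32 bytes (256 qubits) of quantum
    # state, assuming one machine word per real/imaginary component.
    qubits = 32 * 8               # 256
    amplitudes = 2 ** qubits      # 2^256 complex amplitudes
    words = 2 * amplitudes        # 2^257 words of memory
    steps = words ** 2            # ~2^514 operations for a dense update
    print(words, steps)

   Astronomical, but still finite.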

> > 
> >    It doesn't matter how slowly the Turing machine runs the
> > simulation, because we allow an arbitrary amount of time, along
> > with the infinite tape, to complete the computation.
> > -Ray
> 
> It does not sound like a very useful algorithm, nor is it one
> that is easy to describe.

  Usefulness is a matter of time complexity, not a condition for
membership in the set of algorithms.
 
> The difference is like the difference in my example of light
> flowing through a grid, as against a Fourier transform, etc.,
> but the difference is enormously greater.
> 
> You say it makes no difference by definition.  I say such
> definitions are misleading when we discuss how problems are
> to be solved.

  Those definitions were invented to solve problems in the first
place.

  I can't think of a single thing which is non-algorithmic except
true randomness or non-determinism. Since no finite axiom system can
prove that a string is truly random, no algorithm is possible for
generating or recognizing one. (Anything with infinite logical depth
would also probably suffice.) Err, I may be mistaken, since I recall
that Chaitin said that you need N bits of formal axioms to prove that
an N-bit string is "elegant" (i.e., that it has no smaller
representation), but I also recall reading somewhere that a truly
random string needs an infinite set of axioms. Perhaps Tim can shed
some light.
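
   As I remember the shape of Chaitin's argument, it is a Berry-style
self-reference. A sketch, with hypothetical helpers (proofs,
elegant_program_in, run, and OVERHEAD are stand-ins; only the
structure of the contradiction matters):

    # Enumerate theorems of an N-bit axiom system, hunting for a proof
    # that some program much longer than this one is elegant.
    # OVERHEAD (hypothetical constant): the size of this search program.
    def berry(axioms):
        for theorem in proofs(axioms):            # hypothetical helper
            p = elegant_program_in(theorem)       # hypothetical helper
            if p is not None and len(p) > len(axioms) + OVERHEAD:
                return run(p)   # we reproduce p's output with a shorter
                                # program, so p was not elegant after all

   If the system ever proved a long program elegant, this short
program would contradict the proof, so an N-bit system can certify
elegance only for programs not much longer than N bits.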

  Perhaps another example is a physical process able to solve the
halting problem. Imagine a time-traveling UTM. Call it as a
subroutine. All it does is run your program and wait. If the program
ever halts, it sends a signal back in time; otherwise it runs
forever. Thus, you feed the TT-UTM the algorithm you want to check.
If the program halts, the signal travels back in time from the far
future to arrive during the next "tick" of your current program. If
you receive no such signal, then either the universe died before the
algorithm halted, the machine broke down, or the algorithm doesn't
halt.
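
   As an interface, the machine would look something like this (a
sketch; every primitive below is hypothetical physics, nothing here
is buildable):

    # The TT-UTM called as a subroutine.
    def tt_utm(program, data):
        # Hypothetical: run without bound in the far future; a halt
        # sends a signal back to the next tick of the present.
        start_in_far_future(program, data)        # hypothetical
        if signal_received_this_tick():           # hypothetical
            return "halts"
        return "no signal"  # universe died, machine broke, or no halt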

  The traditional "proof by contradiction" of the insolubility of the
halting problem doesn't work here. The algorithm used to test the
contradiction simply doesn't halt: it calls the TT-UTM recursively
forever, creating an infinite number of them. In fact, this questions
the validity of the halting proof itself, since the contradiction
derived isn't a valid input to the halt-checking machine in the first
place; or else the halting proof logically disproves the existence of
time travel! Inputting an algorithm which calls the halt checker into
the halt checker should be considered an exception, like "division by
zero", in which case the halt-checking TT-UTM returns "exception:
input algorithm recurses forever". Thus, two new classes of
algorithms arise: those checkable by a TT-UTM and those which are
not. The latter should be left up to an even more powerful
machine. ;-)
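
   Concretely, the usual diagonal program looks like this, aimed at
the hypothetical tt_utm sketched above:

    # The standard "contrary" program from the halting proof.
    def contrary(program):
        if tt_utm(program, program) == "halts":
            while True: pass   # then loop forever
        else:
            return             # otherwise halt at once

    # Feed it to itself: the far-future run of contrary(contrary)
    # starts another TT-UTM, which starts another, and so on. The
    # call never returns, so the classical contradiction is never
    # actually reached.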

(This violates the assumptions of Church's thesis, since the machine
can perform an infinite amount of computation in a single step. Oh well.)

-Ray
"Everything is an algorithm, even you!"