There's a hole in your reasoning

```
>
>
>
> On Thu, 3 Aug 1995, Nathan Loofbourrow wrote:
>
> > Nathan Zook writes:
> >  > > And is there any way to build trusted system out of small, verifiable
> >  > > pieces?  Since the way they're connected could also be questioned, I
> >  > > suspect that when you put enough of them together it's just as bad as
> >  > > the case of a single, monolithic program.  But this isn't my area, so
> >  > > I don't know.
> >  >
> >  > No.  This was essentially proved during the first third of this century.
...
> There is "active research".  Why is a mystery to me.  Godel's proof was
> the completion of several works.  One of the early ones demonstrated that no
> axiom system can be demonstrated to be consistent by a weaker one.  Now the
> "research" in this area has consisted, in part, of translating
> algorithms into statements in axiomatic systems.  The problem is that
> either we cannot prove that these systems are consistent or they are
> extremely limited in what they can do.  (In particular, recursion seems
> to be anathema.)  But the word "proof" in the previous sentence has to be
> taken with a grain of salt, because any axiom system that we use to prove
> things about another axiom system has to be at least as complicated.

You hit the nail right on the head when you said:
"or they are extremely limited in what they can do"

That's exactly the point.  We cannot prove programs with general purpose
functionality to be secure, because they are not.  But we may well be
able to prove a lot of security properties about programs that are not
general purpose.  For example, a Web server that only does GET and a
gopher server (not gopher plus) and a mail server may all fit the bill.
And by coincidence, these are exactly the sorts of programs we want to be
able to prove security properties about.

> This is why the "not a Turing machine" assertion that the "Professor" made
> is important.  We know that Turing-machine behavior is undecidable, so if
> we want to limit behavior, we can't have one.  BUT---we don't know that
> being a Turing machine is equivalent to having "unpredictable" behavior.
> Furthermore, a "proof" of the "not a Turing machine" assertion is going
> to have to be done by--you guessed it--a computer.  And this computer is
> running a program which definitely IS a Turing machine, if it is capable
> of "proving" that other (suitably non-trivial) programs are not Turing
> machines.

I think in the case of simple (i.e.  short and written for the purpose)
programs these proofs could reasonably be done by hand.  In fact, I
think we could create a theorem verifier that we could prove to only
verify true theorems as true.  Some theorems would never be proven one
way or the other, and others might be proven false, but some things,
particularly the ones we need to bootstrap the theorem proof technology
and things like the properties of a secure W3 server, could fit into this
schema.

> Why must this be done on a computer?  Because the program under
> consideration is thousands of machine instructions long.  And each
> instruction will be translated into dozens of statements in the axiom
> system.  So any attempted proof will be beyond human ability.

Not in the case of programs like the secure W3 and Gopher servers.  They
are under 100 lines long.  They are also designed to allow easy proof of
the desired properties.

...
> So in each case, complex (in the technical sense) behavior is exhibited
> by outlandishly simple systems.  Somehow the _interactions_ of these simple
> and predictable systems become unpredictable.

But this is only true for certain classes of systems.  By building other
classes of systems explicitly designed not to have those properties, we can
build up substantial systems with demonstrable protection properties.

> That is why I consider this to be a closed subject.

I think you should reopen your thinking.

--
-> See: Info-Sec Heaven at URL http://all.net
Management Analytics - 216-686-0090 - PO Box 1480, Hudson, OH 44236
```
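The GET-only Web server argued for in the thread can be sketched in a few dozen lines. This is a minimal illustration under assumed names (`handle_request` and `DOCROOT` are inventions for this sketch, not the author's actual under-100-line code): the point is that a server which refuses every method except GET, and every path containing `..`, has few enough reachable behaviors to audit by hand.

```python
# Hypothetical sketch of a GET-only Web server's request handler.
# DOCROOT and handle_request are invented names for illustration.
DOCROOT = "/var/www"

def handle_request(request_line, read_file=None):
    """Return an HTTP status for one request line.

    Every method except GET is refused, and any path component equal
    to ".." is refused before the filesystem is touched, so the set
    of reachable behaviors stays small enough to reason about.
    """
    parts = request_line.split()
    if len(parts) < 2 or parts[0] != "GET":
        return "501 Not Implemented"
    path = parts[1]
    if not path.startswith("/") or ".." in path.split("/"):
        return "403 Forbidden"
    if read_file is None:  # real file access by default; injectable for testing
        read_file = lambda p: open(p, "rb").read()
    try:
        read_file(DOCROOT + path)
    except OSError:
        return "404 Not Found"
    return "200 OK"
```

Each claimed security property (no non-GET method served, no path escaping the document root) corresponds to a single early-return branch with no loops over attacker-controlled data, which is what makes proof by hand plausible for programs of this shape.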
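The "theorem verifier that we could prove to only verify true theorems as true" can likewise be sketched. Below is a toy Hilbert-style proof checker (`check_proof` and the tuple encoding of formulas are assumptions made for illustration, not any system the thread refers to): a proof line is accepted only if it is a given axiom or follows from two earlier lines by modus ponens. Soundness is easy to argue precisely because the checker has exactly two ways to accept a line.

```python
def check_proof(axioms, lines):
    """Verify a Hilbert-style proof.

    Formulas are tuples; implication is encoded as ("->", A, B).
    Each proof line must be an axiom, or must follow from two earlier
    lines by modus ponens: from A and ("->", A, B), conclude B.
    The checker never affirms a line it cannot justify, which is the
    "only verifies true theorems" property discussed above.
    """
    proved = []
    for formula in lines:
        ok = formula in axioms
        if not ok:
            # Look for premises A and ("->", A, formula) among earlier lines.
            for a in proved:
                for b in proved:
                    if b == ("->", a, formula):
                        ok = True
        if not ok:
            return False
        proved.append(formula)
    return True
```

A checker this small is the kind of bootstrap artifact the thread has in mind: one could plausibly verify it by hand, then use it to check proofs about larger programs.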