Security Scruffies vs Neats, revisited
This is an attempt to restart the discussion in a slightly different direction.
I've been giving the topic some thought since Tim's truncated essay
appeared. But when I re-read it just now, I realized that I had read my own
interpretation of "scruffy" and "neat" into it.
IMHO, the critical property of AI scruffies is that they believe in the
value of some notion of emergent behavior -- if you build it right, it'll
surprise you and do something clever and unexpected to fulfill its
objectives. The "neats" have to know exactly why the behavior emerged, but
the scruffy methodology almost never allows such a detailed analysis to
succeed.
Intuitively, I tend to think of scruffies as trying to build biological
processes or concepts into computers -- the goal-seeking built into IP
packets, for instance. The Internet is an impossible artifact if you view
distributed computing with '70s blinders; nobody would have wanted to cede
that much control to largely autonomous routers. Once you drop an IP packet
into the "system," it generally gets to its destination or dies of old age
trying.
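That "deliver or die of old age" behavior can be sketched in a few lines.
This is a toy model with hypothetical names (`forward`, a dict packet), not
real IP code; the one faithful detail is that each hop spends TTL, so a
packet that never reaches its destination eventually expires rather than
circulating forever:

```python
# Toy model of hop-by-hop IP forwarding: each router spends one unit of
# the packet's TTL; the packet either reaches its destination or expires.

def forward(packet, routers):
    """Carry a packet hop by hop until delivery or TTL expiry."""
    for router in routers:
        if packet["ttl"] <= 0:
            return "expired"          # died of old age trying
        packet["ttl"] -= 1
        if router == packet["dst"]:
            return "delivered"
    return "expired" if packet["ttl"] <= 0 else "undeliverable"

pkt = {"dst": "C", "ttl": 3}
print(forward(pkt, ["A", "B", "C"]))  # -> delivered
```

No router in the path needs a global view; the packet carries just enough
state (destination and remaining lifetime) for the system to behave sensibly.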
When I try to apply this style of thinking to security, I find myself going
towards layered defenses. These goal-seeking, semi-biological processes are
somewhat failure prone, so you probably need a set of them to make things
"safe." Falling back on biology, we see "security" in the various defensive
mechanisms developed by plants and animals.
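The arithmetic behind "a set of them" is worth making explicit. Under the
simplifying (and debatable) assumption that the layers fail independently,
an attack succeeds only if every layer fails, so mediocre layers compound
into decent protection. A minimal sketch, with `combined_failure` as a
hypothetical helper:

```python
# Defense in depth under an independence assumption: the chance that an
# attack gets through all layers is the product of the per-layer failure
# probabilities.

def combined_failure(per_layer_failure_probs):
    """Probability that every layer fails, assuming independence."""
    p = 1.0
    for q in per_layer_failure_probs:
        p *= q
    return p

# Three mediocre layers, each failing 20% of the time:
print(round(combined_failure([0.2, 0.2, 0.2]), 3))  # -> 0.008
```

In practice failures are rarely independent -- correlated bugs and shared
assumptions erode the product -- which is exactly why the next worry is how
finely such layers can be controlled.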
But now things start to break down. "Security" these days means more than
defense -- it means access control. "Let me in" as well as "Keep them out."
How do you "tune" or "train" a semi-biological mechanism to exert such fine
control? It's not clear to me that you can. When I read Kevin Kelly's book
"Out of Control," I kept wondering who would want to live with his
semi-biological toasters and heating plants, tolerating burned toast and
frozen bathrooms until the devices finally "learned" how to behave. (But I
shouldn't get started on that book -- I once wrote 20 pages of notes about
how bogus I thought it was.)
In other words, the problem may be with the concept of security itself.
Defense seems to be a biological concept, but security is not. It's too
artificial, involving the reflection of some abstract and arbitrary human
intent. Constructing a subsumption device to collect pop cans is one thing,
but building one to construct a cuckoo clock (or play doorman) is something
else.
Rick.