Re: Will Monolithic Apps Dominate?
On Sun, 20 Jul 1997, Robert Hettinga wrote:
> In an absolute sense, of course, more processing power is more software
> waste. My Mac wastes more cycles than I can physically count in a lifetime
> waiting for my next keystroke, and, after more than half a lifetime at the
> keyboard, I am a pretty fast typist.
The problem is that it wastes all those cycles until you hit Enter (or OK),
and only then begins whatever arduous task, usually behind a slowly moving
progress bar or dialog box. What I really hate are the installs that are
going to take 15 minutes but insist that you not do anything else while
they copy files. Unless you have already started something in the
background, which will happily keep running, but there is no widget to
shrink the install window out of the way.
That will probably all go away "RSN", but it will take something like
web-based process dispatch, where you are filling in a form (maybe without
knowing it) and it is submitted to be dispatched to the least busy
processor on your network. Plan 9 was trying to do something like this,
and the current SMP stuff at least attempts to do scheduling - we already
have this technology in a very mature state from mainframes and things
like VAXclusters.
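
Just to make the idea concrete, here is a minimal sketch in Python of
picking the least busy box; the node names, load figures, and dispatch()
helper are all invented for illustration, and this is not how Plan 9 or
any real scheduler actually does it.

    def least_busy(node_loads):
        # Pick the node reporting the lowest load average; node_loads maps
        # hostname -> load as reported by each box on the local net.
        return min(node_loads, key=node_loads.get)

    def dispatch(job, node_loads):
        # Hand the submitted "form" (a job description) to the idlest node.
        node = least_busy(node_loads)
        print("dispatching %r to %s" % (job, node))
        return node

    # Three hypothetical CPU boxes, each reporting its current load:
    dispatch("render page.html", {"cpu1": 0.2, "cpu2": 1.7, "cpu3": 0.9})
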
But to go back to your comparison of lines vs. nodes: the network and
busses are getting faster, and the processors are too, at about the same
rate, so both lines and nodes are getting cheaper, though not uniformly.
If lines become cheaper (in bits/sec/$) than nodes, we will have 10 CPU
boxes for 40 keyboard/display boxes, or some other appropriate ratio.
Otherwise some combination of distributionware will become available so
you can run CPU-intensive apps (e.g. SPICE electronic modeling) the way
DES keyspace searches are run, though on your local net. Even something
like a "batch" program for Windows NT, but the GUI paradigm makes clicking
on Go a little difficult :).
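
To illustrate what that distributionware might do, here is a rough sketch
of splitting a keyspace into per-box ranges, in the spirit of the DES
search efforts; the box count and keyspace size are toy numbers made up
for the example.

    def split_keyspace(total_keys, num_boxes):
        # Divide the keyspace into contiguous ranges, one per box; the
        # last box picks up any remainder.
        chunk = total_keys // num_boxes
        ranges = []
        for i in range(num_boxes):
            start = i * chunk
            end = total_keys if i == num_boxes - 1 else start + chunk
            ranges.append((start, end))
        return ranges

    # Ten CPU boxes sharing a toy 2**40 keyspace:
    for start, end in split_keyspace(2 ** 40, 10):
        print("search keys %d through %d" % (start, end - 1))
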
There is another problem with software being purchased (or rented?) in the
model you describe. It becomes cheaper to purchase it in bulk (e.g. one
transaction giving access to an OS and a library, with the most used
components cached locally) than to go out and buy each piece as I need it.
I have a toolbox in case I need to fix something. If I had to drive to the
store for each tool as I needed it, the sunk costs would soon exceed the
price of the tool itself. It would get worse if I tried to determine the
best price or best tool from among several stores. I purchase one toolbox
which handles 99.5% of the cases I am likely to see and pay only one set
of sunk costs.
E$ transactions have a very tiny transaction cost, but it is not zero, and
because they require crypto it is noticeable in CPU time. If you had to
purchase 100 components to run Netscape (minimally the graphics, HTTP
parsing, and a few other things first, then another component every time
you clicked on a new type of object), each having to be transacted and
downloaded, it would be worse than the existing bloatware. Much like the
cost of email is not zero, but it is bundled into my monthly ISP payment,
since the cost of tracking it would exceed the cost of transmission. A
large amount of software may eventually become a public good. But since I
may occasionally need a special tool which is not in my box, I may still
want access to special software or capacity.
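
As a back-of-the-envelope sketch of why 100 small purchases lose to one
bulk one: the figures below are invented purely to show the shape of the
comparison, the point being that a fixed per-transaction overhead (crypto
plus connection setup) gets paid once instead of 100 times.

    components      = 100   # pieces needed to run the app (assumed)
    crypto_overhead = 0.5   # seconds of CPU per signed e$ transaction (assumed)
    download_setup  = 2.0   # seconds of connection setup per purchase (assumed)

    piecemeal = components * (crypto_overhead + download_setup)
    bulk = crypto_overhead + download_setup  # one transaction, cache the rest locally

    print("piecemeal overhead: %.0f s, bulk overhead: %.0f s" % (piecemeal, bulk))
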
--- reply to tzeruch - at - ceddec - dot - com ---