Re: SSL trouble



> From: Piete Brooks <[email protected]>
> 
> One of the things that the latest brloop does is make a call to the master
> server asking for a list of servers to contact :-))
> 
> Note that it is a list, and it tries them in order (all A RRs).

Wouldn't this result in the slaves higher in the list being hammered?
Perhaps you want to do something similar to what the later releases of
bind do with machines with multiple names, and round robin the list.
If you had a list with hosts A, B, and C, the first request would get
ABC, the next BCA, the next CAB, and the next back to ABC.  That would
distribute the work between the slaves a bit better.
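
For what it's worth, the rotation itself is trivial.  Here's a little C++
sketch (nothing to do with the real brloop code; the host names are made
up) of handing out the same list starting at a different host each time:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Hypothetical slave hosts, in the order the master knows them.
    std::vector<std::string> servers = {"slaveA", "slaveB", "slaveC"};

    // Serve a few requests; each one sees the list starting one host
    // further along, so no single slave sits at the top every time.
    for (int request = 0; request < 4; ++request) {
        for (const std::string& s : servers)
            std::cout << s << ' ';
        std::cout << '\n';
        // Rotate left by one: ABC -> BCA -> CAB -> back to ABC.
        std::rotate(servers.begin(), servers.begin() + 1, servers.end());
    }
    return 0;
}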

> 
> > Then all of the transactions from that point would take place between the
> > brute and the slave:)
> 
> Currently just all the "allocate" transactions -- I haven't written my
> ACK reflector yet, so all ACKs go direct to the ACK master.
> 
> > The slaves would each be delegated large chunks of the keyspace,
> 
> No -- the slaves will not "be delegated" (as in pre-assigned address space),
> they will just ask the master for it as they need it.
> Sure, they'll do it in reasonable sized chunks, but not (2**16)/16 ....

Actually this is what I meant, that they would ask for it.  My idea would
be that when a slave is asked for keyspace and doesn't have enough left,
it would ask the master for the next large chunk.  That way the central
server never has to deal with small requests.
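
Something along these lines, purely as a sketch -- BIG_CHUNK, SMALL_CHUNK
and askMaster() are invented names, not anything from the real servers:

#include <cstdint>
#include <iostream>
#include <utility>

typedef std::pair<std::uint64_t, std::uint64_t> Range;  // [begin, end) of keys

const std::uint64_t BIG_CHUNK   = std::uint64_t(1) << 40; // asked of the master
const std::uint64_t SMALL_CHUNK = std::uint64_t(1) << 28; // handed to a brute

// Stub standing in for the real request to the central server: it just
// carves consecutive big chunks out of the keyspace.
Range askMaster(std::uint64_t size) {
    static std::uint64_t next = 0;
    Range r(next, next + size);
    next += size;
    return r;
}

class Slave {
    std::uint64_t next_, end_;  // part of our big chunk not yet doled out
public:
    Slave() : next_(0), end_(0) {}

    // Hand one brute a small chunk; refill from the master only when the
    // local pool can't cover the request.  (Any leftover smaller than
    // SMALL_CHUNK is simply dropped to keep the sketch short.)
    Range allocate() {
        if (end_ - next_ < SMALL_CHUNK) {
            Range big = askMaster(BIG_CHUNK);
            next_ = big.first;
            end_  = big.second;
        }
        Range r(next_, next_ + SMALL_CHUNK);
        next_ += SMALL_CHUNK;
        return r;
    }
};

int main() {
    Slave s;
    for (int i = 0; i < 3; ++i) {
        Range r = s.allocate();
        std::cout << "brute gets [" << r.first << ", " << r.second << ")\n";
    }
    return 0;
}

The point is just that the master only ever sees one request per big
chunk, however many brutes the slave is feeding.
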
> 
> > but not keyspace/numslaves.  Maybe 1/16th or something like that, and could
> > ask for more when their space was depleted.  Periodically, perhaps when
> > requesting more key space, and/or when a timer pops, the slaves could report
> > results.
> 
> Nah - results still go direct pro tem.

You might consider it:)

> 
> > What I mean is that every so often they'd report even if they didn't need
> > more keyspace yet, iff they had any new stats to report.
> 
> Sure.
> 
> > The nice thing here is that the work of the master and of the slaves is
> > almost the same.
> 
> You got it !
> 
> > The slaves don't have to do the initial assignment of slave,
> (slave -> slaves I assume)
> 
> > and the master doesn't have to report results, but everything else
> > is the same.
> 
> Yup -- code sharing !
> 
> > With careful design you could use the same daemon for both
> > with a command line argument to tell it if it was the master (-m) or the
> > slave (-s).
> 
> Well, not even that !
> 
> The slaves don't have the config file with the key info in it ...
> 
> > Of course I'm sure you see that this allows you to add as
> > many levels as you want to the hierarchy.
> 
> Indeed.
> 
> BUT ....
> 
> These cache servers are asking for non trivial amounts of keyspace.
> As such there should not be *too* many, and they need to be "managed" ...
> If one crashes, the logs need to be scanned to see how to restart it (so that
> it starts by doling out the segments that it had not yet sub-doled to its clients).

Quite right.  I'd assume that the first level list of slaves would be
controlled by you.  If you're careful enough a slave should be able to go
down and come back up without losing any state at all.  All brutes/slaves
talking to it should be able to continue on with no loss of information.
I would put an exponential backoff on the time between retries for the
brutes talking to the slaves as well as the slaves talking to the master.
(With a limit on the amount of backoff of course.)  If you can't talk to
someone you might sleep for 8 seconds and retry; if you still couldn't,
back off to 16, then 32, then 64, then 128, etc.  The maximum might be
somewhere around ten or fifteen minutes, so that within ten or fifteen
minutes of a crashed server being restarted everything would be humming
along again with no manual intervention required on any of the lower
levels.
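
In code the retry loop is only a few lines.  A sketch, with tryConnect()
standing in for whatever "talk to the next level up" really is:

#include <iostream>
#include <unistd.h>   // sleep()

// Stub: fails the first few times just so the backoff is visible.
bool tryConnect() {
    static int attempts = 0;
    return ++attempts > 3;
}

void connectWithBackoff() {
    unsigned delay = 8;              // seconds before the first retry
    const unsigned cap = 15 * 60;    // never back off more than ~15 minutes

    while (!tryConnect()) {
        std::cout << "no answer, retrying in " << delay << "s\n";
        sleep(delay);
        delay *= 2;                  // 8, 16, 32, 64, 128, ...
        if (delay > cap) delay = cap;
    }
    std::cout << "connected\n";
}

int main() { connectWithBackoff(); }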

> 
> > A slave doesn't care whether a slave or a brute talks to it.
> 
> Indeed -- that's how it was designed ...
> 
> However, note that with big cache servers (as opposed to Local CPU Farm servers
> where all clients are the same "ID") reports of sub-allocation have to be
> passed back to the root :-(

That's a good point.  If you want to keep track of who has what, it all has
to get back to the root eventually.  If you use my idea of having the
slaves cache the information until the next time they'd be contacting the
root anyway (or whenever the timer elapses), then you greatly cut down
on the number of small packets seen by the root (and by each level of
slaves when there's a hierarchy).
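
A sketch of what I mean -- Report and sendToRoot() are made-up
placeholders for whatever the real report format and upstream call are:

#include <ctime>
#include <iostream>
#include <vector>

struct Report { unsigned keysDone; };  // stand-in for a real result report

// Stub for the real "send results upstream" call.
void sendToRoot(const std::vector<Report>& batch) {
    std::cout << "flushing " << batch.size() << " reports to the root\n";
}

class ReportCache {
    std::vector<Report> pending_;
    std::time_t lastFlush_;
    static const int FLUSH_INTERVAL = 600;  // seconds between forced flushes
public:
    ReportCache() : lastFlush_(std::time(0)) {}

    void add(const Report& r) { pending_.push_back(r); }

    // Called whenever we contact the root for more keyspace anyway, and
    // also from a periodic timer.
    void maybeFlush(bool contactingRootAnyway) {
        bool timerPopped = std::time(0) - lastFlush_ >= FLUSH_INTERVAL;
        if (pending_.empty() || !(contactingRootAnyway || timerPopped))
            return;
        sendToRoot(pending_);
        pending_.clear();
        lastFlush_ = std::time(0);
    }
};

int main() {
    ReportCache cache;
    Report r = { 1000 };
    cache.add(r);
    cache.add(r);
    cache.maybeFlush(false);  // nothing sent: no reason to talk to the root yet
    cache.maybeFlush(true);   // piggy-backed on a keyspace request: flush
    return 0;
}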

> 
> > You could make the slave software available as well, and a site with many
> > machines could have only the slave contact the master to get assigned a
> > slave to talk to, and could configure all of their brutes to talk to
> > their own slave.
> 
> Indeed -- the Local CPU Farm cache server is just about ready for ALPHA testers
> 
> > Software like this is easy to write, (and fun), and we should go for it:)
> 
> Done ...
> 
> > Of course I do everything like this in C++, but I suppose perl would be the
> > most portable.  It's a shame it's so aesthetically displeasing to the eye.
> 
> Yeah -- but being based on C, C++ didn't stand much chance ...
> 
> > perl's never a pleasant read.
> 
> ... but better than C++ -- sure.

<snicker;>  Sounds like we could have a religious war if we wanted, but this
isn't the right list for it:)  (sigh;) Maybe we should move this portion of
the discussion to alt.my.favorite.language.rules.and.yours.of.course.sucks.
I'm not really a snob about it...I still think COBOL's great for some
purposes, I just prefer coding in C++.

Patrick
   _______________________________________________________________________
  /  These opinions are mine, and not Verity's (except by coincidence;).  \
 |                                                       (\                |
 |  Patrick J. Horgan         Verity Inc.                 \\    Have       |
 |  [email protected]        1550 Plymouth Street         \\  _ Sword     | 
 |  Phone : (415)960-7600     Mountain View                 \\/    Will    | 
 |  FAX   : (415)960-7750     California 94303             _/\\     Travel | 
  \___________________________________________________________\)__________/