Alpha-release of the new scheduler for the ZMailer (2.99.17)
Hello folks,
I have now stored zmailer-2.99.17-951001.tar.gz
at ftp://ftp.funet.fi/pub/unix/mail/zmailer/,
and I expect it to appear quickly at a couple of
other sites that mirror the material -- I never
remember which ones, but when you CWD into that
directory, the ".message" -file will tell you.
This source dump contains my running version from
the nic.funet.fi "cluster", and to quote Linus Torvalds:
"This is another from 'ItWorksForMe(TM)' series"
(he was talking about Linux-1.3.30, though ;-) )
The new scheduler will not run as the default
scheduler; however, if you do "make install-bin"
(or "make install", which installs more..)
you will get the ability to execute:
/etc/zmailer nscheduler
to start it.
Here are a couple of outputs from the mailq port
on a running system -- tails of those outputs;
this extra output will disappear some day, I plan..
smtp/*/0 Threads: 10/24 Procs: 6/12 Idle-procs: 0/0
local/*/0 Threads: 1/1 Procs: 0/0 Idle-procs: 0/0
smtp/*funet.fi/0 Threads: 1/2 Procs: 0/0 Idle-procs: 0/0
smtp/*.fi/0 Threads: 12/22 Procs: 7/11 Idle-procs: 0/0
smtp/*/0 Threads: 10/24 Procs: 4/11 Idle-procs: 0/0
local/*/0 Threads: 1/1 Procs: 0/0 Idle-procs: 0/0
smtp/*funet.fi/0 Threads: 1/2 Procs: 0/0 Idle-procs: 0/0
smtp/*.fi/0 Threads: 12/22 Procs: 8/11 Idle-procs: 0/0
If you look at the source (threads.c: thread_report() )
you will see what those numbers mean. I don't like those
disparities; however, things do run... (dir: scheduler-new/)
In the source directory there are also some pictures
and other explanations of how things work -- if you
can understand my notes on it. Ah yes, THE NEW SCHEDULER
IS INCOMPATIBLE WITH THE OLD SCHEDULER CONFIGURATION FILE!
In the source directory there is also my current
"scheduler-new.conf" -file for you to try your hand at.
See the file CONFIGURING in the scheduler-new/ -directory
for additional comments and explanations.
When you tune your scheduler configuration, I suggest
you consider somewhat longer "idlemax=" -times than my
sample has -- I had to test the idle cleanup code, after
all... (times in the 5-30 minute range are ok)
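As a rough illustration only -- the clause layout below is an
assumption modeled on the sample file, and the selector and other
parameter names shown are hypothetical, so check CONFIGURING and
"scheduler-new.conf" for the real syntax -- a tuned entry raising
idlemax into that range might look like:

    # hypothetical scheduler-new.conf fragment (layout assumed,
    # not authoritative); idlemax= raised into the suggested
    # 5-30 minute range instead of a short test value
    smtp/*
            idlemax=10m     # keep idle transport processes for 10 minutes

The point is simply that short idlemax values were there to
exercise the idle cleanup code, not as a recommendation.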
One of the gremlins griping at me is the way a failing-
to-connect SMTP queue wastes time on retries, but I will
leave that for a bit later. (I now have some 240
messages in such a state.. Most of them to a single site
with circa a dozen machines.)
Keep the reports coming in; you may encounter something
I haven't -- a different way to cause a coredump, for example..
/Matti Aarnio <mea@nic.funet.fi>