
Re: Qmail or Zmailer



> We are looking to build a high performance mail-cluster for
> pop3, imap4 and smtp. The system uses an LDAP database to validate
> and deliver the users mail.
> 
> The system needs to scale to around 500,000 users - the questions I have:
> 
> 1. Which is better for this, Qmail or ZMailer?

	I use Zmailer in my work to do this type of thing.

> 2. What would be the best way of creating a high availability service -
>    multiple POP3, multiple SMTP, etc pointing to common NFS mount?

	Running the message store on NFS is awkward.
	There are many issues with attribute caching (it must be
	disabled), and the NFS storage then becomes a single point
	of failure.

	I would do:

	port-redirector (a Cisco device) -> pop/imap proxy -> real mailboxes

	That way, if one of the pop/imap proxies dies, customers still
	get service from the others.  If one of the message store boxes
	dies, then only that subset of customers loses service.

> 3. Has anyone found any major issues with Zmailer which stop it scaling?

	Yes and no.  For historical reasons we used the "/etc/passwd"
	file, and it definitely scales better with ZMailer than with
	sendmail.  (It was a FLAT file, with no db version of it!)

	Since then we have written a wrapper library to do getpwnam()
	calls against our backend database (not LDAP per se, but that
	can be done as well), and the system scaled far better.

	One other issue is that the traditional "UNIX mailbox" format
	isn't well suited to *large* message store files.  Also, in
	most UNIXes the mailbox spool area tends to be a *flat*
	directory, which scales truly badly.

	In ZMailer's ``mailbox'' program there are options:
		-D[D]
		-P[P]
	which alleviate this problem by deriving a mailbox SUBDIRECTORY
	from the username.  See the man page for more explanation.

	In most UNIXes, with fewer than about 230 entries in a
	directory (files or subdirs, it doesn't matter), performance
	is far better than with more.

> Thanks
> Eden

/Matti Aarnio <mea@nic.funet.fi>