Performance with large printer database
pspinler at yahoo.com
Fri Jan 28 10:23:12 PST 2005
Michael Sweet wrote:
> Patrick Spinler wrote:
>> Please advise me. What, if anything, can I do?
> First, unless your servers are on a different network, it will be
> MUCH more efficient (read: less network traffic, faster client
> responses) if you don't use BrowsePoll on the clients, and instead
> just use BrowseAddress on the servers.
Unfortunately, our clients are spread across a number (more than a
dozen) of different subnetworks. :-(
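For reference, the server-side approach Michael describes would look something like this in cupsd.conf (a sketch; these are the CUPS 1.x browsing directives, and the broadcast address shown is an example, not from the thread):

```
# cupsd.conf on each print server (sketch).
# BrowseAddress broadcasts printer state outward, so clients on the
# reachable subnets need no BrowsePoll lines at all.
Browsing On
BrowseProtocols cups
BrowseAddress @LOCAL            # broadcast on all local interfaces
# or target a specific subnet's broadcast address, e.g.:
# BrowseAddress 192.168.1.255:631
```

With routed subnets, plain broadcasts won't cross the router, which is exactly the limitation described above.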
> Second, tune the BrowseInterval and BrowseTimeout parameters; the
> defaults are usable up to about 100 printers (which yields 3-4
> printer updates per second). If you have the defaults set on the
> client, then they will be processing 800 printer updates per
> second, which *will* bog the systems down.
I significantly increased these parameters (to 1 hour and 4 hours,
respectively). Unfortunately, I still see unacceptably slow
performance on the clients.
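Concretely, those values correspond to something like the following cupsd.conf settings (a sketch; the directive names are the standard CUPS browsing parameters, and the seconds are my conversion of the hours quoted above):

```
# cupsd.conf browsing timing (sketch).
# BrowseInterval: how often a server rebroadcasts each printer.
# BrowseTimeout:  how long a remote printer is kept after its last
#                 update; it must exceed BrowseInterval, or printers
#                 will flicker in and out of the client's list.
BrowseInterval 3600     # 1 hour between updates
BrowseTimeout 14400     # 4 hours before dropping a stale printer
```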
Within 10 seconds of restarting a test client, the client cupsd
maxes out the CPU and becomes unresponsive to basic requests like
'lpstat -a'. Issuing such a command hangs for hours; in fact, I've
never seen it complete.
Reducing the client from 4 BrowsePoll statements to 1 still has the
same effect, just more delayed: after a restart, cupsd takes
approximately 3 minutes to reach 100% CPU consumption and become
unresponsive. At a cursory glance, this appears to be some kind of
significant scalability issue.
I'd offer to capture an strace/truss log for you, but I fear the disk
space required. Is there a debugging build, or some other client-level
debugging or testing I could do for you?
> Third, be prepared to need a lot of RAM on the clients; 4 servers
> with 6000 printers apiece will yield 30000 printers on each client
> if you have implicit classes enabled, which will need ~60MB of memory
> to track them (the current overhead is ~2k per printer...)
> You can split the printers up (e.g. so that each server handles half
> of the available printers) to realize similar redundancy and load-
> balancing but cut the memory, CPU, and network overhead on the
> clients.
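Michael's memory figure follows directly from the per-printer overhead he quotes; a quick back-of-the-envelope check (the 4 servers, 6000 printers, and ~2 KB/printer numbers are all from his message, the assumption that every server mirrors the same 6000 queues is how the 30000 total is reached):

```python
# Estimate client-side cupsd memory for browsed printers (sketch,
# using the figures from the thread).
servers = 4
printers = 6000        # unique queues, mirrored on every server
kb_per_printer = 2     # approximate cupsd overhead per record

remote = servers * printers   # one record per server copy: 24000
implicit = printers           # one implicit class per unique queue
total = remote + implicit     # 30000 records on each client
mem_mb = total * kb_per_printer / 1024
print(total, round(mem_mb, 1))
```

Splitting the queues so each server advertises half of them halves the remote-record count, which is the saving Michael describes.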