Printing delay

Kurt Pfeifle k1pfeifle at gmx.net
Tue Jul 3 01:43:33 PDT 2007


angelb wrote:
>>> All clients are setup to browsepoll the servers. And no NFS holds.
>> How many clients altogether?!
>>
>> BrowsePoll may be putting some heavy load on a server, if the number of 
>> clients is more than a couple of handfuls. At least in CUPS 1.1.x that was 
>> certainly the case (dunno what some of the new improvements in 1.2.x may have 
>> achieved here). Of course, this also depends on BrowseInterval settings on 
>> the respective clients...
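>>
>> [For illustration, a polling client would typically have something like
>> this in its own cupsd.conf (server name taken from your config below;
>> a larger BrowseInterval means fewer polls hitting the server):
>>
>>   BrowsePoll server1:631
>>   BrowseInterval 300
>>
>> ...so the interval is one knob you can turn on the client side.]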
>>
>> Certainly, with "browse push" from the server side (that is, "Browsing On" 
>> + "BrowseAddress @LOCAL") you should be able to take away some of the server 
>> load.
>>
>> If your network setup does not allow for Browsing by all clients, you 
>> may be able to achieve the same effect by combining BrowsePoll by a few 
>> clients with BrowseRelay?
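>>
>> [Roughly, a "browse push" setup on the server side would look like this
>> in the server's cupsd.conf:
>>
>>   Browsing On
>>   BrowseProtocols cups
>>   BrowseAddress @LOCAL
>>
>> and a relay (the addresses below are made up, adjust them to your
>> subnets) would be something like
>>
>>   BrowseRelay 192.168.1.0/24 192.168.2.255
>>
>> which forwards browse packets arriving from the first network to the
>> broadcast address of the second.]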
>>
>>> cupsd.conf (servers):
>>>
>>> ServerName server1
>>> LogLevel debug
>> LogLevel debug increases the work for the server compared to LogLevel info....
>>
>>> AccessLog /var/log/cups/access_log
>>> ErrorLog /var/log/cups/error_log
>>> PageLog /var/log/page_log
>>> MaxLogSize 20M
>>> MaxJobs 2000
>> With more than 2000 print queues, your MaxJobs 2000 setting is already less 
>> than 1 job per queue on average.
>>
>> What is your typical concurrent jobs figure ("lpstat -o"), and what is 
>> happening at peak printing hours?
>>
>> [To find out, you could run something like:
>>
>>   while true; do
>>       # one timestamped line with the current number of queued jobs
>>       echo "$(date):  $(lpstat -o | wc -l)" \
>>           | tee -a /tmp/concurrent_cupsjobs.log
>>       sleep 300
>>   done
>>
>> for 24 hours, which logs a snapshot of the job load every 5 minutes...]
>>
>> MaxJobs is also the setting that influences what is kept in the job history. So if 
>> you typically don't have more than 100 concurrent jobs at any one time (including 
>> peak times), you may be able to bring down the strain on the server by using 
>> MaxJobs 200.
>>
>>> MaxPrinterHistory 20
>> I don't know this setting; it's the first time I've heard of it (if it exists at all).
>>
>> I only know MaxClients, MaxClientsPerHost, MaxJobs, MaxJobsPerPrinter and 
>> MaxJobsPerUser.
>>
>> Did you mean to use MaxJobsPerPrinter?
>>
>>>            #   on username/password.
>>> User lp
>>> Group cups
>>> MaxClients 2048
>>> MaxClientsPerHost 1024
>>> RIPCache 512m
>> AFAIK, RIPCache is *per filter process*; but I'm not really sure now. Maybe 
>> someone who is sure can correct me if I'm wrong?
>>
>> Do you *really* have jobs so large that they require 512 MB of RIPCache? 
>> (it should not do much harm though, because normally CUPS should utilize
>> this maximum only if it needs it for any single filter process...)
>>
>>> SystemGroup lp
>>> Listen 631
>>> Browsing On
>>> BrowsePort 631
>>> BrowseProtocols cups
>>> BrowseInterval 300
>>> BrowseTimeout 14400
>> Oh, so you have "browse push" as well as BrowsePoll by the clients??
>>
>> If your server processes a fair number of jobs and queries originating 
>> from localhost, you may be able to bring down their response times
>> by adding a unix domain socket for local IPP communications (simply add
>> a line "Listen /a/path/to/a/file.sock").
>>
>> ----
>>
>> That said, I don't think it is all that bad to get 2600+ printers listed 
>> by the CUPS server within 10 seconds by running a query from a remote
>> client....
>>
>>
>>
> 
> Thanks for your comments, I appreciate it.

I am beginning to doubt that appreciation now; see below

> The major concern I have is the client systems affected by backlogs of
> print jobs because CUPS is taking longer to process a request now.
> 
> When we had about 1600 printer queues and I submitted a job from the
> shell using lpr or lp, it took about 4-5 seconds. Now that we have
> 2600, it's taking 10-13 seconds... it certainly wasn't a concern then,
> but it is now.
> 
> For example, CUPS is taking 11 seconds to process a 184-byte file. If
> you have an app server that's busy, you can see how quickly it gets
> into a backlog. When I say busy, I mean a client always has a print job
> every second (very easy to do with 2600+ printers).
> 
>> timex lp -dricohc test.txt
> 
> real 11.28
> user 2.30
> sys  0.05
> 
> test.txt, which is 184 bytes, is:
>   #####  ######   ####    #####
>     #    #       #          #
>     #    #####    ####      #
>     #    #            #     #
>     #    #       #    #     #
>     #    ######   ####      #
> 
> 
> Is there anything I can tune in the client or server configuration files
> to help alleviate the delay?
> 
> 
> Thanks,
> Angel
> 

Hmmm....

  ...you do not use quote markup to differentiate my comments from your
     response;
  ...you do not respond to my direct questions;

it is very difficult to communicate efficiently this way....

So please, once more (some of them):

   * How many clients altogether?
   * What is your typical concurrent jobs figure ("lpstat -o"), and what
     is happening at peak printing hours?
   * Did you ever consider (and test) what I nicknamed "browse push"?

I do not think that the "11 seconds processing time" for a "184 Bytes job"
will be much different from that of an "18.4 KBytes job". The reason is
that most likely, your 184 Bytes are spending 10.9 seconds waiting before
cupsd even responds: waiting time that cupsd probably needs to process all
the "BrowsePoll" requests (but that's only speculation, because you did
not answer my question about the total number of polling clients, their
polling frequency and why you are mixing BrowsePoll with "browse push"....)

Your 11 seconds very likely are not "processing time" (after which the
job is completed), but "response time" (after which the job is fully
submitted).
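
A quick way to check this (a rough sketch; I'm reusing the queue name from
your timex example):

   # response time: how long until cupsd has accepted the job
   timex lp -d ricohc test.txt ; date

   # processing time: loop until the job has left the queue again
   while [ -n "$(lpstat -o ricohc)" ]; do sleep 1; done ; date

If the timex figure hovers around 11 seconds but the job disappears from
the queue almost immediately after it was accepted, the delay is in the
submission, not in the filtering/printing.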

You can test some of my statements above easily, and find out how your
184 Bytes of text bring *more* load on the server than 18441 Bytes of
PostScript (see the size of /usr/share/cups/data/testprint.ps): the
text file causes the texttops+pstops filters to run, whereas the PS job
only requires one filter, pstops.... Yet, (I bet,) the PS job will not
take considerably longer to be accepted than your ASCII text job.
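
Something along these lines would show it (the testprint.ps path may vary
between CUPS versions; queue name as in your example):

   timex lp -d ricohc /usr/share/cups/data/testprint.ps   # 18441 Bytes of PS
   timex lp -d ricohc test.txt                            # your 184 Bytes of text

If both "real" figures end up in the same 10+ second range, the time is
spent waiting for cupsd to respond, not inside the filters.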

-- 
Kurt Pfeifle
System & Network Printing Consultant ---- Linux/Unix/Windows/Samba/CUPS
Infotec Deutschland GmbH  .....................  Hedelfinger Strasse 58
A RICOH Company  ...........................  D-70327 Stuttgart/Germany



