Bad performance with large job history
jan.bronee at borealisgroup.com
Wed Dec 15 07:02:40 PST 2004
Hi,
We are using CUPS 1.22 on Linux as a central spool server for our ERP system. It is working perfectly (thanks), except for one small detail: spooling and query performance degrades when many control files reside in /var/spool/cups/c* . Once a job has been spooled to the server, the actual printing is fast enough.
To work around this, we delete the control files for every completed print job at midnight and restart the CUPS daemon.
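The nightly purge can be sketched roughly as below (a sketch only: it assumes completed jobs leave c* control files directly under /var/spool/cups, and that the restart command matches your init scripts):

```shell
# Hypothetical helper for the midnight cron job described above.
# Deletes only the c* job control files at the top of the spool
# directory, leaving data files (d*) and subdirectories untouched.
purge_control_files() {
    find "$1" -maxdepth 1 -type f -name 'c*' -delete
}

# On the real server, from cron at midnight:
#   purge_control_files /var/spool/cups
#   /etc/init.d/cups restart
```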
But still, by the afternoon (approx. 4000 jobs printed), spooling performance gets very bad.
To see the symptom, lpq -a can be used.
In the morning:
lylnx004:/var/log/cups # time lpq -a
real 0m0.846s <--- :-)
user 0m0.060s
sys 0m0.090s
In the afternoon:
lylnx004:/var/log/cups # time lpq -a
real 0m9.946s <--- :-(
user 0m0.060s
sys 0m0.090s
lylnx004:/var/log/cups # find /var/spool/cups/c* | wc -l
4279
lylnx004:/var/log/cups # strace -c lpq -a
execve("/usr/bin/lpq", ["lpq", "-a"], [/* 57 vars */]) = 0
Rank Owner Job File(s) Total Size
active MZSUTEE 65891 000UVsTx.P50 2048 bytes
active STAGRER 68094 000fLWRF.P50 6144 bytes
1st STAGRER 68103 000fMcdN.P50 6144 bytes
2nd STAGRER 68111 000fNAjR.P50 6144 bytes
active STOLSSE 68844 000i3GGv.P50 531456 bytes
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.09 9.214627 924 9977 recv
0.84 0.077994 8 10026 time
0.05 0.004209 9 461 brk
0.01 0.000655 17 39 send
0.00 0.000444 12 37 read
0.00 0.000331 10 33 13 open
0.00 0.000202 9 23 close
0.00 0.000185 8 23 old_mmap
0.00 0.000147 49 3 connect
0.00 0.000132 17 8 write
0.00 0.000128 6 21 fstat64
0.00 0.000123 10 12 munmap
0.00 0.000095 9 11 mmap2
0.00 0.000081 8 10 mprotect
0.00 0.000051 6 8 fcntl64
0.00 0.000045 15 3 socket
0.00 0.000030 8 4 setsockopt
0.00 0.000020 7 3 gettimeofday
0.00 0.000017 9 2 rt_sigaction
0.00 0.000014 14 1 access
0.00 0.000008 8 1 uname
0.00 0.000007 7 1 1 stat64
0.00 0.000005 5 1 getpid
------ ----------- ----------- --------- --------- ----------------
100.00 9.299550 20708 14 total
lylnx005:/var/log/cups # strace -c lpstat -v LY01
execve("/usr/bin/lpstat", ["lpstat", "-v", "LY01"], [/* 56 vars */]) = 0
device for LY01: socket://ly0e03bw.mignetwork.net.:9100/
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
97.64 13.360278 783 17055 select
1.69 0.231649 14 17068 recv
0.62 0.085399 5 17114 time
0.02 0.003294 8 433 brk
0.00 0.000523 13 41 read
0.00 0.000498 13 39 send
0.00 0.000325 10 34 13 open
0.00 0.000267 53 5 connect
0.00 0.000209 9 24 close
0.00 0.000182 8 24 old_mmap
0.00 0.000134 10 14 munmap
0.00 0.000128 6 22 fstat64
0.00 0.000090 8 12 mmap2
0.00 0.000080 8 10 mprotect
0.00 0.000080 16 5 socket
0.00 0.000074 5 14 fcntl64
0.00 0.000049 6 8 setsockopt
0.00 0.000047 16 3 write
0.00 0.000026 7 4 rt_sigaction
0.00 0.000022 4 5 gettimeofday
0.00 0.000008 8 1 uname
0.00 0.000006 6 1 1 stat64
0.00 0.000002 2 1 getpid
------ ----------- ----------- --------- --------- ----------------
100.00 13.683370 51937 14 total
This is not new to 1.22; we have had this problem since 1.15 (where we started using CUPS).
Any good ideas to solve this?
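For reference, the job history can apparently also be capped in cupsd.conf itself, which might make the manual purge unnecessary. A sketch of the relevant directives (names taken from the CUPS documentation; untested on this setup, so verify against your version):

```
# Keep at most 500 completed jobs in the history.
MaxJobs 500
# Do not preserve job history (control files) after completion.
PreserveJobHistory No
# Do not preserve job data files after completion.
PreserveJobFiles No
```

Restart cupsd after editing for the change to take effect.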
Cheers
Jan