[cups] Why do filter failures stop jobs instead of aborting them?

hw hw at gc-24.de
Fri Aug 11 11:13:20 PDT 2017


Bryan Mason wrote:
> Hello,
>
> Please consider the following situation that is occurring at one of my
> customers' sites (and reproduced in my test lab):
>
>   * A "two-tier" server configuration (local CUPS service "client"
>     sending jobs to a remote CUPS print "server".
>
>   * A filter failure on the master print server causes the job to
>     fail.  The job is put into "processing-stopped" state.  This
>     allows additional jobs to be processed in the queue on the master
>     print server.  However . . .
>
>   * The IPP backend on the client never exits because the job isn't
>     canceled, aborted, or completed on the server.  As a result, the
>     queue on the client is stuck -- no more jobs can be processed on
>     the client even though the job on the server is able to accept new
>     jobs.
>
> This leads me to the following questions:
>
>  a) Why are jobs "stopped" instead of "aborted" when a filter fails?
>
>  b) Depending on the answer to the above would it be possible to
>     "abort" the job (set the state to IPP_JOB_ABORTED) when a filter
>     failure occurs?  This would allow the client IPP backend to exit
>     and the queue to become un-blocked.
>
> The definition of the "processing-stopped" state in RFC 2911 says that
> jobs in the "stopped" state will resume processing as soon as the
> reasons for the stoppage are no longer present.  This seems to imply
> that "processing-stopped" is a temporary state that will correct
> itself.

Any state can only be considered temporary in hindsight, after it has
actually changed.

When a printer is out of paper or experiences a paper jam, it's in a state
that doesn't correct itself.
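For what it's worth, such conditions are visible from any client through
the printer-state and printer-state-reasons attributes.  Below is a
minimal sketch using the public CUPS API; it assumes a reachable cupsd,
is compiled with -lcups, and is only an illustration, not part of cupsd:

/*
 * List each destination's printer-state and printer-state-reasons,
 * so conditions like media-empty or media-jam become visible.
 *
 * Compile with: gcc -o states states.c -lcups
 */
#include <stdio.h>
#include <cups/cups.h>

int main(void)
{
  cups_dest_t *dests;
  int         num_dests = cupsGetDests(&dests);

  for (int i = 0; i < num_dests; i++)
  {
    const char *state   = cupsGetOption("printer-state",
                                        dests[i].num_options,
                                        dests[i].options);
    const char *reasons = cupsGetOption("printer-state-reasons",
                                        dests[i].num_options,
                                        dests[i].options);

    /* printer-state: 3 = idle, 4 = processing, 5 = stopped */
    printf("%s: state=%s reasons=%s\n", dests[i].name,
           state ? state : "?", reasons ? reasons : "none");
  }

  cupsFreeDests(num_dests, dests);
  return 0;
}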

Not all print jobs can be resubmitted.

> Most of the filter failures that I've seen are caused by a
> missing file (hpijs/foomatic/etc.), by a bug in one of the filters, or
> by a malformed input file (bad PDF, etc.).  All of these require some
> sort of corrective action and you need to resubmit the print job after
> the problem is corrected.
>
> To me, setting the state to "aborted" when a filter fails seems to make
> more sense.  I admit that I'm probably looking at this through a pretty
> narrow lens right now, so I understand if there are broader reasons for
> not doing this.

If the job was aborted, the user at the client might figure that they should
resubmit the job, which would be pointless and would only consume resources.

The problem is more that the server keeps processing additional jobs after
the filter has failed.  That said, there shouldn't be an inconsistency like
the one you describe, with the server processing jobs while the client is
not.
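Until the two sides agree, a watchdog on the client could paper over the
inconsistency.  The following is only a hypothetical sketch, not anything
CUPS itself provides: it polls the job list on the remote server and flags
jobs that are stopped or aborted so the matching local job can be canceled
and the client's IPP backend can exit.  The host name
"printserver.example.com", the queue name "someprinter", and the local job
ID are placeholders:

/*
 * Poll the remote server's queue and report stuck jobs.
 *
 * Compile with: gcc -o unblock unblock.c -lcups
 */
#include <stdio.h>
#include <cups/cups.h>

int main(void)
{
  /* Connect to the remote CUPS server (placeholder host/port). */
  http_t *http = httpConnectEncrypt("printserver.example.com", 631,
                                    cupsEncryption());
  if (!http)
  {
    fprintf(stderr, "Cannot contact server: %s\n", cupsLastErrorString());
    return 1;
  }

  cups_job_t *jobs;
  int num_jobs = cupsGetJobs2(http, &jobs, "someprinter", 0,
                              CUPS_WHICHJOBS_ALL);

  for (int i = 0; i < num_jobs; i++)
  {
    if (jobs[i].state == IPP_JOB_STOPPED ||
        jobs[i].state == IPP_JOB_ABORTED)
    {
      printf("Remote job %d (%s) is stuck; cancel the local copy.\n",
             jobs[i].id, jobs[i].title);
      /*
       * Mapping the remote job ID to the local one depends on the
       * site's setup, so the actual cancel is left commented out:
       * cupsCancelJob("someprinter", local_job_id);
       */
    }
  }

  cupsFreeJobs(num_jobs, jobs);
  httpClose(http);
  return 0;
}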

Why aren't the jobs queued on the clients until the problem at the server has
been resolved?

>
> Thanks in advance for any help you can provide.
>
> Bryan Mason
> Senior Software Maintenance Engineer
> Red Hat, Inc.
>


