To avoid concurrency issues (e.g. the `manageprocess` command updating a job at the same time), the admin can set a flag on the job (similar to how `keep_all_data` operates as a flag to control the `manageprocess` command's behavior). When the flag is present, the command will run a new `cancel` method on the job's TaskManager instances, and then mark the job as ended. This should operate only on incomplete jobs, and it should run before creating new jobs. (In the admin, the new flag can be disabled for complete jobs.)
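The ordering above (handle cancellations, then create new jobs) could look like the following sketch. The names `cancel_requested`, `Job.end()` and the `tasks` attribute are assumptions for illustration, not the project's actual API:

```python
# Sketch of the flag check in a manageprocess-style command.
# cancel_requested, end() and tasks are hypothetical names.
from dataclasses import dataclass, field


class TaskManager:
    def cancel(self):
        """No-op by default; see below for the Collect case."""


@dataclass
class Job:
    cancel_requested: bool = False  # the new flag, set from the admin
    ended: bool = False
    tasks: list = field(default_factory=list)

    def end(self):
        self.ended = True


def manageprocess(jobs):
    # Handle cancellations first, so a cancelled job is never
    # picked up again while new jobs are being created.
    for job in jobs:
        if not job.ended and job.cancel_requested:
            for task in job.tasks:
                task.cancel()
            job.end()
    # ... then create new jobs as usual ...
```

Because the command (not the admin) performs the cancellation, the admin only ever toggles a flag, avoiding the two processes mutating the same job concurrently.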
The `cancel` method would be a no-op for all task managers except Collect, since the others use RabbitMQ. For Collect, it would cancel the job in Scrapyd (moving that code out of `wipe`).
The `wipe` method should call `cancel`, since the intention when wiping is never to leave the task running. (This also avoids having to remember to call `cancel` from every place `wipe` is called.)
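A minimal sketch of this arrangement, assuming hypothetical attribute names (`scrapyd_url`, `project`, `scrapyd_job_id`); Scrapyd's cancel endpoint is `POST /cancel.json` with `project` and `job` parameters:

```python
# cancel() is a no-op on the base class; Collect overrides it to stop
# the Scrapyd job; wipe() always calls cancel() first.
from urllib.parse import urlencode
from urllib.request import urlopen


class TaskManager:
    def cancel(self):
        """No-op by default; only Collect has an external process to stop."""

    def wipe(self):
        # Never leave the task running when wiping its data.
        self.cancel()
        # ... delete the task's data ...


class Collect(TaskManager):
    def __init__(self, scrapyd_url, project, scrapyd_job_id):
        self.scrapyd_url = scrapyd_url
        self.project = project
        self.scrapyd_job_id = scrapyd_job_id

    def cancel(self):
        # Cancel the job in Scrapyd (code moved here out of wipe()).
        data = urlencode({"project": self.project, "job": self.scrapyd_job_id})
        urlopen(f"{self.scrapyd_url}/cancel.json", data=data.encode())
```

Putting the `cancel` call inside `wipe` keeps the invariant in one place instead of at every call site.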
As part of this issue, make the `status` field uneditable on the Job and Task admins. Update the docs that refer to `set.*Status`. Mention that cancelling a job also cancels the Scrapyd job.
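Making `status` read-only in a Django admin is a one-line configuration per admin class; the admin class names here are assumptions:

```python
# Config fragment: mark status as read-only on both admins.
from django.contrib import admin


class JobAdmin(admin.ModelAdmin):
    readonly_fields = ("status",)


class TaskAdmin(admin.ModelAdmin):
    readonly_fields = ("status",)
```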
Update the docs that refer to this issue (#352) and the surrounding text.
Add relevant tests first (#128).