geopyspark driver deployments typically have a limit on the maximum number of parallel jobs, and going over that limit might cause something like:
ERROR openeo.extra.job_management: 400 ConcurrentJobLimit: Job was not started because concurrent job limit (10) is reached.
the client could handle this more intelligently in some situations,
e.g. back off for a while (instead of failing hard) when creating/starting new jobs in the job manager loop
Note however that ConcurrentJobLimit is not an official error code (yet) in the openEO API, so pushing for that might be part of the work here.
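The back-off idea could be sketched roughly as follows. This is a hypothetical helper, not existing client code: the `start_with_backoff` name, the retry parameters, and the local `ApiError` stand-in (which mimics an API error with a `.code` attribute, like the client's `OpenEoApiError`) are all assumptions, and "ConcurrentJobLimit" is not a standardized error code.

```python
import time


# Local stand-in for an openEO API error carrying an error code;
# in practice this would be the client's OpenEoApiError.
class ApiError(Exception):
    def __init__(self, message, code=None):
        super().__init__(message)
        self.code = code


def start_with_backoff(start, *, max_attempts=5, initial_delay=60.0, sleep=time.sleep):
    """Call `start` (e.g. a job's start method), retrying with exponential
    back-off when the backend reports a concurrent job limit,
    instead of failing hard on the first attempt.
    """
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return start()
        except ApiError as e:
            # Only back off on the (non-standard) concurrent job limit error;
            # re-raise anything else, or when attempts are exhausted.
            if e.code != "ConcurrentJobLimit" or attempt == max_attempts:
                raise
            sleep(delay)
            delay *= 2  # exponential back-off
```

In the job manager loop, such a helper (or equivalent inline logic) would let queued jobs wait for a free slot rather than aborting the whole run on one 400 response.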
related to