What happened?
My scenario is Spark on Kubernetes. The driver's business logic has already completed, which then triggers the driver pod to exit. The pod shutdown times out (we set deletionGracePeriodSeconds: 30), so the driver pod exits with code 143. The Spark Operator treats any non-zero exit code as a failure, so the SparkApplication is marked as Failed.
However, an exit code of 143 normally means the container/process was terminated by a SIGTERM (143 = 128 + 15), not that it crashed because of an application error. Therefore, could we handle a driver pod exit code of 143 as a DriverStateCompleted status?
sparkapplication.go
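
For illustration, here is a minimal sketch of the mapping being proposed, assuming the operator derives the driver state from the terminated container's exit code. The type and function names below are my own and are not the operator's actual code in sparkapplication.go:

```go
package main

import "fmt"

// DriverState stands in for the kind of state enum the operator keeps for the driver.
type DriverState string

const (
	DriverStateCompleted DriverState = "COMPLETED"
	DriverStateFailed    DriverState = "FAILED"
)

// sigtermExitCode is 128 + 15 (SIGTERM), the code a container reports when it
// is gracefully terminated rather than crashing.
const sigtermExitCode = 143

// driverStateFromExitCode maps a terminated driver container's exit code to a
// driver state: 0 is a normal completion, 143 is treated as a graceful
// termination (SIGTERM) and therefore also as completed, anything else fails.
func driverStateFromExitCode(exitCode int32) DriverState {
	switch exitCode {
	case 0, sigtermExitCode:
		return DriverStateCompleted
	default:
		return DriverStateFailed
	}
}

func main() {
	for _, code := range []int32{0, 1, 143} {
		fmt.Printf("exit code %d -> %s\n", code, driverStateFromExitCode(code))
	}
}
```

Treating 143 the same as 0 only changes how graceful terminations are classified; a driver that actually failed and then received a SIGTERM during cleanup might still need to be distinguished, for example by also looking at the application state the driver itself reported.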
