Failed Spark Job
A Failed Spark Job is a Spark Job that terminates without completing its computation successfully, for example because an executor or the driver node runs out of heap space, the cluster runs out of disk space, or the job stalls indefinitely without making progress.
- See: Successful Spark Job.
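Whether a job failed can also be observed programmatically through Spark's listener API. Below is a minimal Scala sketch (not from the cited source): it assumes an existing SparkContext named sc, and it treats any job result other than JobSucceeded as a failure, since Spark's JobFailed result class is private to the framework.

```scala
import org.apache.spark.scheduler.{JobSucceeded, SparkListener, SparkListenerJobEnd}

// Minimal listener that reports whether each Spark job succeeded or failed.
// JobFailed is private[spark], so failure is detected as "not JobSucceeded".
class JobFailureListener extends SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    jobEnd.jobResult match {
      case JobSucceeded => println(s"Job ${jobEnd.jobId} succeeded")
      case failure      => println(s"Job ${jobEnd.jobId} FAILED: $failure")
    }
  }
}

// Registration on an existing SparkContext (here assumed to be `sc`):
// sc.addSparkListener(new JobFailureListener)
```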
References
2017
- https://www.indix.com/blog/engineering/lessons-from-using-spark-to-process-large-amounts-of-data-part-i/
- QUOTE: Spark gives a considerable boost in performance owing to keeping intermediate files in memory, but at the same time you might find yourself dealing with failing jobs due to insufficient memory. We burned a few fingers in the process, but we learned from our mistakes, and this series is an attempt to consolidate all our learning, so that you can avoid running into the same pitfalls.
- Issue: Your application runs out of heap space on the executors.
- Issue: Your application runs out of heap space on the driver node.
- Issue: Cluster runs out of disk space.
- Issue: Application takes too long to complete or is indefinitely stuck and does not show progress.
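The memory, disk, and progress issues listed above are often mitigated through configuration before changing application code. The following Scala sketch is an illustrative assumption, not tuning advice from the quoted article: the keys are standard Spark configuration properties, but every value is a placeholder that depends on cluster size and workload.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative settings for the failure modes listed above; the values
// are placeholder assumptions to adapt per cluster, not recommendations
// from the quoted article.
val spark = SparkSession.builder()
  .appName("tuned-spark-job")
  // Executor heap OOM: give each executor more heap.
  .config("spark.executor.memory", "8g")
  // Driver heap OOM: raise driver heap. Note: in practice this must be
  // set before the driver JVM starts, e.g. spark-submit --driver-memory 4g.
  .config("spark.driver.memory", "4g")
  // OOM during wide shuffles: more (hence smaller) shuffle partitions.
  .config("spark.sql.shuffle.partitions", "400")
  // Disk exhaustion: spill shuffle/scratch data to volumes with headroom.
  .config("spark.local.dir", "/mnt/disk1,/mnt/disk2")
  // Stragglers and stalled progress: speculatively re-launch slow tasks.
  .config("spark.speculation", "true")
  .getOrCreate()
```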