deeplearning4j: SparkDl4jMultiLayer: LimitedContextPool - Can't allocate new context OR cudaStreamSynchronize failed

I’m creating this issue as per @raver119’s request

Problem

I’m experiencing a strange issue when using org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer. I’m training on a Spark cluster with 8 slaves, all GPU machines with 4 GPUs each (NVIDIA Tesla P40, 24 GB per GPU). Training and testing work fine until epoch 30. In epoch 31 training goes fine, but when testing starts (using org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.doEvaluation(org.apache.spark.api.java.JavaRDD<org.nd4j.linalg.dataset.DataSet>, int, int, T...)), the whole process freezes: it doesn’t crash, it simply hangs, without any failing Spark tasks. The logs of all the slaves keep printing this message over and over:

29 Mar 2019;09:11:28.442 [Executor task launch worker for task 19812] WARN  o.n.j.a.c.impl.LimitedContextPool - Can't allocate new context, sleeping...

The spark-submit command I’m using to launch the training is as follows (I’m hiding the accountName and accountKey for obvious reasons):

nohup /opt/spark-2.3.2-bin-hadoop2.7/bin/spark-submit --master spark://12.0.0.4:7077 --class org.daanvdn.sandbox.spark.SparkTrainerCli --driver-memory 400G --executor-memory 400G --executor-cores 4 --conf '-Dorg.bytedeco.javacpp.maxbytes=96G -Djava.io.tmpdir=/mnt/tmp' --conf 'spark.driver.extraJavaOptions=-Dorg.bytedeco.javacpp.maxbytes=96G -Djava.io.tmpdir=/mnt/tmp' /home/my-host/sparkjars/dl4j-azure-spark-1.0.0-SNAPSHOT-bin.jar -epochs 100 -batch 128 -avg 2 -prefetch 1 -container some-container -trainFolder train -testFolder test -evalMode ALL_EPOCHS -rddTrainingApproach Export -train true -evalNumWorkers 1 -accountName #### -accountKey #### | tee nohup.out
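A side note on the command above (an observation of mine, not something from the logs): the first --conf value starts directly with the -D options and has no Spark property key in front of them (presumably spark.executor.extraJavaOptions= was intended). Spark parses --conf values as key=value, so it will record this under the bogus key "-Dorg.bytedeco.javacpp.maxbytes" and the executor JVMs will never receive the maxbytes setting. A minimal stdlib-only sanity check for this class of mistake (hypothetical helper, not part of Spark):

```java
import java.util.Arrays;
import java.util.List;

public class ConfCheck {

    /** A spark-submit --conf argument should look like "spark.<something>=<value>". */
    static boolean looksValid(String conf) {
        int eq = conf.indexOf('=');
        return eq > 0 && conf.substring(0, eq).startsWith("spark.");
    }

    public static void main(String[] args) {
        // The two --conf values from the spark-submit command above.
        List<String> confs = Arrays.asList(
                "-Dorg.bytedeco.javacpp.maxbytes=96G -Djava.io.tmpdir=/mnt/tmp",
                "spark.driver.extraJavaOptions=-Dorg.bytedeco.javacpp.maxbytes=96G -Djava.io.tmpdir=/mnt/tmp");
        for (String c : confs) {
            System.out.println((looksValid(c) ? "ok:      " : "SUSPECT: ") + c);
        }
    }
}
```

Running this flags the first --conf value and accepts the second, which matches the asymmetry in the command above: the driver gets its JVM options, the executors likely do not.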

As far as spark-configuration is concerned, my setup is as follows:

  1. spark-defaults.conf: has the following properties:
    spark.core.connection.ack.wait.timeout 600s
    spark.executor.heartbeatInterval 300
    spark.network.timeout 301
    spark.worker.cleanup.enabled true
    spark.worker.cleanup.interval 600
    spark.storage.cleanupFilesAfterExecutorExit      true
  2. spark-env.sh has SPARK_WORKER_INSTANCES=1, so I’m actually only using 1 of the 4 GPUs per machine.

Even after waiting for a couple of hours, the slaves keep hanging, so there is no option but to kill the process. When I rerun the experiment, the same thing happens every time: the application always freezes in the evaluation phase of epoch 31.

I’ve uploaded the code I’m running to GitHub. The main class is https://github.com/daanvdn/dl4j-azure-minimal-example/blob/master/dl4j-azure-spark/src/main/java/org/daanvdn/sandbox/spark/SparkTrainerCli.java. The MultiLayerConfiguration is added as a Java resource.

In an attempt to fix this freezing problem, I played around with some Spark settings, most importantly SPARK_WORKER_INSTANCES and executor-cores. I tried changing SPARK_WORKER_INSTANCES to 4 and setting executor-cores to 4 as well, so that each slave uses 4 GPUs instead of 1. This needs to happen anyway, because there is no reason not to use all the GPUs of the slaves.

With these changes the original freezing problem goes away. However, the application now crashes rather than hangs. An error message that is always printed in this situation is cudaStreamSynchronize(...) failed, as below:

2019-04-02 10:06:49 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[ADSI prefetch thread,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: Error loading DataSet
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator$AsyncPrefetchThread.run(AsyncDataSetIterator.java:445)
Caused by: java.lang.RuntimeException: Error loading DataSet
	at org.nd4j.linalg.dataset.DataSet.load(DataSet.java:275)
	at org.deeplearning4j.api.loader.impl.SerializedDataSetLoader.load(SerializedDataSetLoader.java:36)
	at org.deeplearning4j.api.loader.impl.SerializedDataSetLoader.load(SerializedDataSetLoader.java:31)
	at org.deeplearning4j.spark.impl.common.LoadDataSetFunction.call(LoadDataSetFunction.java:43)
	at org.deeplearning4j.spark.impl.common.LoadDataSetFunction.call(LoadDataSetFunction.java:34)
	at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
	at org.deeplearning4j.datasets.iterator.IteratorDataSetIterator.next(IteratorDataSetIterator.java:77)
	at org.deeplearning4j.datasets.iterator.IteratorDataSetIterator.next(IteratorDataSetIterator.java:62)
	at org.deeplearning4j.datasets.iterator.IteratorDataSetIterator.next(IteratorDataSetIterator.java:36)
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator$AsyncPrefetchThread.run(AsyncDataSetIterator.java:419)
	Suppressed: java.lang.RuntimeException: cudaStreamSynchronize(...) failed
		at org.nd4j.nativeblas.Nd4jCuda$NativeOps.streamSynchronize(Native Method)
		at org.nd4j.linalg.jcublas.context.CudaContext.syncSpecialStream(CudaContext.java:112)
		at org.nd4j.linalg.jcublas.ops.executioner.CudaGridExecutioner.flushQueueBlocking(CudaGridExecutioner.java:969)
		at org.nd4j.linalg.jcublas.ops.executioner.CudaGridExecutioner.commit(CudaGridExecutioner.java:1097)
		at org.nd4j.linalg.memory.abstracts.Nd4jWorkspace.close(Nd4jWorkspace.java:600)
		at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator$AsyncPrefetchThread.run(AsyncDataSetIterator.java:423)
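For readers unfamiliar with the "ADSI prefetch thread" in the trace above: it is the background loader of DL4J's AsyncDataSetIterator. Below is a simplified, stdlib-only sketch of that producer/consumer pattern, with hypothetical names and NOT the actual DL4J implementation. It illustrates why a RuntimeException in the prefetch thread takes down (or stalls) the whole pipeline: the consumer depends entirely on the producer feeding a bounded queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Simplified sketch of an async-prefetch iterator: a background thread
 * preloads "batches" into a bounded queue, and the training/evaluation
 * thread drains it. If the producer dies (e.g. "Error loading DataSet"),
 * the consumer either blocks forever or must propagate the failure.
 */
public class PrefetchSketch {
    private static final String POISON = "<end-of-data>";

    static int runOnce(int numBatches, int prefetchDepth) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(prefetchDepth);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < numBatches; i++) {
                    queue.put("batch-" + i);   // blocks while the queue is full
                }
                queue.put(POISON);             // signal normal completion
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "prefetch-thread-sketch");
        producer.start();

        int consumed = 0;
        String batch;
        while (!(batch = queue.take()).equals(POISON)) {
            consumed++;                        // stand-in for fit()/doEvaluation() work
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed " + runOnce(5, 2) + " batches");
    }
}
```

The -prefetch 1 CLI flag in the spark-submit command presumably controls the equivalent of prefetchDepth here.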

This cudaStreamSynchronize(...) failed error message is always preceded by one of two errors/warnings:

  1. It can be preceded by the warning ConvolutionLayer:347 - CuDNN execution failed - falling back on built-in implementation, as below (cf. link):
2019-04-02 10:06:49 WARN  ConvolutionLayer:347 - CuDNN execution failed - falling back on built-in implementation
java.lang.RuntimeException: cudaStreamSynchronize(...) failed
	at org.nd4j.nativeblas.Nd4jCuda$NativeOps.streamSynchronize(Native Method)
	at org.nd4j.jita.allocator.pointers.cuda.cudaStream_t.synchronize(cudaStream_t.java:38)
	at org.nd4j.jita.handler.impl.CudaZeroHandler.alloc(CudaZeroHandler.java:331)
	at org.nd4j.jita.allocator.impl.AtomicAllocator.allocateMemory(AtomicAllocator.java:488)
	at org.nd4j.jita.allocator.impl.AtomicAllocator.allocateMemory(AtomicAllocator.java:416)
	at org.nd4j.linalg.jcublas.buffer.BaseCudaDataBuffer.<init>(BaseCudaDataBuffer.java:218)
	at org.nd4j.linalg.jcublas.buffer.BaseCudaDataBuffer.<init>(BaseCudaDataBuffer.java:312)
	at org.nd4j.linalg.jcublas.buffer.CudaLongDataBuffer.<init>(CudaLongDataBuffer.java:133)
	at org.nd4j.linalg.jcublas.buffer.factory.CudaDataBufferFactory.createLong(CudaDataBufferFactory.java:831)
	at org.nd4j.linalg.jcublas.buffer.factory.CudaDataBufferFactory.createLong(CudaDataBufferFactory.java:826)
	at org.nd4j.linalg.factory.Nd4j.createBufferDetached(Nd4j.java:1462)
	at org.nd4j.linalg.api.shape.Shape.createShapeInformation(Shape.java:3223)
	at org.nd4j.linalg.api.ndarray.BaseShapeInfoProvider.createShapeInformation(BaseShapeInfoProvider.java:86)
	at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:86)
	at org.nd4j.jita.constant.ProtectedCudaShapeInfoProvider.createShapeInformation(ProtectedCudaShapeInfoProvider.java:69)
	at org.nd4j.linalg.jcublas.CachedShapeInfoProvider.createShapeInformation(CachedShapeInfoProvider.java:47)
	at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:175)
	at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:285)
	at org.nd4j.linalg.jcublas.JCublasNDArray.<init>(JCublasNDArray.java:120)
	at org.nd4j.linalg.jcublas.JCublasNDArrayFactory.createUninitialized(JCublasNDArrayFactory.java:165)
	at org.nd4j.linalg.factory.Nd4j.createUninitialized(Nd4j.java:4442)
	at org.nd4j.linalg.workspace.BaseWorkspaceMgr.createUninitialized(BaseWorkspaceMgr.java:288)
	at org.nd4j.linalg.workspace.BaseWorkspaceMgr.createUninitialized(BaseWorkspaceMgr.java:276)
	at org.deeplearning4j.nn.layers.convolution.CudnnConvolutionHelper.preOutput(CudnnConvolutionHelper.java:364)
	at org.deeplearning4j.nn.layers.convolution.ConvolutionLayer.preOutput(ConvolutionLayer.java:342)
	at org.deeplearning4j.nn.layers.convolution.ConvolutionLayer.activate(ConvolutionLayer.java:411)
	at org.deeplearning4j.nn.layers.AbstractLayer.activate(AbstractLayer.java:259)
	at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.outputOfLayerDetached(MultiLayerNetwork.java:1211)
	at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.doEvaluationHelper(MultiLayerNetwork.java:3304)
	at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.doEvaluation(MultiLayerNetwork.java:3256)
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner.doEval(EvaluationRunner.java:189)
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner.execute(EvaluationRunner.java:149)
	at org.deeplearning4j.spark.impl.multilayer.evaluation.IEvaluateFlatMapFunctionAdapter.call(IEvaluateFlatMapFunction.java:89)
	at org.deeplearning4j.spark.impl.multilayer.evaluation.IEvaluateFlatMapFunctionAdapter.call(IEvaluateFlatMapFunction.java:56)
	at org.datavec.spark.transform.BaseFlatMapFunctionAdaptee.call(BaseFlatMapFunctionAdaptee.java:40)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-04-02 10:06:49 ERROR Executor:91 - Exception in task 59.0 in stage 21.0 (TID 1175)
java.util.concurrent.ExecutionException: java.lang.RuntimeException: cudaStreamSynchronize(...) failed
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner$EvaluationFuture.get(EvaluationRunner.java:242)
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner$EvaluationFuture.get(EvaluationRunner.java:214)
	at org.deeplearning4j.spark.impl.multilayer.evaluation.IEvaluateFlatMapFunctionAdapter.call(IEvaluateFlatMapFunction.java:92)
	at org.deeplearning4j.spark.impl.multilayer.evaluation.IEvaluateFlatMapFunctionAdapter.call(IEvaluateFlatMapFunction.java:56)
	at org.datavec.spark.transform.BaseFlatMapFunctionAdaptee.call(BaseFlatMapFunctionAdaptee.java:40)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: cudaStreamSynchronize(...) failed
	at org.nd4j.nativeblas.Nd4jCuda$NativeOps.streamSynchronize(Native Method)
	at org.nd4j.linalg.jcublas.context.CudaContext.syncSpecialStream(CudaContext.java:112)
	at org.nd4j.linalg.jcublas.ops.executioner.CudaGridExecutioner.flushQueueBlocking(CudaGridExecutioner.java:969)
	at org.nd4j.linalg.jcublas.ops.executioner.CudaGridExecutioner.commit(CudaGridExecutioner.java:1097)
	at org.nd4j.linalg.dataset.DataSet.<init>(DataSet.java:117)
	at org.nd4j.linalg.dataset.DataSet.<init>(DataSet.java:100)
	at org.nd4j.linalg.dataset.DataSet.<init>(DataSet.java:73)
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.<init>(AsyncDataSetIterator.java:54)
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.<init>(AsyncDataSetIterator.java:114)
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.<init>(AsyncDataSetIterator.java:109)
	at org.deeplearning4j.datasets.iterator.AsyncDataSetIterator.<init>(AsyncDataSetIterator.java:94)
	at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.doEvaluationHelper(MultiLayerNetwork.java:3268)
	at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.doEvaluation(MultiLayerNetwork.java:3256)
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner.doEval(EvaluationRunner.java:189)
	at org.deeplearning4j.spark.impl.evaluation.EvaluationRunner.execute(EvaluationRunner.java:162)
	at org.deeplearning4j.spark.impl.multilayer.evaluation.IEvaluateFlatMapFunctionAdapter.call(IEvaluateFlatMapFunction.java:89)
	... 22 more
  2. It can also be preceded by the error message Out of [DEVICE] memory, host memory will be used instead, like here (cf. link):
2019-04-02 10:06:49 WARN  CudaZeroHandler:339 - Out of [DEVICE] memory, host memory will be used instead: deviceId: [3], requested bytes: [4]
2019-04-02 10:06:49 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[UniGC thread 3,5,main]
java.lang.RuntimeException: cudaEventSynchronize(...) failed
	at org.nd4j.nativeblas.Nd4jCuda$NativeOps.eventSynchronize(Native Method)
	at org.nd4j.jita.allocator.pointers.cuda.cudaEvent_t.synchronize(cudaEvent_t.java:69)
	at org.nd4j.jita.flow.impl.SynchronousFlowController.waitTillReleased(SynchronousFlowController.java:234)
	at org.nd4j.jita.flow.impl.GridFlowController.waitTillReleased(GridFlowController.java:78)
	at org.nd4j.jita.allocator.impl.AtomicAllocator$UnifiedGarbageCollectorThread.run(AtomicAllocator.java:716)
2019-04-02 10:06:49 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[UniGC thread 2,5,main]
java.lang.RuntimeException: cudaEventSynchronize(...) failed
	at org.nd4j.nativeblas.Nd4jCuda$NativeOps.eventSynchronize(Native Method)
	at org.nd4j.jita.allocator.pointers.cuda.cudaEvent_t.synchronize(cudaEvent_t.java:69)
	at org.nd4j.jita.flow.impl.SynchronousFlowController.waitTillFinished(SynchronousFlowController.java:134)
	at org.nd4j.jita.flow.impl.GridFlowController.waitTillFinished(GridFlowController.java:63)
	at org.nd4j.jita.flow.impl.SynchronousFlowController.waitTillReleased(SynchronousFlowController.java:231)
	at org.nd4j.jita.flow.impl.GridFlowController.waitTillReleased(GridFlowController.java:78)
	at org.nd4j.jita.allocator.impl.AtomicAllocator$UnifiedGarbageCollectorThread.run(AtomicAllocator.java:716)
2019-04-02 10:06:49 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[UniGC thread 1,5,main]
java.lang.RuntimeException: cudaEventSynchronize(...) failed
	at org.nd4j.nativeblas.Nd4jCuda$NativeOps.eventSynchronize(Native Method)
	at org.nd4j.jita.allocator.pointers.cuda.cudaEvent_t.synchronize(cudaEvent_t.java:69)
	at org.nd4j.jita.flow.impl.SynchronousFlowController.waitTillReleased(SynchronousFlowController.java:234)
	at org.nd4j.jita.flow.impl.GridFlowController.waitTillReleased(GridFlowController.java:78)
	at org.nd4j.jita.allocator.impl.AtomicAllocator$UnifiedGarbageCollectorThread.run(AtomicAllocator.java:716)

Regarding this Out of [DEVICE] memory error, I am puzzled. I don’t understand why the GPU would go OOM: when I monitor GPU memory usage with nvidia-smi, it never goes above 1 GB per GPU. Even though every GPU has 24 GB, the GPU still crashes. I’ve made a screen recording of the nvidia-smi monitoring for one of the slaves. From minute 4 onwards you can see that even though it’s using about 1 GB of memory, the process associated with the GPUs crashes and nvidia-smi reports "no running process found". After a couple of seconds a new process is spawned and attached to the GPUs. This new process runs until about 9m50s and then the same thing happens again: GPU memory usage is about 1 GB, the process crashes, a new process is created and attached to the GPUs, and so on. I’ve also played around with --executor-memory, trying values between 100 GB and 400 GB, but this has no influence.
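One thing that may be worth verifying here (an assumption on my side, not something from the logs): whether -Dorg.bytedeco.javacpp.maxbytes actually reached the executor JVMs, since executor JVM options are easy to misconfigure in spark-submit and JavaCPP falls back to its defaults when the property is absent. A minimal stdlib-only check of what a JVM actually sees (hypothetical helper; on the cluster you would call report() from inside a task closure and log the result on an executor, not the driver):

```java
public class JvmOptCheck {

    /** Reports the JavaCPP off-heap memory properties visible in this JVM. */
    static String report() {
        String[] keys = {
                "org.bytedeco.javacpp.maxbytes",
                "org.bytedeco.javacpp.maxphysicalbytes"
        };
        StringBuilder sb = new StringBuilder();
        for (String k : keys) {
            // "unset" means the -D flag never reached this JVM
            sb.append(k).append(" = ").append(System.getProperty(k, "unset")).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(report());
    }
}
```

If an executor reports "unset" while the driver reports 96G, the off-heap limits in play are not the ones configured on the command line.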

As a final important remark, it should be noted that when I run the same code with -evalMode set to NEVER (keeping SPARK_WORKER_INSTANCES=4 and --executor-cores 4), no crashing or freezing happens. In other words: if I only call org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.fit(java.lang.String), there is no problem. As soon as I start doing evaluation by calling SparkDl4jMultiLayer#doEvaluation, it crashes. With regard to the evaluation, I also tried playing around with the evalNumWorkers setting (using values from 1 to 4), but this has no effect.

As for the data I’m training on: it is hosted on Azure in a Block Blob Storage container. I’ve added two zipped folders with a sample of my train and test data to this git repo.

Version information

  • Ubuntu 16.04.6 LTS

  • CUDA version: release 9.2, V9.2.148 (nvcc: NVIDIA (R) Cuda compiler driver, Copyright (c) 2005-2018 NVIDIA Corporation, built on Tue_Jun_12_23:07:04_CDT_2018)

  • dl4j version: 1.0.0-beta3

  • dl4j.spark.version: 1.0.0-beta3_spark_2

  • java version: 1.8

  • spark binaries: spark-2.3.2-bin-hadoop2.7

  • nvidia driver version: 418.40.04

Hardware information

  • cluster of 8 slave machines, 1 driver machine
  • every machine is hosted in Azure and has:
    • 4x NVIDIA Tesla P40 GPUs (24 GB each)
    • 24 vcpus
    • 448 GB host memory

About this issue

  • Original URL
  • State: open
  • Created 5 years ago
  • Comments: 42 (18 by maintainers)

Most upvoted comments

Nope, I still suspect they might be related. Let me implement a workaround for the first issue, and we’ll see where we are with the second issue after that.