tensorflow: OS X segfault on import

OS X 10.11.2, with CUDA:

/Developer/NVIDIA/CUDA-7.5/lib/libcudadevrt.a
/Developer/NVIDIA/CUDA-7.5/lib/libcudart.7.5.dylib
/Developer/NVIDIA/CUDA-7.5/lib/libcudart.dylib -> libcudart.7.5.dylib
/Developer/NVIDIA/CUDA-7.5/lib/libcudart_static.a
/Developer/NVIDIA/CUDA-7.5/lib/libcudnn.5.dylib
/Developer/NVIDIA/CUDA-7.5/lib/libcudnn.dylib -> libcudnn.5.dylib
/Developer/NVIDIA/CUDA-7.5/lib/libcudnn_static.a

Tensorflow was built according to https://medium.com/@fabmilo/how-to-compile-tensorflow-with-cuda-support-on-osx-fd27108e27e1#.v8ibv617m, the main differences being that the CUDA toolkit was installed from the NVIDIA installer rather than via brew cask install cuda, and that homebrew Python 3.5 was used instead of Anaconda Python.

In other words:

  1. Install CUDA toolkit.
  2. Download cudnn-7.5-osx-x64-v5.0-rc.tgz and move files to /Developer/NVIDIA/CUDA-7.5/{include,lib}
  3. Install bazel 0.2.1 via brew.
  4. Create a Python 3.5 virtualenv and install numpy 1.11 into it so tensorflow can build against it(?).
  5. Clone tensorflow repo.
  6. Build with:
PYTHON_BIN_PATH="/Users/pikeas/.virtualenvs/hnn/bin/python" CUDA_TOOLKIT_PATH="/Developer/NVIDIA/CUDA-7.5" CUDNN_INSTALL_PATH="/Developer/NVIDIA/CUDA-7.5" TF_UNOFFICIAL_SETTING=1 TF_NEED_CUDA=1 TF_CUDA_COMPUTE_CAPABILITIES="3.0" TF_CUDNN_VERSION="5" TF_CUDA_VERSION="7.5" TF_CUDA_VERSION_TOOLKIT=7.5 ./configure
bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package
  7. export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-7.5/lib
  8. Install the built tensorflow-0.8.0-py3-none-any.whl into the virtualenv.
  9. import tensorflow fails with:
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.7.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.5.dylib locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.7.5.dylib locally
[1]    78583 segmentation fault  python
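A quick sanity check before digging deeper is to inspect the built extension's load commands and reproduce the crash in isolation. A minimal sketch (the site-packages path below is an assumption about this particular virtualenv layout, adjust as needed):

# Assumed location of the compiled extension inside the virtualenv (adjust for your install).
PYWRAP=~/.virtualenvs/hnn/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so

# List the dylibs the extension was linked against.
otool -L "$PYWRAP"

# Reproduce the failing import with the CUDA lib directory on the loader path.
DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-7.5/lib python -c "import tensorflow"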

I’ve tried removing scipy, as suggested in the recent similar Linux issue, but it didn’t help.

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 19 (7 by maintainers)

Most upvoted comments

It also worked for me on a MacBook Pro (Retina, 15-inch, Late 2013) with TensorFlow r0.11 + CUDA 8.0 + cuDNN v8 + Anaconda3.

I just used the command: sudo ln -s /usr/local/cuda/lib/libcuda.dylib /usr/local/cuda/lib/libcuda.1.dylib

and now it is working, with no more Segmentation fault: 11.
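If you try this, a quick way to verify the symlink before re-running TensorFlow (paths assume NVIDIA's default /usr/local/cuda install location):

# Confirm both names now point at the same driver library.
ls -l /usr/local/cuda/lib/libcuda.dylib /usr/local/cuda/lib/libcuda.1.dylib

# Retry the import that previously segfaulted.
python -c "import tensorflow as tf; print(tf.__version__)"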

I experienced a very similar issue with the prebuilt TensorFlow 0.10 binary and CUDA installed according to the instructions.

It turns out TensorFlow wants to load libcuda.1.dylib, not the libcuda.dylib that NVIDIA’s CUDA installer provides. Manually creating a symbolic link from libcuda.dylib to libcuda.1.dylib in /usr/local/cuda/lib fixed the issue for me.
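One way to confirm which name the dynamic loader can actually resolve, using ctypes purely as a stand-in for TensorFlow's own dlopen call (this assumes the CUDA lib directory is already on DYLD_LIBRARY_PATH or in a default search location):

# Fails with an OSError unless dyld can locate libcuda.1.dylib.
python -c 'import ctypes; ctypes.CDLL("libcuda.1.dylib"); print("libcuda.1.dylib found")'

# Compare against the name NVIDIA's installer actually ships.
python -c 'import ctypes; ctypes.CDLL("libcuda.dylib"); print("libcuda.dylib found")'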

Solved!

The problem with my earlier gist: dtruss sometimes truncated its output. When I re-ran it, I got a slightly longer trace that mentioned libcuda.dylib. That file is not in /Developer/NVIDIA/CUDA-7.5/lib, but it is in /usr/local/cuda/lib.

In other words, the solution is extending my DYLD_LIBRARY_PATH export: export DYLD_LIBRARY_PATH="/Developer/NVIDIA/CUDA-7.5/lib:/usr/local/cuda/lib"

Please note that I used stock everything: CUDA from NVIDIA, Python from Homebrew, numpy from pip, TensorFlow built from source. As far as I can tell, anyone building under Mac OS X El Capitan, and very likely Yosemite/Mavericks as well, will hit the same problem.
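As a final check that the GPU path is actually wired up, and not just that the import no longer crashes, opening a session should log the dso_loader lines followed by the detected GPU (this assumes the DYLD_LIBRARY_PATH export above is in place in the current shell):

# With a working CUDA setup, creating a session logs the CUDA libraries
# being opened and the GPU device TensorFlow found.
python -c "import tensorflow as tf; tf.Session()"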

I strongly urge the project to create and maintain official OS X build instructions.