auto-sklearn: Segmentation fault

When running the sample code:

import autosklearn.classification
import sklearn.model_selection
import sklearn.datasets
import sklearn.metrics
X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
automl = autosklearn.classification.AutoSklearnClassifier()
automl.fit(X_train, y_train)
y_hat = automl.predict(X_test)
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))

Something goes wrong on the automl.fit line: I get a “Segmentation fault” and the Python console exits.

>automl.fit(X_train, y_train)
/home/work/.pyenv/versions/py3_env/lib/python3.6/site-packages/autosklearn/evaluation/train_evaluator.py:197: RuntimeWarning: Mean of empty slice
  Y_train_pred = np.nanmean(Y_train_pred_full, axis=0)
[WARNING] [2019-05-29 16:26:32,672:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
[WARNING] [2019-05-29 16:26:32,680:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
Segmentation fault
(py3_env) [work@*** ~]$ [WARNING] [2019-05-29 16:26:34,685:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
[WARNING] [2019-05-29 16:26:36,689:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!

Has anyone come across this problem?
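One way to narrow this down, assuming the crash comes from a compiled extension such as pyrfr (which later comments in this thread also point at), is to import that extension on its own in a fresh interpreter. A minimal sketch:

# Minimal isolation check (assumption: the segfault originates in the
# compiled pyrfr/SWIG bindings rather than in auto-sklearn's Python code).
# If this import alone crashes, the problem is in the pyrfr build.
import pyrfr
import pyrfr.regression  # SWIG-generated random forest bindings used by SMAC

print("pyrfr imported from:", pyrfr.__file__)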

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 8
  • Comments: 28 (9 by maintainers)

Most upvoted comments

I also encountered this problem, and unfortunately the solution above doesn’t seem to work for me.

Had the same issue on Arch Linux and have been up- and downgrading pyrfr, swig, etc. without success. Finally, here is a working procedure that combines the answers above (not sure if the first pacman/ln step is required):

pacman -S swig3
ln -s /usr/bin/swig-3 /usr/bin/swig

conda create -n test python=3.6 gxx_linux-64 gcc_linux-64 swig=3 && conda activate test

curl https://raw.githubusercontent.com/automl/auto-sklearn/master/requirements.txt | xargs -n 1 -L 1 pip install
pip install auto-sklearn

It seems to work, thx~

  1. Install Anaconda3: wget https://repo.anaconda.com/archive/Anaconda3-2019.03-Linux-x86_64.sh and run it.
  2. conda activate base, then conda install gxx_linux-64 gcc_linux-64 swig:
(base) [work@** anaconda3]$ which python
~/anaconda3/bin/python
(base) [work@** anaconda3]$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
  3. pip install auto-sklearn
  4. Run the sample code; about an hour later it finished with a result of 0.9911 (a quicker smoke-test variant is sketched below).
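The hour-long run comes from the default search budget; for a quicker smoke test, the time limits can be shortened. A small sketch (the 300/30-second values are arbitrary, not taken from this thread):

import autosklearn.classification
import sklearn.datasets
import sklearn.metrics
import sklearn.model_selection

X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)

# Cap the whole search at 5 minutes and each candidate model at 30 seconds,
# instead of the default one-hour budget, just to confirm fit() no longer crashes.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,
    per_run_time_limit=30,
)
automl.fit(X_train, y_train)
y_hat = automl.predict(X_test)
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))

The accuracy from such a short run will likely be lower than the 0.9911 obtained after the full hour.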

os info:

LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.2.1511 (Core) 
Release:        7.2.1511
Codename:       Core

python version:

Python 3.6.6 (default, Oct 24 2018, 14:44:15) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 

Installation: a) compiled and installed swig-4.0.0.tar.gz from source (make / make install), b) pip install auto-sklearn.

pip list:

Package                       Version 
----------------------------- --------
alabaster                     0.7.12  
auto-sklearn                  0.5.2   
Babel                         2.7.0   
certifi                       2019.3.9
chardet                       3.0.4   
ConfigSpace                   0.4.10  
Cython                        0.29.9  
docutils                      0.14    
idna                          2.8     
imagesize                     1.1.0   
Jinja2                        2.10.1  
joblib                        0.13.2  
liac-arff                     2.4.0   
lockfile                      0.12.2  
MarkupSafe                    1.1.1   
nose                          1.3.7   
numpy                         1.16.4  
packaging                     19.0    
pandas                        0.24.2  
pip                           10.0.1  
psutil                        5.6.2   
Pygments                      2.4.2   
pynisher                      0.5.0   
pyparsing                     2.4.0   
pyrfr                         0.7.4   
python-dateutil               2.8.0   
pytz                          2019.1  
PyYAML                        5.1     
requests                      2.22.0  
scikit-learn                  0.19.2  
scipy                         1.3.0   
setuptools                    39.0.1  
six                           1.12.0  
smac                          0.8.0   
snowballstemmer               1.2.1   
Sphinx                        2.0.1   
sphinx-rtd-theme              0.4.3   
sphinxcontrib-applehelp       1.0.1   
sphinxcontrib-devhelp         1.0.1   
sphinxcontrib-htmlhelp        1.0.2   
sphinxcontrib-jsmath          1.0.1   
sphinxcontrib-qthelp          1.0.2   
sphinxcontrib-serializinghtml 1.1.3   
typing                        3.6.6   
urllib3                       1.25.3  
xgboost                       0.90   

test.py:

import autosklearn.classification
import sklearn.model_selection
import sklearn.datasets
import sklearn.metrics
X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
automl = autosklearn.classification.AutoSklearnClassifier()
automl.fit(X_train, y_train)
y_hat = automl.predict(X_test)
print("Accuracy score", sklearn.metrics.accuracy_score(y_test, y_hat))

Ran nohup python test.py &; the nohup output file contains:

/home/work/.pyenv/versions/py3_env/lib/python3.6/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
  from numpy.core.umath_tests import inner1d
/home/work/.pyenv/versions/py3_env/lib/python3.6/site-packages/autosklearn/evaluation/train_evaluator.py:197: RuntimeWarning: Mean of empty slice
  Y_train_pred = np.nanmean(Y_train_pred_full, axis=0)
[WARNING] [2019-05-30 11:11:50,584:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
[WARNING] [2019-05-30 11:11:50,592:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
[WARNING] [2019-05-30 11:11:52,596:EnsembleBuilder(1):d74860caaa557f473ce23908ff7ba369] No models better than random - using Dummy Score!
[... the same "No models better than random - using Dummy Score!" warning repeats every two seconds up to 11:13:08 ...]

Thanks for letting us know. I just added a push to Docker Hub to our Docker-building GitHub Action, since Docker Hub is used more often.

@mfeurer I also have the “Segmentation fault” problem, and it remains unfixed after installing the newest Anaconda3 and downgrading swig to 3.0.12.

Thank you very much for reporting a solution to this issue.

Could you please do me a favor and check the SWIG version installed with conda? I suspect that this is an issue with the recent SWIG 4.
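For reference, a quick way to print both the SWIG that the active environment resolves to and the swig package conda has installed (a sketch, assuming swig and conda are on the PATH of the active environment):

# Print the swig version on the PATH and the swig package conda knows about.
# Uses stdout=PIPE / universal_newlines so it also works on Python 3.6.
import subprocess

for cmd in (["swig", "-version"], ["conda", "list", "swig"]):
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, universal_newlines=True)
    print("$ " + " ".join(cmd))
    print(result.stdout)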