or-tools: CP-SAT: Python 3.7.2, ortools 7.0 wrong optimization result

My test cases fail on Python 3.7.2 with ortools 7.0.6546. The solver reports an optimal solution of 32, but it should be 16. With Python 3.6.7 and the same ortools 7.0.6546 it works as expected.
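Here is a minimal sketch of the kind of check the tests perform (the model and the expected value are only illustrative, not my actual test case): build a small CP-SAT model, solve it, and assert the optimal objective.

from ortools.sat.python import cp_model


def test_expected_optimum():
    # Hypothetical model, only to illustrate the shape of the failing checks.
    model = cp_model.CpModel()
    x = model.NewIntVar(0, 10, "x")
    y = model.NewIntVar(0, 10, "y")
    model.Add(x + y >= 4)
    model.Minimize(3 * x + 4 * y)

    solver = cp_model.CpSolver()
    status = solver.Solve(model)

    assert status == cp_model.OPTIMAL
    # For this toy model the optimum is x=4, y=0 -> objective 12.
    assert solver.ObjectiveValue() == 12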

I also get some deprecation warnings:

/usr/local/lib/python3.7/site-packages/google/protobuf/descriptor.py:47
  /usr/local/lib/python3.7/site-packages/google/protobuf/descriptor.py:47: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    from google.protobuf.pyext import _message

/usr/local/lib/python3.7/site-packages/google/protobuf/internal/well_known_types.py:788
  /usr/local/lib/python3.7/site-packages/google/protobuf/internal/well_known_types.py:788: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
    collections.MutableMapping.register(Struct)

-- Docs: https://docs.pytest.org/en/latest/warnings.html

I added the test cases after upgrading ortools from 6.10 to 7.0, so I don't know if it worked before. Has anyone experienced this behavior?

Python 3.6.8 says that the optimum is 30; the value differs, but it is still wrong.

What I do

I put the test case into different Docker images (calculated optimum in parentheses):

  • python:3.7-slim fails (32); this is Python 3.7.1
  • python:3.6-slim fails (30); this is Python 3.6.8
  • python:3.6.7 fails (32)

These images are all Debian-based, but my own machine runs Ubuntu and there the test case passes.

It is weird that the result differs between 3.6.7, 3.6.8, and 3.7.1. Maybe some other dependency differs; I will try to find a working Docker image and a minimal working example. At the moment I can't share the code.
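To rule out dependency differences, a quick sanity check inside each image is to print the versions involved (just a sketch; protobuf is included because of the deprecation warnings above):

import sys

import google.protobuf
import ortools

# Print the versions that could plausibly differ between the Docker images.
print("python  :", sys.version.split()[0])
print("ortools :", ortools.__version__)
print("protobuf:", google.protobuf.__version__)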

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 17

Most upvoted comments

This is fixed on master.

Thanks again for the bug report.

I did manage to replicate it. I have a problem in protobuf format that returns 2 in sequential mode and 4 in multi-threaded (MT) mode. Now we need to debug it.
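For anyone who wants to attach a failing instance, a model can be dumped in protobuf text format with something like this sketch (standard cp_model and protobuf text_format APIs; the model and file name here are placeholders):

from google.protobuf import text_format
from ortools.sat.python import cp_model

# Placeholder model; replace with the model that misbehaves.
model = cp_model.CpModel()
x = model.NewIntVar(0, 5, "x")
model.Maximize(x)

# Write the CpModelProto in text format so it can be shared on the issue.
with open("model.pbtxt", "w") as f:
    f.write(text_format.MessageToString(model.Proto()))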

Thanks for the report!

I just cloned the or-tools repository and checked out the master branch. This is the last commit:

commit ee89733a600fd2d424df1392c85f557c2bc2a045 (HEAD -> master, origin/master)
Author: Laurent Perron <lperron@google.com>
Date:   Mon Apr 1 13:27:21 2019 +0200

    change CP-SAT internals

Then I compiled it from source and installed it, resulting in this version:

Finished processing dependencies for ortools==7.0.6571

My OS/Python/or-tools versions:

$ lsb_release -d
Description:	Ubuntu 18.04.2 LTS

$ python3 --version
Python 3.6.7

$  python3 -c "import ortools; print(ortools.__version__)"
7.0.6571

Further, I am using the modified example of @lperron but re-added the test_one/two/... test cases. See here for the full code: https://gist.github.com/syxolk/f185b8c1f2ea689d969b6190a5b42f45

I executed it about 10 times and everything looks good (shortened output):

$ pytest minimal_example.py

cpsat/minimal_example.py ..............
============== 14 passed in 1.47 seconds =================

But once in a while (it can take 10 to 20 runs), I get failed test cases like this (shortened output):

$ pytest minimal_example.py

cpsat/minimal_example.py ........F.....
n_d = 2, force_solution = False, print_to_file = None

Sometimes other test cases fail:

n_d = 1, force_solution = False, print_to_file = None

In fact, I threw together a short bash script that runs the test cases 100 times:

#!/bin/bash
# Run the test suite 100 times and count how many runs contain failures.
FAILS=0
for i in {1..100}; do
    if ! pytest minimal_example.py; then
        ((FAILS++))
    fi
done

echo "$FAILS"

And I got: 11, which means that 11 out of 100 test runs failed.

Thank you, works 👍

More info: the problem happens in single-threaded mode when using the core algorithm with linearization disabled:

parameters.optimize_with_core = True
parameters.linearization_level = 0
parameters.num_search_workers = 1
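A minimal sketch of that configuration through the Python API (the model is just a placeholder; only the solver parameters matter):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
model.Minimize(x)

solver = cp_model.CpSolver()
# Single-threaded core algorithm with linearization disabled,
# the configuration in which the wrong optimum shows up.
solver.parameters.optimize_with_core = True
solver.parameters.linearization_level = 0
solver.parameters.num_search_workers = 1

status = solver.Solve(model)
print(solver.StatusName(status), solver.ObjectiveValue())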

Fix is coming.