scipy: Highs solver did not provide a solution nor did it report a failure

My issue is that scipy.optimize.linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method='highs') tells me to submit a bug report.

Reproducing code example:

See minimal reproducer in a comment https://github.com/scipy/scipy/issues/13610#issuecomment-785620302

Error message:

/home/chn/env/lib/python3.8/site-packages/scipy/optimize/_linprog_highs.py:316: OptimizeWarning: model_status is not optimal, using scaled_model_status instead.
  res = _highs_wrapper(c, A.indptr, A.indices, A.data, lhs, rhs,
The solver did not provide a solution nor did it report a failure. Please submit a bug report.

Scipy/Numpy/Python version information:

1.6.1 1.18.3 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 26 (16 by maintainers)

Most upvoted comments

The best I can do tonight is this simpler reproducer; I'll be able to dig in more tomorrow. The trouble seems to come from the size of A_ub: it breaks for me when A_ub is (130, 73) or larger (i.e. when n > 64 in the reproducer below).

Note: I can go higher with highs-ipm; I believe the trouble is with highs-ds. Again, I'll have more time to look at this tomorrow. In the meantime, try switching to highs-ipm and see what happens.
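
For reference, a minimal, self-contained sketch of that workaround on a toy LP (the problem data here is made up and is not the LAD model in the reproducer below; the point is only the method= argument and checking the result status before using res.x):

import numpy as np
from scipy.optimize import linprog

# toy problem: minimize x0 + x1 subject to x0 + x1 >= 1, x >= 0
c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])  # -x0 - x1 <= -1  <=>  x0 + x1 >= 1
b_ub = np.array([-1.0])

# 'highs-ipm' forces the interior-point HiGHS solver; plain 'highs' may
# dispatch to the dual simplex solver ('highs-ds'), which is where the
# failure seems to occur
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='highs-ipm')

# check the reported status before touching res.x
if not res.success or res.x is None:
    print(res.status, res.message)
else:
    print(res.x)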

'''Simple LAD regression.'''

import numpy as np
from scipy.optimize import linprog
from scipy.sparse import lil_matrix

if __name__ == '__main__':
    # Training data is {(x0, y0), (x1, y1), ..., (xn-1, yn-1)}
    #     x \in R^d
    #     y \in R
    #
    # n: number of training samples
    # d: dimension of x, i.e. x \in R^d
    # phi: feature map R^d -> R^m
    # m: dimension of feature space

    # dimensions of the feature space (m) and of the inputs x (d)
    m, d = 8, 9

    # number of samples (this is where the trouble is!)
    # 64: works
    # 65: reproduces the strange behavior
    n = 64

    # any feature mapping will do -- why not a random one
    np.random.seed(0) # reproducibility
    phi = np.random.normal(0, 1, size=(m, d))

    # we are omnipotent, so we get to know what the truth is
    w_true = np.random.randn(m)

    # any features will do
    x = np.random.normal(0, 1, size=(d, n))

    # corresponding measurements -- with a little spice
    y = w_true @ (phi @ x) + np.random.normal(0, 1e-5, size=n)

    # construct the problem
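    # LAD as an LP: the variables are [w (m entries), t (n entries)] and we
    # minimize sum_i t_i subject to
    #      (phi @ x_i) . w - t_i <=  y_i
    #     -(phi @ x_i) . w - t_i <= -y_i
    # which together enforce t_i >= |(phi @ x_i) . w - y_i|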
    c = np.ones(m+n)
    c[:m] = 0
    A_ub = lil_matrix((2*n, n+m))
    idx = 0
    for ii in range(n):
        A_ub[idx, :m] = phi @ x[:, ii]
        A_ub[idx, m+ii] = -1
        A_ub[idx+1, :m] = -1*phi @ x[:, ii]
        A_ub[idx+1, m+ii] = -1
        idx += 2
    A_ub = A_ub.tocsc()
    b_ub = np.zeros(2*n)
    b_ub[0::2] = y
    b_ub[1::2] = -y
    bnds = [(None, None)]*m + [(0, None)]*n

    # solve
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method='highs')  # use 'highs-ipm' to avoid issues -- seems to work!

    # did strange things happen?
    if res.x is None and res.success:
        raise ValueError('Strange things are afoot')

    # check
    assert np.allclose(res.x[:m], w_true, atol=1e-5)
    print('We did it!')