clipper: Vague OSError on deploy_python_closure

Thanks for building Clipper! It’s a very compelling idea. When I call a model inside a function, the function that deploy_python_closure references fails with a vague OSError: Invalid argument. I tried adding scipy, numpy, and scikit-learn to the closure based on a suggestion in a related issue, and I get the same error.

Can someone please provide a hint as to what this error is referring to?

I was hoping to access the model via closure capture, as in this example, but it’s not working for me.
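For context, "closure capture" here just means the deployed function refers to a variable (the trained model) defined in the enclosing scope, so the deployer has to serialize that captured object along with the function. A minimal, Clipper-free sketch of the idea, with illustrative names only:

```python
# `predict_fn` never receives `model` as a parameter; it captures it
# from the enclosing scope, so serializing the function means
# serializing the captured model object too.
def make_predict_fn(model):
    def predict_fn(inputs):
        # `model` is captured from make_predict_fn's scope.
        return [str(model(x)) for x in inputs]
    return predict_fn

# A stand-in "model": any callable works for the illustration.
double = lambda x: 2 * x
predict = make_predict_fn(double)
print(predict([1, 2, 3]))  # -> ['2', '4', '6']
```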

Here’s a snippet of the code:

from clipper_admin import ClipperConnection, DockerContainerManager
from clipper_admin.deployers.python import deploy_python_closure
from sklearn.ensemble import RandomForestClassifier
import numpy as np

clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper()
clipper_conn.register_application(name="hello-world",
                                  input_type="integers",
                                  default_output="-1.0",
                                  slo_micros=100000000)

suggester = nb.ArticleSuggester()
suggester.build_article_solution_table(title_solution_fname='title_map.csv')
model = RandomForestClassifier(n_estimators=10)
suggester.train_model(model, 'test_model.joblib')

def feature_sum(inputs):
    title_map = 'title_map.csv'
    issue_counts_fpath = '../../tests/issue_counts.csv'
    firmware = inputs[0]
    issue_articles = inputs[1:]
    # One-hot encode the articles seen in this session.
    session_article_vector = np.zeros((1, model.n_features_ - 1))
    for article_id in issue_articles:
        session_article_vector[0][article_id] = 1
    sample_vector = np.concatenate([session_article_vector, np.array([[firmware]])], axis=1)
    probas = model.predict_proba(sample_vector)

    return [str(p) for p in probas]

try:
    deploy_python_closure(clipper_conn,
                          name="suggest-example",
                          version="v1-3",
                          input_type="integers",
                          func=feature_sum)
    clipper_conn.link_model_to_app(app_name="hello-world", model_name="suggest-example")
except Exception as e:
    print(e)

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 17

Most upvoted comments

I was able to deploy the closure with a model that is only 1.3 GB instead of 4 GB. Unfortunately, the reduced size does hurt my model’s performance. I am not running the code in a notebook; I am running it in a Python script.

You are right; I think a patch is needed if you choose to support bigger models.
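Given that the 4 GB model fails while a 1.3 GB one deploys, it may help to measure how large the serialized payload actually is before deploying, since the captured model is pickled along with the closure. A rough stdlib-only check (the list below is just a stand-in for a real trained model):

```python
import pickle

# Stand-in objects for a "small" and a "large" model; in practice you
# would pickle your trained estimator (or the closure capturing it).
small_model = list(range(1_000))
large_model = list(range(100_000))

small_size = len(pickle.dumps(small_model))
large_size = len(pickle.dumps(large_model))

print("small payload: %d bytes" % small_size)
print("large payload: %d bytes" % large_size)
# If the real payload approaches several GB, serialization and
# transfer limits are a plausible source of the opaque OSError.
```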