wasmer: Failed to instantiate a deserialized Module compiled with the Dylib engine
Describe the bug
`Instance::new` panics when the module is deserialized from a `FileSystemCache` and the Dylib (formerly Native) engine is used.
wasmer-2.0.0 | rustc 1.52.1 (9bc8c42bb 2021-05-09) | x86_64
Steps to reproduce
Test wasm:
$ cat main.rs
fn main() {}
$ cargo build --release --target wasm32-unknown-unknown
Reproducer:
use wasmer_cache::Cache;
let path = test_root_path();
let wasm_bin = b"..."; // bytes of the compiled main.wasm (elided in this report)
// Write a test app file
let compiler_config = wasmer::LLVM::default();
let engine = wasmer::Dylib::new(compiler_config).engine();
let store = wasmer::Store::new(&engine);
let import_object = wasmer::imports! {};
let hash = wasmer_cache::Hash::generate(&wasm_bin);
let module = wasmer::Module::new(&store, wasm_bin).unwrap();
// Instantiate from compiled
drop(wasmer::Instance::new(&module, &import_object).unwrap());
// Instantiate from cache
let mut cache = wasmer_cache::FileSystemCache::new(&path).unwrap();
cache.store(hash, &module).unwrap();
let module = unsafe { cache.load(&store, hash) }.unwrap();
drop(wasmer::Instance::new(&module, &import_object).unwrap());
Expected behavior
I would expect `Instance::new` to succeed (or to return an error instead of panicking).
Actual behavior
thread 'tokio-runtime-worker' panicked at 'assertion failed: prev.start > max', /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-engine-2.0.0/src/trap/frame_info.rs:232:9
stack backtrace:
0: rust_begin_unwind
at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:493:5
1: core::panicking::panic_fmt
at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:92:14
2: core::panicking::panic
at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:50:5
3: wasmer_engine::trap::frame_info::register
at /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-engine-2.0.0/src/trap/frame_info.rs:232:9
4: <wasmer_engine_dylib::artifact::DylibArtifact as wasmer_engine::artifact::Artifact>::register_frame_info
at /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-engine-dylib-2.0.0/src/artifact.rs:725:17
5: wasmer_engine::artifact::Artifact::instantiate
at /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-engine-2.0.0/src/artifact.rs:137:9
6: wasmer::module::Module::instantiate
at /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-2.0.0/src/module.rs:267:35
7: wasmer::instance::Instance::new
at /Users/vavrusa/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-2.0.0/src/instance.rs:117:22
Additional context
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 18 (17 by maintainers)
Commits related to this issue
- Fix drop order for Module fields The field ordering here is actually significant because of the drop order: we want to drop the artifact before dropping the engine. The reason for this is that dropp... — committed to wasmerio/wasmer by Amanieu 2 years ago
- Merge #2805 #2806 2805: Enable `experimental-io-devices` by default r=Amanieu a=Amanieu Fixes #2695 2806: Fix drop order for Module fields r=Amanieu a=Amanieu The field ordering here is actually s... — committed to wasmerio/wasmer by bors[bot] 2 years ago
- Fix drop order for Module fields The field ordering here is actually significant because of the drop order: we want to drop the artifact before dropping the engine. The reason for this is that dropp... — committed to terra-money/wasmer by Amanieu 2 years ago
Small update: a follow-up PR #2812 was just merged into master that solves the issue when running multiple instances in parallel, so this ticket should now be fully fixed. Please let us know if this issue recurs in the future so we can reopen and re-investigate.
@YunSuk-Yeo Thanks for the reproducer, this was a huge help in tracking down this bug. I have a potential fix in #2806.