tower: Panicked at 'missing cancelation' in tower-ready-cache
My code just hit this `.expect` in tower-ready-cache. It uses `tower-balance::pool` (and thus also `tower-balance::p2c`). I'm not sure what caused it, but figured maybe @olix0r can tease it out from the backtrace?
thread 'tokio-runtime-worker' panicked at 'missing cancelation', /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/tower-ready-cache-0.3.0/src/cache.rs:236:37
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1052
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1428
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:204
9: std::panicking::default_hook
at src/libstd/panicking.rs:224
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:470
11: rust_begin_unwind
at src/libstd/panicking.rs:378
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
13: core::option::expect_failed
at src/libcore/option.rs:1203
14: core::option::Option<T>::expect
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libcore/option.rs:347
15: tower_ready_cache::cache::ReadyCache<K,S,Req>::poll_pending
at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tower-ready-cache-0.3.0/src/cache.rs:236
16: tower_balance::p2c::service::Balance<D,Req>::promote_pending_to_ready
at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tower-balance-0.3.0/src/p2c/service.rs:151
17: <tower_balance::p2c::service::Balance<D,Req> as tower_service::Service<Req>>::poll_ready
at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tower-balance-0.3.0/src/p2c/service.rs:238
18: <tower_balance::pool::Pool<MS,Target,Req> as tower_service::Service<Req>>::poll_ready
at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tower-balance-0.3.0/src/pool/mod.rs:373
19: <tower_buffer::worker::Worker<T,Request> as core::future::future::Future>::poll
at ./.cargo/registry/src/github.com-1ecc6299db9ec823/tower-buffer-0.3.0/src/worker.rs:169
20: tokio::task::core::Core<T>::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/core.rs:128
21: tokio::task::harness::Harness<T,S>::poll::{{closure}}::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:119
22: core::ops::function::FnOnce::call_once
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libcore/ops/function.rs:232
23: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panic.rs:318
24: std::panicking::try::do_call
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panicking.rs:303
25: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:86
26: std::panicking::try
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panicking.rs:281
27: std::panic::catch_unwind
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panic.rs:394
28: tokio::task::harness::Harness<T,S>::poll::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:100
29: tokio::loom::std::causal_cell::CausalCell<T>::with_mut
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/loom/std/causal_cell.rs:41
30: tokio::task::harness::Harness<T,S>::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:99
31: tokio::task::raw::RawTask::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/raw.rs:113
32: tokio::task::Task<S>::run
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/mod.rs:381
33: tokio::runtime::thread_pool::worker::GenerationGuard::run_task
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:459
34: tokio::runtime::thread_pool::worker::GenerationGuard::process_available_work
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:317
35: tokio::runtime::thread_pool::worker::GenerationGuard::run
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:282
36: tokio::runtime::thread_pool::worker::Worker::run::{{closure}}::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:160
37: std::thread::local::LocalKey<T>::try_with
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/thread/local.rs:262
38: std::thread::local::LocalKey<T>::with
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/thread/local.rs:239
39: tokio::runtime::thread_pool::worker::Worker::run::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:136
40: tokio::runtime::thread_pool::current::set::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/current.rs:47
41: std::thread::local::LocalKey<T>::try_with
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/thread/local.rs:262
42: std::thread::local::LocalKey<T>::with
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/thread/local.rs:239
43: tokio::runtime::thread_pool::current::set
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/current.rs:29
44: tokio::runtime::thread_pool::worker::Worker::run
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/worker.rs:132
45: tokio::runtime::thread_pool::Workers::spawn::{{closure}}::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/thread_pool/mod.rs:113
46: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/blocking/task.rs:30
47: tokio::task::core::Core<T>::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/core.rs:128
48: tokio::task::harness::Harness<T,S>::poll::{{closure}}::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:119
49: core::ops::function::FnOnce::call_once
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libcore/ops/function.rs:232
50: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panic.rs:318
51: std::panicking::try::do_call
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panicking.rs:303
52: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:86
53: std::panicking::try
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panicking.rs:281
54: std::panic::catch_unwind
at /rustc/fc23a81831d5b41510d3261c20c34dd8d32f0f31/src/libstd/panic.rs:394
55: tokio::task::harness::Harness<T,S>::poll::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:100
56: tokio::loom::std::causal_cell::CausalCell<T>::with_mut
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/loom/std/causal_cell.rs:41
57: tokio::task::harness::Harness<T,S>::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/harness.rs:99
58: tokio::task::raw::RawTask::poll
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/raw.rs:113
59: tokio::task::Task<S>::run
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/task/mod.rs:381
60: tokio::runtime::blocking::pool::run_task
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/blocking/pool.rs:311
61: tokio::runtime::blocking::pool::Inner::run
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/blocking/pool.rs:230
62: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/blocking/pool.rs:210
63: tokio::runtime::context::enter
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/context.rs:72
64: tokio::runtime::handle::Handle::enter
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/handle.rs:34
65: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
at ./.cargo/git/checkouts/tokio-377c595163f99a10/3fb217f/tokio/src/runtime/blocking/pool.rs:209
About this issue
- State: closed
- Created 4 years ago
- Comments: 31
Commits related to this issue
- balance: Add a stress test for p2c The hope for this was to reproduce #415 (which it does not sadly), but at least it adds a test for p2c! — committed to tower-rs/tower by jonhoo 4 years ago
- ready-cache: Avoid panic on strange race It's been observed that occasionally tower-ready-cache would panic trying to find an already canceled service in `cancel_pending_txs` (#415). The source of th... — committed to tower-rs/tower by jonhoo 4 years ago
- ready-cache: Avoid panic on strange race (#420) It's been observed that occasionally tower-ready-cache would panic trying to find an already canceled service in `cancel_pending_txs` (#415). The sou... — committed to tower-rs/tower by jonhoo 4 years ago
- ready-cache: Add endpoint-level debugging linkerd/linkerd2#6086 describes an issue that sounds closely related to tower-rs/tower#415: There's some sort of consistency issue between the ready-cache's ... — committed to olix0r/tower by olix0r 3 years ago
- add test reproducing #415 Signed-off-by: Eliza Weisman <eliza@buoyant.io> — committed to tower-rs/tower by hawkw 2 years ago
- ready-cache: Ensure cancelation updates can be observed `tokio::task` enforces a cooperative scheduling regime that can cause `oneshot::Receiver::poll` to return pending after the sender has sent an ... — committed to tower-rs/tower by olix0r 2 years ago
- ready-cache: Ensure cancelation can be observed (#668) `tokio::task` enforces a cooperative scheduling regime that can cause `oneshot::Receiver::poll` to return pending after the sender has sent an ... — committed to tower-rs/tower by olix0r 2 years ago
- chore: prepare to release tower v0.4.13 # 0.4.13 (June 17, 2022) ### Added - **load_shed**: Public constructor for `Overloaded` error ([#661]) ### Fixed - **util**: Fix hang with `call_all` when ... — committed to tower-rs/tower by hawkw 2 years ago
- chore: prepare to release tower v0.4.13 (#672) # 0.4.13 (June 17, 2022) ### Added - **load_shed**: Public constructor for `Overloaded` error ([#661]) ### Fixed - **util**: Fix hang with `... — committed to tower-rs/tower by hawkw 2 years ago
- update Tower to 0.4.13 to fix load balancer panic Tower [v0.4.13] includes a fix for a bug in the `tower::ready_cache` module, tower-rs/tower#415. The `ready_cache` module is used internally in Tower... — committed to linkerd/linkerd2-proxy by hawkw 2 years ago
- update Tower to 0.4.13 to fix load balancer panic (#1758) Tower [v0.4.13] includes a fix for a bug in the `tower::ready_cache` module, tower-rs/tower#415. The `ready_cache` module is used internally... — committed to linkerd/linkerd2-proxy by hawkw 2 years ago
Yeah, I have looked at the `FuturesUnordered` impl myself before, and also do not believe it does any buffering. The receiver notification is the one I'm also currently suspicious of, though it would be crazy if a `send` on a `oneshot` would not be seen by a receiver polled by the same future. There's currently some discussion about this on Discord. Alice suggested:

Though that execution seems insane. It would mean that writing to a field through `&mut self` before yielding, and then reading that same field after yielding, could have the load not see the store.

This is, indeed, a very weird error. I have to admit ignorance to the newer `std::future` APIs. I would personally want to look at the translation from futures 0.1 to see if there's anything that changed semantically. Or perhaps there's a notification race that we haven't seen before. I'll refresh myself and see if I can come up with a better theory.