spring-integration: ZookeeperLockRegistry does not honor timeout semantics on tryLock

I was trying to write a test that checks the behavior of ZookeeperLockRegistry when the ZooKeeper cluster is unreachable. I wrote a test that extends the pattern in DefaultLockRegistryTests to ensure the lock cannot be acquired when the TestingServer is shut down. Here’s a snippet of the code:

testingServer.stop();
Lock lock2 = registry.obtain("bar");
assertFalse("Should not have been able to lock with zookeeper server stopped!", lock2.tryLock(2, TimeUnit.SECONDS));

The call to lock2.tryLock(2, TimeUnit.SECONDS) hangs forever until I kill the test. This is the call in Curator Framework that seems to cause the problem: https://github.com/apache/curator/blob/apache-curator-2.10.0/curator-recipes/src/main/java/org/apache/curator/framework/recipes/locks/StandardLockInternalsDriver.java#L54

This seems to occur regardless of the CuratorFramework client configuration that is passed into the registry. I do see the background reconnect process timing out and eventually giving up (correctly, according to my settings), but the call linked above seems to run in the foreground and not honor the retry/timeout settings.
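For reference, this is the contract the registry’s lock is expected to honor: java.util.concurrent.locks.Lock#tryLock(time, unit) must return false once the timeout elapses rather than block indefinitely. A minimal JDK-only sketch of that expected behavior, using a ReentrantLock held by another thread in place of the unreachable ZooKeeper lock (the class name and timings here are illustrative, not from the issue):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch held = new CountDownLatch(1);

        // Hold the lock in another thread so the main thread cannot acquire it.
        Thread holder = new Thread(() -> {
            lock.lock();
            held.countDown();
            try {
                Thread.sleep(10_000); // keep holding well past the tryLock timeout
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.setDaemon(true);
        holder.start();
        held.await();

        long start = System.nanoTime();
        boolean acquired = lock.tryLock(2, TimeUnit.SECONDS);
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

        // tryLock must give up and return false after ~2 seconds, not hang forever.
        System.out.println("acquired=" + acquired);
        System.out.println("timedOut=" + (elapsedMs >= 2000));
    }
}
```

The bug reported here is that ZookeeperLockRegistry’s lock deviates from this contract when the cluster is down: instead of returning false after 2 seconds, it blocks inside Curator’s foreground call.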

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

Thank you for your help @cah-oster !

The only problem with ZK is that there are no hooks to check the real connection state. We can only find out there is a problem when we dispatch the next command to ZK. And if we do that on our main thread, we wait through all the re-connection and recovery “hell”. 😢

Looking forward to more valuable feedback from you or anyone else in the community.

@garyrussell , now it’s your turn to take a look into PR 😄

@cah-oster , well, even if that isn’t our problem, using such an API is our responsibility. IMO we should provide some workaround for the matter. Please see https://jira.spring.io/browse/INT-4087 and let’s continue the discussion there.
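One possible shape for such a workaround, sketched with JDK-only primitives (this is my own illustration, not the fix that was actually merged): run the potentially-hanging blocking acquisition on a separate daemon thread and bound the wait from the caller’s side with Future.get(timeout). The names tryWithTimeout and blockingAcquire are hypothetical.

```java
import java.util.concurrent.*;

public class BoundedTryLock {

    // Sketch: bound a blocking call (e.g. a ZooKeeper lock acquisition that may
    // hang while the cluster is unreachable) by running it on another thread.
    static boolean tryWithTimeout(Callable<Boolean> blockingAcquire,
                                  long timeout, TimeUnit unit) throws InterruptedException {
        ExecutorService exec = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // do not keep the JVM alive if the call never returns
            return t;
        });
        try {
            Future<Boolean> future = exec.submit(blockingAcquire);
            try {
                return future.get(timeout, unit);
            } catch (TimeoutException e) {
                future.cancel(true); // best effort; the hung call may ignore interrupts
                return false;
            } catch (ExecutionException e) {
                return false; // treat connection errors as "lock not acquired"
            }
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a call that hangs forever, like tryLock against a dead ZK cluster.
        boolean acquired = tryWithTimeout(() -> {
            Thread.sleep(Long.MAX_VALUE);
            return true;
        }, 500, TimeUnit.MILLISECONDS);
        System.out.println("acquired=" + acquired);
    }
}
```

The trade-off is a thread per acquisition attempt and a possibly leaked hung call behind the timeout, which is why the daemon flag and cancel(true) matter here.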