compression: Problem when executing eagerly

Hello!

I’m trying to use some of your layers, but I see they’re not fully compatible with eager execution (even though your code contains some if branches that handle the eager case).

One example is the self.add_loss call in the entropy bottleneck layer.

I’ve started reading the Keras code, and I came across this comment:

“Note that add_loss is not supported when executing eagerly. Instead, variable regularizers may be added through add_variable. Activity regularization is not supported directly (but such losses may be returned from Layer.call()).”

I don’t know whether simply using add_variable instead would fix the problem.
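For what it’s worth, here is a minimal, hypothetical sketch of the pattern that Keras comment points at: instead of registering the loss at build time, compute it inside call() and register it there, which the TF 2.x layer API supports under eager execution. The layer name and the penalty term are made up for illustration; this is not the actual entropy bottleneck logic.

```python
import tensorflow as tf


class EntropyPenaltyLayer(tf.keras.layers.Layer):
    """Illustrative only: registers an auxiliary loss from inside call().

    In TF 2.x, calling self.add_loss() inside call() works under eager
    execution, so the loss does not need to be attached when the layer
    is built (which is where the eager incompatibility arises).
    """

    def call(self, inputs):
        # Stand-in for the real entropy penalty; just the mean absolute value.
        penalty = tf.reduce_mean(tf.abs(inputs))
        self.add_loss(penalty)
        return inputs
```

After calling the layer, the registered loss shows up in layer.losses and can be added to the training objective.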

I haven’t tried any solution yet; I just wanted to share the problem with the code maintainers.

Maybe someone seeing this can point out a solution before I find one.

; )

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 38

Most upvoted comments

Hi nguerinjr,

sorry for the delayed response. The entropy models in entropy_models.py are not intended to be used in TF 2.0. We have new implementations in python/entropy_models. Please take a look at them! There is one example model that uses them already here. We will convert the other models in time.

Hello @jonycgn. Sorry for the number of messages; I’ve been reporting everything I’ve found.

I have analyzed each of the issues and understood what the situation is with TF 2.0. I have implemented modifications, and I now have a working version of the framework in all cases.

If you want, I can make a pull request so you can see which changes I’ve made.

As of today, we have a beta release out that supports TF2 and eager mode.

Hey, I just wanted to let you know that we’re aware of the changes that will be introduced with TF 2.0, including eager mode.

Unfortunately, it’s not quite clear yet exactly what the changes will look like, and because of what I said earlier, it’s likely that the changes needed to make eager mode work in our code wouldn’t be trivial. So, until TF 2.0 gets closer to becoming a reality (likely in the next few months), we’re probably not going to be able to invest a lot of time in making it work. Once we have more details on TF 2.0, we’ll definitely look at it again!

If you’ve found a workaround in the meantime, though, please share it here. If it doesn’t involve fundamentally changing the design of our code, we’d be happy to include it. I’ll keep this bug open for later reference.