avalanche: Possible Replay bug

Hi all,

I’m working on implementing Class-Balanced and Experience-Balanced Replay (see #479). I want to give the user the option of either a fixed memory capacity set in advance, or a capacity that adapts to all experiences seen so far.

However, when looking at the Replay code I find it hard to follow:

single_task_mem_size = min(self.mem_size, len(curr_data))
h = single_task_mem_size // (strategy.training_exp_counter + 1)

remaining_example = single_task_mem_size % (
    strategy.training_exp_counter + 1)
# We recover it using the random_split method and getting rid of the
# second split.
rm_add, _ = random_split(
    curr_data, [h, len(curr_data) - h]
)

Regarding the first line (https://github.com/ContinualAI/avalanche/blob/9cf3c53d83ffc1b3dfe4400d43356ebcd901cfc9/avalanche/training/plugins/replay.py#L64): what if len(curr_data) is smaller than self.mem_size? In that case ‘h’ no longer seems correct to me. Even if the current experience is smaller than the full memory, the memory still has room for a mem_size / n_observed_exps share per experience, so shouldn’t h become self.mem_size // (strategy.training_exp_counter + 1)?
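To make this concrete, here is a small numeric example (the numbers are made up, purely to illustrate the difference between the two formulas):

mem_size = 200          # total replay capacity
curr_len = 100          # current experience is smaller than the memory
n_seen = 2              # strategy.training_exp_counter + 1

# Current implementation:
single_task_mem_size = min(mem_size, curr_len)    # 100
h_current = single_task_mem_size // n_seen        # 50 samples stored

# What I would expect:
capacity_per_exp = mem_size // n_seen             # 100
h_expected = min(capacity_per_exp, curr_len)      # 100 samples stored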

When taking the random_split, however, the ‘min’ operation should still be taken into account:

h2 = min(capacity_per_exp, len(curr_data))
rm_add, _ = random_split(
    curr_data, [h2, len(curr_data) - h2]
)
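Putting both changes together, the fix I have in mind would look roughly like this (just a sketch; capacity_per_exp is a name I’m introducing, the rest follows the current plugin code):

n_seen_exps = strategy.training_exp_counter + 1

# Divide the full memory budget over all experiences seen so far,
# independently of how large the current experience happens to be.
capacity_per_exp = self.mem_size // n_seen_exps

# The current experience may still be smaller than its share of the memory.
h = min(capacity_per_exp, len(curr_data))

rm_add, _ = random_split(curr_data, [h, len(curr_data) - h])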

Am I missing something here or is this a bug?

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 17 (3 by maintainers)

Most upvoted comments

A single PR is ok.

Thank you @Mattdl, I’ll try this immediately!

@AntonioCarta I’m trying to make a solution that’s also applicable to online data-incremental learning, so the buffer needs to be updated during training as the data comes in.
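To clarify what I mean by updating the buffer during training, here is a minimal reservoir-sampling sketch. It is not the Avalanche API; the idea is that its update() would be called from a per-iteration plugin hook (e.g. after_training_iteration) with the current mini-batch, rather than once per experience:

import random

class OnlineReservoirBuffer:
    """Reservoir sampling over a stream of mini-batches (sketch, not Avalanche code)."""

    def __init__(self, mem_size):
        self.mem_size = mem_size
        self.buffer = []   # stored (x, y) pairs
        self.n_seen = 0    # total number of samples observed so far

    def update(self, xs, ys):
        # Called once per mini-batch as the data comes in.
        for x, y in zip(xs, ys):
            self.n_seen += 1
            if len(self.buffer) < self.mem_size:
                self.buffer.append((x, y))
            else:
                # Keep each seen sample with probability mem_size / n_seen.
                j = random.randrange(self.n_seen)
                if j < self.mem_size:
                    self.buffer[j] = (x, y)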

The problem I see with your current implementation is that the end user is not aware that the scenario is data-incremental. In Avalanche we separate data streams from strategies, so it makes sense to me to prepare the mini-batches for a data-incremental scenario outside the strategy.

Think about this problem: how do you compare other strategies with CoPE on the same scenario? With your solution it’s much harder to tell whether each strategy is training on the same experiences, because CoPE de facto splits each experience into multiple smaller experiences, so I need to be aware of its internal implementation.

The next step would be to have some kind of meta-experiences (e.g. tasks/domains) to define the scenarios, and then the experiences (batches) for processing. I see it as something like SplitMNIST(n_experiences=5, batchwise_exps=True), which would create the classic SplitMNIST, but the experiences (as they are implemented now) would increment per batch. This would require fundamental changes in how the scenarios are created, though, so I’d leave that to the Avalanche experts over there (:

This seems to be the main roadblock you have right now and the reason why you chose the current solution. I think it’s easy to add a data_incremental_generator that splits a predefined benchmark, e.g. di_mnist = data_incremental(split_mnist5). We don’t have anything ready right now, but @lrzpellegrini can help with that.
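Roughly, such a generator would chunk each experience’s dataset into fixed-size pieces; here is a minimal sketch of the chunking step using plain torch utilities (how the chunks get wrapped back into an Avalanche benchmark is the part that is still missing):

from torch.utils.data import Subset

def chunk_experience_dataset(dataset, chunk_size):
    """Split one experience's dataset into consecutive fixed-size chunks (sketch)."""
    chunks = []
    for start in range(0, len(dataset), chunk_size):
        end = min(start + chunk_size, len(dataset))
        chunks.append(Subset(dataset, list(range(start, end))))
    return chunks

Each chunk would then become its own (mini) experience in the data-incremental stream.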

@vlomonaco thanks, that might work; I’ll give it a try!

Unfortunately, I don’t think this will solve any problems; it doesn’t look any different from what you’re doing right now.

@AntonioCarta thanks, it indeed traces back to the next() on the AvalancheConcatDataset. If you have any idea what causes this behaviour, let me know (:

I have a vague idea, but I’m not sure how to solve it. @lrzpellegrini, can you help with this? Is it possible to improve AvalancheConcatDataset so that it handles a large number of concatenations? Otherwise, do we have any alternative, more efficient way to concatenate datasets?
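One direction that might help (just a sketch, assuming the slowdown comes from deeply nested concatenations rather than from the datasets themselves): keep the underlying datasets in a flat list and rebuild a single-level concatenation from it, instead of wrapping a new concat around the previous one at every update:

from torch.utils.data import ConcatDataset

class FlatConcat:
    """Accumulate datasets in a flat list and expose a single-level concat (sketch)."""

    def __init__(self):
        self._parts = []

    def add(self, dataset):
        # Unwrap plain ConcatDatasets so the structure never grows deeper
        # than one level, no matter how many times add() is called.
        if isinstance(dataset, ConcatDataset):
            self._parts.extend(dataset.datasets)
        else:
            self._parts.append(dataset)

    def as_dataset(self):
        return ConcatDataset(self._parts)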