ComfyUI-AnimateDiff-Evolved: [PSA] New ComfyUI update came out - update AnimateDiff-Evolved to fix issue (backwards compatible, so updating while using old ComfyUI will not break anything) [November 23rd]

The error presents itself as AttributeError: 'ModelSamplingConfig' object has no attribute 'sampling_settings'. AnimateDiff-Evolved has now been updated to work with the new way ComfyUI expects beta_schedule to be registered.
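The error text suggests the config layout moved beta_schedule under a sampling_settings attribute. Below is a minimal sketch of the kind of defensive lookup that tolerates both layouts; the _OldConfig/_NewConfig classes and get_beta_schedule helper are illustrative stand-ins, not the real ComfyUI classes or the actual fix.

```python
# Stand-in for the pre-update config layout (beta_schedule as a direct attribute).
class _OldConfig:
    beta_schedule = "sqrt_linear"

# Stand-in for the post-update layout (beta_schedule inside sampling_settings).
class _NewConfig:
    sampling_settings = {"beta_schedule": "sqrt_linear"}

def get_beta_schedule(model_config, default="linear"):
    # New layout: look inside the sampling_settings dict first.
    settings = getattr(model_config, "sampling_settings", None)
    if settings is not None:
        return settings.get("beta_schedule", default)
    # Old layout: fall back to the direct attribute.
    return getattr(model_config, "beta_schedule", default)

old = get_beta_schedule(_OldConfig())
new = get_beta_schedule(_NewConfig())
# Both layouts resolve to the same schedule name.
```

This is the general pattern behind "backwards compatible" support: probe for the new attribute first and fall back to the old one, so the same code runs on both ComfyUI versions.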

EDIT: Another ComfyUI update happened late on November 23rd as well, with the resulting error being TypeError: forward_timestep_embed() got an unexpected keyword argument 'time_context'. AnimateDiff-Evolved has now been updated, again, to account for the ComfyUI changes that add Stable Video Diffusion model support. And the changes are backwards compatible (for now. I will remove the check that makes this possible the next time there is a non-backwards compatible change).
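The TypeError above is the classic failure mode when an upstream caller starts passing a new keyword argument (here, time_context) to a function that predates it. A generic way to stay compatible is to filter keywords against the callee's signature before forwarding them; the call_compat helper and old_forward function below are hypothetical illustrations of that technique, not AnimateDiff-Evolved's actual fix.

```python
import inspect

def call_compat(fn, *args, **kwargs):
    """Call fn, dropping any keyword arguments its signature does not accept.

    This lets a wrapper survive upstream callers that begin passing new
    keywords (e.g. time_context) before the wrapper has been updated.
    """
    params = inspect.signature(fn).parameters
    # If fn already accepts **kwargs, pass everything through unchanged.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(*args, **kwargs)
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **accepted)

# Hypothetical old-style function that predates the 'time_context' keyword:
def old_forward(x, emb, context=None):
    return (x, emb, context)

# The new caller passes the extra keyword; call_compat silently drops it
# instead of raising "got an unexpected keyword argument 'time_context'".
result = call_compat(old_forward, 1, 2, context="ctx", time_context="tc")
```

The trade-off is that dropped arguments are silently ignored, which is exactly why such shims are temporary, as the edit above notes: the compatibility check gets removed once a non-backwards-compatible change forces a clean break.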

About this issue

  • Original URL
  • State: closed
  • Created 7 months ago
  • Reactions: 1
  • Comments: 15 (6 by maintainers)

Most upvoted comments

One way to test where the GPU memory is not being unloaded from: I'm assuming you have some preprocessors for the image inputs into your ControlNets. From my testing of just AD, my VRAM usage at sampling time remains the same when I change parameters like steps, etc. to force it to rerun.

However, since you are flying very close to your VRAM limit, almost anything that uses additional VRAM could push you over it. If a preprocessor, for example, does not unload its data from the GPU, even if it's on the order of a hundred MBs, that could cause you to OOM. You can try saving the images output by the preprocessor, then modify your workflow to plug those preprocessed images into your ControlNet without a preprocessor, and see if that makes a difference in subsequent runs (after a Comfy restart). That would help pinpoint what exactly is not getting cleaned up.
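To make that pinpointing concrete, you can log allocated CUDA memory before and after the suspected node. This is a hedged sketch (the vram_snapshot helper is mine, not part of AnimateDiff-Evolved), and it assumes PyTorch is available, as it is in any ComfyUI environment:

```python
import torch  # assumed available, as in any ComfyUI install

def vram_snapshot(tag: str) -> float:
    """Print and return currently allocated CUDA memory in MB (0.0 without a GPU)."""
    mb = torch.cuda.memory_allocated() / (1024 ** 2) if torch.cuda.is_available() else 0.0
    print(f"[{tag}] allocated VRAM: {mb:.1f} MB")
    return mb

# Take a snapshot before and after the suspected step
# (e.g. a ControlNet preprocessor), then compare:
before = vram_snapshot("before preprocessor")
# ... run the preprocessor here ...
after = vram_snapshot("after preprocessor")
leak = after - before  # a persistent positive delta across runs points at the culprit
```

A delta that stays positive after the node finishes (and does not shrink on subsequent runs) is the signature of memory that is not being unloaded.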

Also, you can try hiding your previews before rerunning - if you have hardware acceleration on, the browser likely uses your GPU/VRAM to play them as well.

Hello, ComfyUI just released an update that supports SVD. This version modifies the model's forward_timestep_embed function, which seems to conflict with the current version of ComfyUI-AnimateDiff-Evolved.

The following is the error when I run ComfyUI-AnimateDiff-Evolved on the new version of ComfyUI:

File "/app/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 579, in sliding_calc_cond_uncond_batch
    sub_cond_out, sub_uncond_out = calc_cond_uncond_batch(model, sub_cond, sub_uncond, sub_x, sub_timestep, model_options)
File "/app/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 473, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/app/ComfyUI/comfy/model_base.py", line 73, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
File "/app/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 854, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
TypeError: forward_timestep_embed() got an unexpected keyword argument 'time_context'

Just pushed a fix for the latest changes (and should be backwards compatible). Will open up a new PSA post. I’ll edit the original message in this thread.

Yep, another update came out while I was doing Thanksgiving stuff. I'm home now, so I should have the changes out in a bit to make it work with the latest changes.