ComfyUI-AnimateDiff-Evolved: [Rare bug] On second attempt after Queue Prompt, all additional runs have: Error occurred when executing KSampler: 'NoneType' object has no attribute '_parameters'
I’ve been trying to generate videos, but after two successful generations I get the following error in the console:
!!! Exception during processing !!!
Traceback (most recent call last):
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
return function_to_wrap(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 633, in forward
h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
x = layer(x, context)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
hidden_states = self.norm(hidden_states)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
args, kwargs = hook.pre_forward(module, *args, **kwargs)
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
set_module_tensor_to_device(
File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!
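(For reference, that final NotImplementedError is what PyTorch raises whenever something tries to copy a tensor off the "meta" device, which records shape and dtype but holds no actual data. A minimal repro outside ComfyUI, assuming a reasonably recent PyTorch build:)

```python
import torch

# A "meta" tensor records shape and dtype but allocates no storage.
weight = torch.empty(4, 4, device="meta")

# There is no data to move, so PyTorch raises:
# NotImplementedError: Cannot copy out of meta tensor; no data!
weight.to("cpu")
```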
This is the first time I’ve tried AnimateDiff, so I’m not sure whether it’s something I did wrong on my end or a bigger problem. I’m running this on Windows 11 with a GTX 1660 6GB, 16GB RAM, and a Ryzen 5600X, on the latest ComfyUI. I’ve also never opened a GitHub issue before, so if there’s anything else you need, please tell me.
Here’s the workflow I’m using (I set the latent image size to 256x256 so I could generate quickly while trying to figure out what was going on):
Wanted to provide an update: I will have the lowvram check ready in the next couple of days, and then I will re-enable model caching for everyone else (currently, motion models don’t get cached at all).
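For anyone curious what a "lowvram check" means here, the rough shape of the idea is below. This is only a sketch: the cache helper and the `loader` callback are hypothetical, and it assumes ComfyUI’s `comfy.model_management.vram_state` global and `VRAMState` enum match the versions from this era.

```python
import comfy.model_management as mm

# Hypothetical module-level cache for loaded motion models.
motion_model_cache = {}

def load_motion_model_cached(model_name, loader):
    # The "lowvram check": when ComfyUI has dropped into a low-VRAM
    # state, offload hooks can leave a cached model's weights on the
    # meta device, so skip the cache and reload from disk instead.
    if mm.vram_state in (mm.VRAMState.LOW_VRAM, mm.VRAMState.NO_VRAM):
        return loader(model_name)
    if model_name not in motion_model_cache:
        motion_model_cache[model_name] = loader(model_name)
    return motion_model_cache[model_name]
```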
Perfect, hypothesis confirmed. I’ll work on a potential fix or two (or at least something to help us get closer), and I’ll push that to a separate branch and let you try it out again. I really appreciate your patience in helping me resolve this!
Test 3.5
- Diffusion model: epicrealism_naturalSin
Result 1 (animatediffMotion_v15):
Result 2 (animatediffMotion_v15):
Result 3 (mm-Stabilized_mid, same seed as Result 2):
Result 4 (temporaldiff-v1-animatediff, same seed as Result 2):
Result 5 (animatediffMotion_v15): Error.
CMD:
Yep, exactly, that’s Test 2. And on Test 3, you will instead change the motion model.
I’ll set it to 128 or 192 for the time being; gonna run Test 2 now.
Sorry for the late response, I just woke up. I will get to testing these immediately.
And if you need any clarification for any of them, let me know!
Be sure to follow the instructions to a T, so that I can draw accurate conclusions from what we get printed in the terminal.
@Cedri4 I’m adding you to this thread so that we can chat without me having to retype messages.
As veresmont noticed, model loading in lowvram mode appears to be a possible issue with the new code. For context, Cedri4 is rocking only 3GB of VRAM, while veresmont is rocking 6GB. Looking at Cedri4’s logs, he also drops into low-vram mode with the old code, but does not crash on subsequent generations. So, only the new code is unhappy. What is most weird is that Cedri4 crashes on his second run, while veresmont crashes on his third.
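For anyone following along, here is roughly why a cached model can end up holding unusable weights in lowvram mode. A minimal sketch, assuming accelerate’s `cpu_offload` helper (which installs the `AlignDevicesHook` visible in the traceback above) behaves as in the versions from this era:

```python
import torch
from accelerate import cpu_offload

net = torch.nn.Linear(8, 8)

# cpu_offload attaches an AlignDevicesHook (the accelerate/hooks.py
# frames in the traceback) that stashes the real weights away and
# replaces the module's parameters with storage-less meta tensors.
cpu_offload(net, execution_device=torch.device("cpu"))
print(net.weight.device)  # -> meta

# The real weights are only swapped back in by the hook's pre_forward.
# If a cached module loses that hook state between generations, moving
# its meta parameters fails with the exact error above:
#   NotImplementedError: Cannot copy out of meta tensor; no data!
```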
I would like you guys to try out a couple things for me on the main branch. Prerequisites:
What to do:
TEST 1 (baseline)
TEST 2 (SD model change)
TEST 3 (ADiff model change)
EXTRA TESTS
After we get results for these tests, I will review them, and we can do more tests if needed. This will help me track down the issue immensely; I can’t replicate it on my end, so this is the only way for me to narrow things down.
Of course, I’d be more than happy to.