ComfyUI-AnimateDiff-Evolved: [Rare bug] On second attempt after Queue Prompt, all additional runs have: Error occurred when executing KSampler: 'NoneType' object has no attribute '_parameters'

I’ve been trying to generate videos but after 2 successful generations I get the following error in the console:

!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 633, in forward
    h = forward_timestep_embed(self.middle_block, h, emb, context, transformer_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!
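
(For context: this NotImplementedError is PyTorch's generic complaint when something tries to copy a "meta" tensor, i.e. a tensor that only carries shape/dtype metadata and has no actual data, onto a real device. A minimal stand-alone sketch, not specific to ComfyUI or AnimateDiff, that reproduces the same message:

    import torch

    # A tensor on the "meta" device carries only metadata, no storage.
    weight = torch.empty(4, 4, device="meta")

    # Materializing it on a real device fails with the error seen above:
    # NotImplementedError: Cannot copy out of meta tensor; no data!
    try:
        weight.to("cpu")
    except NotImplementedError as err:
        print(err)

In the traceback it is raised inside an accelerate offload hook, which suggests a previously offloaded module is being reused after its weights were already freed.)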

This is the first time I’ve tried AnimateDiff, so I’m not sure whether it’s something I did wrong on my end or a bigger issue. I’m running this on Windows 11 with a GTX 1660 6GB, 16GB RAM, and a Ryzen 5600X, on the latest ComfyUI. I’ve also never opened a GitHub issue before, so if there’s anything else you need, please let me know.

Here’s the workflow I’m using (I set the latent image size to 256x256 so I could generate quickly while trying to figure out what was going on): workflow

About this issue

  • State: open
  • Created 9 months ago
  • Comments: 69 (41 by maintainers)

Most upvoted comments

Wanted to provide an update: I will have the lowvram check ready in the next couple of days, and then I will re-enable model caching for everyone else (currently, motion models don’t get cached at all).
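
For anyone curious what that check might look like, here is a hypothetical sketch (not the actual implementation), assuming ComfyUI's comfy.model_management module exposes vram_state and the VRAMState enum as it does in recent versions:

    import comfy.model_management as mm

    def motion_model_caching_allowed() -> bool:
        # Hypothetical guard: skip caching whenever ComfyUI is offloading in
        # low/no VRAM mode, since a cached module can come back with data-less
        # (meta) parameters and trigger the error reported in this issue.
        return mm.vram_state not in (mm.VRAMState.LOW_VRAM, mm.VRAMState.NO_VRAM)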

Perfect, hypothesis confirmed. I’ll work on a potential fix or two (or at least something to help us get closer), and I’ll push that to a separate branch and let you try it out again. I really appreciate your patience in helping me resolve this!
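
One hypothetical way to double-check this kind of hypothesis (purely a diagnostic sketch, not anything in the repo) is to test whether a cached motion module still has real data behind its parameters before re-injecting it:

    def has_meta_parameters(module) -> bool:
        # True if any parameter has been left on the meta device, i.e. it has
        # shape/dtype but no underlying data and cannot be moved to the GPU.
        return any(p.is_meta for p in module.parameters())

    # e.g. log has_meta_parameters(motion_module) right before injection;
    # True here would line up with "Cannot copy out of meta tensor; no data!"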

Test 3.5

Diffusion Model: epicrealism_naturalSin

Result 1 (animatediffMotion_v15): aaa_readme_00006_

Result 2 (animatediffMotion_v15): aaa_readme_00007_

Result 3 (mm-Stabilized_mid, same seed as Result 2): aaa_readme_00008_

Result 4 (temporaldiff-v1-animatediff, same seed as Result 2): aaa_readme_00009_

Result 5 (animatediffMotion_v15): Error.

CMD:

D:\apps\AI\COMFYUI2\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32 --preview-method auto --normalvram --use-pytorch-cross-attention --disable-xformers
Total VRAM 6144 MB, total RAM 16307 MB
Forcing FP32, if this improves things please report it.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
Using pytorch cross attention

Import times for custom nodes:
   0.0 seconds: D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['cond_stage_model.transformer.text_model.embeddings.position_ids', 'model_ema.decay', 'model_ema.num_updates'])
[AnimateDiffEvo] - INFO - Loading motion module animatediffMotion_v15.ckpt
loading new
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [01:52<00:00, 11.26s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
Prompt executed in 174.87 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 1763.1319274902344
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:20<00:00,  2.02s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 23.46 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Loading motion module mm-Stabilized_mid.pth
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_mid.pth version v1.
loading new
loading in lowvram mode 1907.7473125457764
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:18<00:00,  1.83s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm-Stabilized_mid.pth version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 36.87 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Loading motion module temporaldiff-v1-animatediff.ckpt
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module temporaldiff-v1-animatediff.ckpt version v1.
loading new
loading in lowvram mode 892.1241903305054
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:19<00:00,  1.92s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module temporaldiff-v1-animatediff.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
Prompt executed in 53.53 seconds
got prompt
2
3
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module animatediffMotion_v15.ckpt version v1.
loading new
loading in lowvram mode 773.8020000457764
  0%|                                                                                           | 0/10 [00:01<?, ?it/s]
[AnimateDiffEvo] - INFO - Ejecting motion module animatediffMotion_v15.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1236, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 161, in animatediff_sample
    return wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_utils.py", line 144, in wrapped_function
    return function_to_wrap(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 741, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 322, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 310, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 527, in sliding_sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 427, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 653, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 71, in forward_timestep_embed
    x = layer(x, context)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 398, in forward
    return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module.py", line 461, in forward
    hidden_states = self.norm(hidden_states)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 104, in pre_forward
    args, kwargs = hook.pre_forward(module, *args, **kwargs)
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "D:\apps\AI\COMFYUI2\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!

Prompt executed in 24.64 seconds

Yep, exactly, that’s Test 2. And on Test 3, you will instead change the motion model.

I’ll set it to 128 or 192 for the time being; gonna run Test 2 now.

Hi, sorry for the late response, I just woke up. I will get to testing these immediately.

@Cedri4 I’m adding you to this thread so that we can chat without me having to retype messages.

As veresmont noticed, model loading in lowvram mode appears to be a possible issue with the new code. For context, Cedri4 is rocking only 3GB of VRAM, while veresmont has 6GB. Looking at Cedri4’s logs, he also drops into lowvram mode when using the old code, but does not crash on subsequent generations. So, only the new code is unhappy. What is most odd is that Ced crashes on his second run, while veresmont crashes on his third.

I would like you guys to try out a couple of things for me on the main branch. Prerequisites:

  • have at least 2 SD1.5-compatible checkpoints to use
  • have at least 2 motion modules to use

What to do:

TEST 1 (baseline)

  1. boot up ComfyUI
  2. load the basic txt2img workflow in the main branch
  3. select an SD model to use. We’ll call this SD model A.
  4. select a motion model to use (anything but 15_v2). We’ll call this ADiff model A.
  5. run the workflow once.
  6. increment the seed
  7. run it again.
  8. EXPECTED BEHAVIOR: Ced should have crashed by now. Veresmont, run it once more so you crash too, and note whether the second image has any visible degradation (or just post them here so I can compare). Copy all command line output and send it my way as TEST 1 Results.
  9. shut down comfy

TEST 2 (SD model change)

  1. do steps 1-5 from TEST 1.
  2. keep the seed the same, but now change to use your second SD model; we’ll call this SD model B. DO NOT CHANGE YOUR MOTION MODEL.
  3. run it again.
  4. EXPECTED BEHAVIOR: this is where things might change. Ced, note whether or not you have crashed. Veresmont, run again and note whether you have crashed. If you have not crashed yet, run it again with the same SD model B (DO NOT CHANGE THE MOTION MODEL) and note the results. Send the command line output as TEST 2 results.
  5. shut down comfy

TEST 3 (ADiff model change)

  1. do everything in TEST 2, but instead of switching SD models from A to B, stick with SD model A the whole time and switch from ADiff model A to ADiff model B after the first run. As before, note any differences and keep running until you get the error. Record these as TEST 3 results.
  2. remember to shut down comfy

EXTRA TESTS

  1. If TEST 2 or TEST 3 (or both) yielded results different from TEST 1, repeat the test(s) that differed, but this time, before running the workflow the nth time (where n is the run that would have caused the crash in TEST 2 or TEST 3), switch the SD model back to A (for TEST 2) or the ADiff model back to A (for TEST 3), and run it. If you don’t crash at this point, switch back to model B (depending on which test you’re in) and run it again. If you still haven’t crashed, keep running without changing models until you crash. Note the results as TEST2ALT and/or TEST3ALT.
  2. shut down comfy

After we get results for these tests, I will review them and we can do more tests if needed. This will help me track down the issue immensely; I can’t replicate this on my end, so this is the only way for me to narrow things down.

Sorry for the late response, I just woke up; I will get to testing these immediately.

And if you need any clarification for any of them, let me know!

Be sure to follow the instructions to a T, so that I can draw accurate conclusions from what gets printed in the terminal.

Of course, I’d be more than happy to