autotrain-advanced: ERROR train has failed due to an exception
I get this error when trying to run the cells. Two hours ago it was working fine.
```
Steps:   0%|          | 0/500 [00:00<?, ?it/s]
> ERROR  train has failed due to an exception:
> ERROR  Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/autotrain/utils.py", line 280, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/__main__.py", line 311, in train
    trainer.train()
  File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/trainer.py", line 404, in train
    model_pred = self._get_model_pred(batch, channels, noisy_model_input, timesteps, bsz)
  File "/usr/local/lib/python3.10/dist-packages/autotrain/trainers/dreambooth/trainer.py", line 302, in _get_model_pred
    model_pred = self.unet(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 659, in forward
    return model_forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 647, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py", line 958, in forward
    sample, res_samples = downsample_block(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_blocks.py", line 1076, in forward
    hidden_states = attn(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/transformer_2d.py", line 303, in forward
    hidden_states = torch.utils.checkpoint.checkpoint(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py", line 251, in checkpoint
    return _checkpoint_without_reentrant(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py", line 432, in _checkpoint_without_reentrant
    output = function(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py", line 218, in forward
    attn_output = self.attn2(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 417, in forward
    return self.processor(
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 952, in __call__
    hidden_states = xformers.ops.memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 326, in _memory_efficient_attention
    return _fMHA.apply(
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 42, in forward
    out, op_ctx = _memory_efficient_attention_forward_requires_grad(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 348, in _memory_efficient_attention_forward_requires_grad
    inp.validate_inputs()
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/common.py", line 112, in validate_inputs
    raise ValueError(
ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32
  query.dtype: torch.float32
  key.dtype  : torch.float16
  value.dtype: torch.float16
```
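For context: xformers validates that query, key, and value share a dtype before dispatching its memory-efficient attention kernel. Here the cross-attention query is fp32 while the key/value (projected from the text-encoder hidden states) are fp16, so the check fails before any kernel runs. A minimal sketch (shapes and values are illustrative, not taken from this run) that triggers the same ValueError:

```python
import torch
import xformers.ops

# Illustrative shapes: (batch, sequence, heads, head_dim)
q = torch.randn(1, 77, 8, 64, device="cuda", dtype=torch.float32)  # query in fp32
k = torch.randn(1, 77, 8, 64, device="cuda", dtype=torch.float16)  # key in fp16
v = torch.randn(1, 77, 8, 64, device="cuda", dtype=torch.float16)  # value in fp16

# Fails validate_inputs() with the ValueError shown above, because the dtypes differ
xformers.ops.memory_efficient_attention(q, k, v)

# Casting key/value to the query dtype (or vice versa) satisfies the check
out = xformers.ops.memory_efficient_attention(q, k.float(), v.float())
```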
About this issue
- State: closed
- Created 8 months ago
- Comments: 19 (7 by maintainers)
I'll take a look at the Colab notebook in that case. The release was only done after the local tests passed. For now, you can also pin autotrain to the previous version!
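For example, in a Colab cell (the version number below is a placeholder, not one named in this thread; substitute the last release that worked for you):

```python
# Placeholder version, shown only to illustrate pinning:
# replace 0.6.1 with the last autotrain-advanced release that worked in your notebook
!pip install -q "autotrain-advanced==0.6.1"
```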
I think it's the warning. Something is off with torchvision, hence the error?
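If the suspicion is a torch/torchvision mismatch pulled in by the new release, one quick check (a generic sketch, not from this thread) is to print the installed versions of the stack in the notebook and compare them against a working environment:

```python
# Print each package's version to spot a mismatched upgrade (e.g. torch vs. torchvision)
import torch, torchvision, diffusers, xformers

for mod in (torch, torchvision, diffusers, xformers):
    print(mod.__name__, mod.__version__)
```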