transformers: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate'

System Info

I am trying to load the Segment Anything Model (SAM) using the transformers pipeline, but this raises the following error: "RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)"

What I am trying to do:

from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, …)
  • My own task or dataset (give details below)

Reproduction

Run these lines:

from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0)

Expected behavior

The model should load as shown in this notebook from the official tutorials: https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 2
  • Comments: 39

Most upvoted comments

Hi @MitchellMonaghan, @Abhranta,

Could you try upgrading the installed version of accelerate in your env: pip install -U accelerate?

You need a more recent version of Accelerate @AzzedineAftiss: pip install --upgrade accelerate.
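The upgrade can be followed by a quick check that the missing symbol now imports; a minimal sketch of that setup step (the one-liner is just an illustrative verification, not part of the original advice):

```shell
# Upgrade accelerate in the current environment, then confirm
# that the symbol transformers needs is actually importable.
pip install --upgrade accelerate
python -c "from accelerate import PartialState; print('PartialState OK')"
```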

Thanks, this resolved the error. It upgraded the accelerate package:

Installing collected packages: accelerate
  Attempting uninstall: accelerate
    Found existing installation: accelerate 0.15.0.dev0
    Uninstalling accelerate-0.15.0.dev0:
      Successfully uninstalled accelerate-0.15.0.dev0
Successfully installed accelerate-0.19.0
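The version jump above is the whole fix: PartialState simply does not exist in accelerate 0.15.x. As a sketch (helper names are hypothetical, and 0.19.0 is just the version this thread's upgrade landed on), a notebook could pre-check the installed version before importing transformers:

```python
# Hypothetical pre-flight check: compare the installed accelerate version
# against the one known to work, before importing transformers.pipelines.
def parse_major_minor(version_string: str) -> tuple[int, int]:
    """Extract (major, minor) from strings like '0.15.0.dev0' or '0.19.0'."""
    major, minor = version_string.split(".")[:2]
    return (int(major), int(minor))

def needs_upgrade(installed: str, minimum: str = "0.19.0") -> bool:
    """True if the installed accelerate is older than the minimum we want."""
    return parse_major_minor(installed) < parse_major_minor(minimum)
```

With the versions from the log above, `needs_upgrade("0.15.0.dev0")` is True and `needs_upgrade("0.19.0")` is False.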

Thank you 🤗 so much for the amazing work, making it so easy to try and learn about the best models in the world. 🙏


Ref: https://huggingface.co/blog/falcon

Got the same error as this issue while running the Falcon tutorial by HuggingFace on Kaggle. Came across this thread via a Google search (not from an LLM yet 😉) and had to make the following changes to get the Falcon tutorial to work on Kaggle notebooks:

pip install -q --upgrade accelerate einops xformers
  • accelerate needs to be upgraded, as mentioned in this thread.
  • The additional packages einops and xformers need to be installed as well.

My Notebook on Kaggle: https://www.kaggle.com/bkowshik/llms-models-falcon

NOTE: Had to rerun a couple of times given memory issues on Kaggle, so one needs to keep 🤞

Write a poem about India

A land of mystic and ancient lore,
where sacred rivers flow and mountains soar.
In India, the sun is in a brilliant glow,
cascading the hues that paint the sky like a magical show.

From Kanyakumari to Kashmir,
the beauty of India never fails to garner.
Its rich cultural heritage with its myriad hues,
and a kaleidoscope of colors, India is blessed.

Tigers roam in the dense forests,
cascading sound of the Ganges, and its gentle whispers.
The intricate handloom woven sarees in red,
a symphony of colors in India's head.

The holy pilgrimage of the sacred mountains,
the golden glow of Diwali, a festival of lights.
India is the land of the brave and true,
a melting pot of religions, cultures and hues!

@sgugger @amyeroberts Should we close this issue then?

Complete logs with warning messages printed as part of the output for reference.

Downloading (…)okenizer_config.json: 100%
220/220 [00:00<00:00, 14.6kB/s]
Downloading (…)/main/tokenizer.json:
2.73M/? [00:00<00:00, 6.12MB/s]
Downloading (…)cial_tokens_map.json: 100%
281/281 [00:00<00:00, 22.8kB/s]
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl6StatusC1EN10tensorflow5error4CodeESt17basic_string_viewIcSt11char_traitsIcEENS_14SourceLocationE']
  warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZTVN10tensorflow13GcsFileSystemE']
  warnings.warn(f"file system plugins are not loaded: {e}")
Downloading (…)lve/main/config.json: 100%
667/667 [00:00<00:00, 33.3kB/s]
Downloading (…)/configuration_RW.py:
2.61k/? [00:00<00:00, 165kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (…)main/modelling_RW.py:
47.5k/? [00:00<00:00, 2.70MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-7b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (…)model.bin.index.json:
16.9k/? [00:00<00:00, 850kB/s]
Downloading shards: 100%
2/2 [01:13<00:00, 34.78s/it]
Downloading (…)l-00001-of-00002.bin: 100%
9.95G/9.95G [00:49<00:00, 278MB/s]
Downloading (…)l-00002-of-00002.bin: 100%
4.48G/4.48G [00:24<00:00, 169MB/s]
Loading checkpoint shards: 100%
2/2 [01:15<00:00, 35.25s/it]
Downloading (…)neration_config.json: 100%
111/111 [00:00<00:00, 5.67kB/s]
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].

This error suddenly popped up on Kaggle. Any ideas? I already tried installing accelerate, transformers and datasets as the first line executed in each notebook.

ImportError                               Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)
   1171 try:
-> 1172     return importlib.import_module("." + module_name, self.__name__)
   1173 except Exception as e:

File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
    125         level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)

File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)

File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)

File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)

File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)

File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)

File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py:44
     35 from ..utils import (
     36     HUGGINGFACE_CO_RESOLVE_ENDPOINT,
     37     is_kenlm_available,
   (...)
     42     logging,
     43 )
---> 44 from .audio_classification import AudioClassificationPipeline
     45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/audio_classification.py:21
     20 from ..utils import add_end_docstrings, is_torch_available, logging
---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline
     24 if is_torch_available():

File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:36
     35 from ..image_processing_utils import BaseImageProcessor
---> 36 from ..modelcard import ModelCard
     37 from ..models.auto.configuration_auto import AutoConfig

File /opt/conda/lib/python3.10/site-packages/transformers/modelcard.py:48
     32 from .models.auto.modeling_auto import (
     33     MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,
     34     MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,
   (...)
     46     MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES,
     47 )
---> 48 from .training_args import ParallelMode
     49 from .utils import (
     50     MODEL_CARD_NAME,
     51     cached_file,
   (...)
     57     logging,
     58 )

File /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:67
     66 if is_accelerate_available():
---> 67     from accelerate import PartialState
     68     from accelerate.utils import DistributedType

ImportError: cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
File <timed exec>:2

File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)

File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1162, in _LazyModule.__getattr__(self, name)
   1160     value = self._get_module(name)
   1161 elif name in self._class_to_module.keys():
-> 1162     module = self._get_module(self._class_to_module[name])
   1163     value = getattr(module, name)
   1164 else:

File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1174, in _LazyModule._get_module(self, module_name)
   1172     return importlib.import_module("." + module_name, self.__name__)
   1173 except Exception as e:
-> 1174     raise RuntimeError(
   1175         f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
   1176         f" traceback):\n{e}"
   1177     ) from e

RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)
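For context, the RuntimeError at the bottom comes from transformers' lazy-import machinery: submodules are only imported on first attribute access, and any failure is re-raised as a RuntimeError chained to the original ImportError, which is why the accelerate error surfaces under transformers.pipelines. A stripped-down sketch of that pattern (simplified from the real _LazyModule; the class and names here are illustrative, not the actual implementation):

```python
import importlib


class LazyModule:
    """Sketch of a lazy module wrapper: attributes are resolved to
    submodules on first access, and import failures are re-raised as
    RuntimeError with the original exception chained via `from e`."""

    def __init__(self, name, class_to_module):
        self._name = name                      # package name, e.g. "transformers"
        self._class_to_module = class_to_module  # {"Pipeline": "pipelines", ...}

    def __getattr__(self, attr):
        if attr not in self._class_to_module:
            raise AttributeError(attr)
        module_name = self._class_to_module[attr]
        try:
            # Import the submodule relative to the package.
            module = importlib.import_module("." + module_name, self._name)
        except Exception as e:
            # Chain the real cause, mirroring the traceback above.
            raise RuntimeError(
                f"Failed to import {self._name}.{module_name} because of the "
                f"following error (look up to see its traceback):\n{e}"
            ) from e
        return getattr(module, attr)
```

This is why upgrading accelerate fixes an error that is reported against transformers.pipelines: the underlying ImportError happens deep inside the lazily imported chain.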

@RayGone Have posted details of the fix here: https://github.com/huggingface/transformers/issues/23340#issuecomment-1606719159

Thanks, will try that. I didn't try it before because I wasn't using xformers directly, but I guess it's used by some other dependency.

@sgugger @amyeroberts As annoying as it is, pipeline is still not working on Kaggle, as seen in the screenshot above.

It didn't work even when I did this:

!pip install transformers tokenizers datasets huggingface_hub --upgrade -q
!pip install accelerator --upgrade -q
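Note that the second command installs `accelerator`, which appears to be an unrelated PyPI package, not the Hugging Face library that transformers needs. A corrected version of those cells (an editorial suggestion, not from the thread):

```shell
# The Hugging Face library is named "accelerate", not "accelerator".
pip install --upgrade -q transformers tokenizers datasets huggingface_hub
pip install --upgrade -q accelerate
```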

Thanks @bkowshik it worked!