SHARK: *539.exe always errors out

Wget is working correctly, as per a suggestion in a similar issue. (Win11 for Workstations, 32 GB RAM, RX 6800 XT)

Command prompt output below:

C:\SD>shark_sd_20230216_539.exe
shark_tank local cache is located at C:\Users\consolation\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080

To create a public link, set share=True in launch().
Found device AMD Radeon RX 6800 XT. Using target triple rdna2-unknown-windows.
Using tuned models for Linaqruf/anything-v3.0/fp16/vulkan://00000000-2d00-0000-0000-000000000000.
Downloading (…)cheduler_config.json: 100%|█████████████████████| 341/341 [00:00<00:00, 341kB/s]
huggingface_hub\file_download.py:129: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\consolation\.cache\huggingface\diffusers. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in __init__. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in torch.jit.Attribute.
  warnings.warn("The TorchScript type system doesn't support "
No vmfb found. Compiling and saving to C:\SD\euler_scale_model_input_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in C:\SD\euler_scale_model_input_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
No vmfb found. Compiling and saving to C:\SD\euler_step_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in C:\SD\euler_step_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Inferring base model configuration.
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

.
Downloading (…)_pytorch_model.bin: 100%|████████████████| 3.44G/3.44G [01:26<00:00, 39.9MB/s]
Downloading (…)ain/unet/config.json: 100%|█████████████████████| 901/901 [00:00<00:00, 901kB/s]
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

.
torch\fx\node.py:250: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
  warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from C:\Users\consolation\.local/shark_tank/configs/unet_winograd_vulkan.json
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

.
Retrying with a different base model configuration
Traceback (most recent call last):
  File "gradio\routes.py", line 374, in run_predict
  File "gradio\blocks.py", line 1017, in process_api
  File "gradio\blocks.py", line 835, in call_function
  File "anyio\to_thread.py", line 31, in run_sync
  File "anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
  File "anyio\_backends\_asyncio.py", line 867, in run
  File "apps\stable_diffusion\scripts\txt2img.py", line 116, in txt2img_inf
  File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 220, in from_pretrained
  File "apps\stable_diffusion\src\models\model_wrappers.py", line 383, in __call__
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
Keyboard interruption in main thread... closing server.

Happy to try any suggestions, TIA

About this issue

  • State: closed
  • Created a year ago
  • Comments: 19 (8 by maintainers)

Most upvoted comments

So maybe it’s the local shark_tank folder not being created properly? Can you try setting it to another directory with the --local_tank_cache= flag?
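For reference, relaunching with a fresh cache directory might look like the sketch below; the C:\SD\tank_cache path is just an example and any writable directory should work:

```shell
:: Create a fresh, writable tank cache directory (example path)
mkdir C:\SD\tank_cache

:: Relaunch SHARK, overriding the default %USERPROFILE%\.local\shark_tank location
shark_sd_20230216_539.exe --local_tank_cache=C:\SD\tank_cache
```

SHARK will then download and compile its artifacts under the new directory instead of the default one, which sidesteps any permission or creation problems with the original cache folder.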

All models, including custom ones, work correctly now btw. Should I mark this as closed? I’m guessing a bunch of the other threads can be fixed with this. Also, glad I could help - if you ever need a crash test dummy for builds, let me know.