
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. #555

@theshi-1128

Description


```
(ls) liu@liu-System-Product-Name:~/ls/ControlNet-main$ python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt
logging improved.
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "/home/liu/ls/ControlNet-main/tool_add_control.py", line 27, in <module>
    model = create_model(config_path='./models/cldm_v15.yaml')
  File "/home/liu/ls/ControlNet-main/cldm/model.py", line 26, in create_model
    model = instantiate_from_config(config.model).cpu()
  File "/home/liu/ls/ControlNet-main/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/liu/ls/ControlNet-main/cldm/cldm.py", line 311, in __init__
    super().__init__(*args, **kwargs)
  File "/home/liu/ls/ControlNet-main/ldm/models/diffusion/ddpm.py", line 565, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/home/liu/ls/ControlNet-main/ldm/models/diffusion/ddpm.py", line 632, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/home/liu/ls/ControlNet-main/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/liu/ls/ControlNet-main/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/home/liu/anaconda3/envs/ls/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
```

How can I fix this?
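The error message itself names the two usual causes: either a local directory whose name matches the Hub id `openai/clip-vit-large-patch14` is shadowing the remote repo, or the tokenizer files simply could not be downloaded (e.g. no network access from the machine). A minimal, stdlib-only check for the first case — run it from the directory you launch `tool_add_control.py` in; the printed messages are my own wording, not output from transformers:

```python
import os

# A local directory named exactly like the Hub model id shadows the
# remote repo when transformers resolves the identifier.
repo_id = "openai/clip-vit-large-patch14"

if os.path.isdir(repo_id):
    print(f"Local directory '{repo_id}' shadows the Hub id; rename or move it.")
else:
    print("No shadowing directory found; the failure is likely a download/"
          "network problem (the tokenizer files could not be fetched).")
```

If it turns out to be a network problem, a common workaround is to download the tokenizer files on a machine with internet access and point the `version` argument in `ldm/modules/encoders/modules.py` at that local folder instead of the Hub id.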
