Incorrect model architecture for PathGen-CLIP-L

#1
by lijinda - opened

The current usage example for loading PathGen-CLIP-L specifies the wrong model architecture.

The code snippet is:

model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-16', pretrained='path/pathgen-clip-l.pt')  # PathGen-CLIP-L
tokenizer = open_clip.get_tokenizer('ViT-B-16')

However, PathGen-CLIP-L is based on the ViT-L-14 architecture, not ViT-B-16, so I think the snippet should be updated as follows:

model, _, preprocess = open_clip.create_model_and_transforms('ViT-L-14', pretrained='path/pathgen-clip-l.pt')  # PathGen-CLIP-L
tokenizer = open_clip.get_tokenizer('ViT-L-14')
