Model Release
#1 — opened by Saarstriker
Hi!
Interesting paper — I am looking into testing the model and the benchmarks. However, I noticed that you only uploaded the weights, and I cannot load them as described, since the model architecture itself is missing.
Best
Hi,
What's the error when loading? The model is in .pt format, so you can simply use torch.load. See the README or this example: https://github.com/SerendipityOneInc/look-bench/blob/main/examples/01_load_grlite_model.py
Hey!
The model is loaded as an OrderedDict — i.e. only the state dict in my case.
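For reference, here is a minimal sketch of why that happens, using a tiny stand-in layer (file names and the layer are just for illustration, not the actual GR-Lite code): torch.load returns either a full module or a bare OrderedDict, depending on what was saved.

```python
import torch
import torch.nn as nn

# Tiny stand-in layer, just to illustrate the two save formats
tiny = nn.Linear(4, 2)

# Saving the whole module pickles the class together with its weights...
torch.save(tiny, "full_module.pt")
# ...while saving state_dict() stores only an OrderedDict of tensors.
torch.save(tiny.state_dict(), "weights_only.pt")

# weights_only=False is needed to unpickle a full module
# (weights_only=True became the default in torch >= 2.6)
full = torch.load("full_module.pt", weights_only=False)
sd = torch.load("weights_only.pt")

print(type(full))  # the nn.Linear module itself
print(type(sd))    # collections.OrderedDict - the symptom described above
```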
```python
import torch
from huggingface_hub import hf_hub_download

# Download the model checkpoint from Hugging Face
model_name = "srpone/gr-lite"
print(f"Downloading model from {model_name}...")
model_path = hf_hub_download(
    repo_id=model_name,
    filename="gr_lite.pt"
)

# Load the PyTorch model
print("Loading model checkpoint...")
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.load(model_path, map_location=device)

# Set to evaluation mode
if hasattr(model, 'eval'):
    model.eval()
print(f"Model loaded on {device}")
print(f"✅ GR-Lite model loaded from {model_name}")
print(f"Model type: {type(model)}")

# Check if model has expected methods
if hasattr(model, 'search'):
    print("Model has .search() method for feature extraction")
elif hasattr(model, 'forward') or hasattr(model, '__call__'):
    print("Model has forward/call method")

# 4. Inspect model structure
print("\n[4/5] Inspecting model structure...")
print(f"Model type: {type(model)}")

# Check available methods
if hasattr(model, '__dict__'):
    print(f"Model attributes: {list(model.__dict__.keys())[:10]}")

# Try to understand the model interface
if hasattr(model, 'search'):
    print("✅ Model has .search() method")
    print("   Usage: model.search(image_paths=images, feature_dim=256)")
elif hasattr(model, 'encode'):
    print("✅ Model has .encode() method")
elif hasattr(model, 'forward'):
    print("✅ Model has .forward() method")
else:
    print("⚠️ Model interface unclear - may need custom preprocessing")

print(type(model))
print(model)
```
```
Downloading model from srpone/gr-lite...
Loading model checkpoint...
Model loaded on cpu
✅ GR-Lite model loaded from srpone/gr-lite
Model type: <class 'collections.OrderedDict'>

[4/5] Inspecting model structure...
Model type: <class 'collections.OrderedDict'>
Model attributes: ['_metadata']
⚠️ Model interface unclear - may need custom preprocessing
<class 'collections.OrderedDict'>
OrderedDict([('model.model.embeddings.cls_token', tensor([[[ 0.2212, 0.0030, 0.0093, ..., 0.0280, -0.0750, 0.0042]]])),
             ('model.model.embeddings.mask_token', tensor([[[0., 0., ..., 0.]]])),
             ('model.model.embeddings.register_tokens', tensor([...])),
             ('model.model.embeddings.patch_embeddings.weight', tensor([...])),
             ...,
             ('model.model.layer.23.mlp.down_proj.bias', t
```
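The key names in the dump above confirm the checkpoint is a bare state_dict, so it has to be loaded into an instantiated copy of the architecture before use. A sketch with a hypothetical stand-in module (the real GR-Lite class would have to come from the authors' code, which is what this thread reports as missing):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in: the real GR-Lite architecture class is what the
# repo is currently missing, per the discussion above.
class StandIn(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 2)

# Simulate a weights-only checkpoint like gr_lite.pt
torch.save(StandIn().state_dict(), "ckpt.pt")

# torch.load gives back only an OrderedDict of tensors...
state = torch.load("ckpt.pt", map_location="cpu")

# ...which must be loaded into a constructed module before use
model = StandIn()
missing, unexpected = model.load_state_dict(state, strict=True)
model.eval()
print(type(state).__name__, len(missing), len(unexpected))
```

With strict=True, load_state_dict raises if any checkpoint key does not match the module, which is a quick way to verify a reconstructed architecture against the key names printed above.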