Neroism8422/mol-instruct-base-llava-untuned
Tags: Safetensors · llava · custom_code
Branch: main · Total size: 17.4 GB
1 contributor · History: 5 commits

Latest commit by Neroism8422 (aea55e2, verified, 10 months ago): "This is the basic LLaVA version of the original mol-instruct model, not fine-tuned, with only the CLIP vision encoder added."
Files (all last modified 10 months ago, in the commit above):

.gitattributes                      1.57 kB
chat_template.jinja                 389 Bytes
config.json                         4.8 kB
model-00001-of-00004.safetensors    4.98 GB    (xet)
model-00002-of-00004.safetensors    5 GB       (xet)
model-00003-of-00004.safetensors    4.92 GB    (xet)
model-00004-of-00004.safetensors    2.47 GB    (xet)
model.safetensors.index.json        70.1 kB
modeling_llava.py                   9.79 kB
preprocessor_config.json            232 Bytes
processing_llava.py                 2.36 kB
special_tokens_map.json             325 Bytes
tokenizer.json                      17.2 MB    (xet)
tokenizer_config.json               50.8 kB