Image-to-Text
Transformers
Safetensors
English
qwen2_vl
image-text-to-text
vision-language-model
document-understanding
handwritten-text
insurance-forms
vqa
qwen2-vl
lora
qlora
unsloth
medical-forms
ocr-free
text-generation-inference
Instructions to use solvrays/scribegene-llm-v0.4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use solvrays/scribegene-llm-v0.4 with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="solvrays/scribegene-llm-v0.4")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("solvrays/scribegene-llm-v0.4")
model = AutoModelForImageTextToText.from_pretrained("solvrays/scribegene-llm-v0.4")
```

- Notebooks
- Google Colab
- Kaggle
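Once the processor and model from the Transformers snippet above are loaded, a request is built as a chat message list. A minimal sketch of the message structure the Qwen2-VL chat template expects (the image URL and prompt below are illustrative assumptions, not part of this model card):

```python
# Build a chat message list in the structure Qwen2-VL processors expect:
# each message has a role and a list of content parts (image and/or text).
def build_vision_messages(image_url: str, prompt: str) -> list:
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_vision_messages(
    "https://example.com/claim-form.png",  # placeholder URL
    "Transcribe all handwritten fields on this form.",
)

# With the processor/model loaded as shown above, this would typically flow as:
#   text = processor.apply_chat_template(messages, add_generation_prompt=True)
#   inputs = processor(text=[text], images=[image], return_tensors="pt")
#   output_ids = model.generate(**inputs, max_new_tokens=256)
print(messages[0]["role"])
```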
- Local Apps
- Unsloth Studio
How to use solvrays/scribegene-llm-v0.4 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for solvrays/scribegene-llm-v0.4 to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for solvrays/scribegene-llm-v0.4 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for solvrays/scribegene-llm-v0.4 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="solvrays/scribegene-llm-v0.4",
    max_seq_length=2048,
)
```
File size: 1,259 Bytes · revision b997e92

{
"image_processor": {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"merge_size": 2,
"patch_size": 14,
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
},
"processor_class": "Qwen2VLProcessor",
"video_processor": {
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"do_sample_frames": false,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_frames": 768,
"merge_size": 2,
"min_frames": 4,
"patch_size": 14,
"resample": 3,
"rescale_factor": 0.00392156862745098,
"return_metadata": false,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2,
"video_processor_type": "Qwen2VLVideoProcessor"
}
}
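The `size` bounds in this config imply a per-image visual-token budget. A small sketch of the arithmetic, assuming the Qwen2-VL convention that `shortest_edge`/`longest_edge` here are total-pixel bounds and that each visual token after patch merging covers `patch_size * merge_size` pixels per side:

```python
# Sanity-check the numeric relationships in the processor config above.
patch_size = 14
merge_size = 2
min_pixels = 3136        # size.shortest_edge
max_pixels = 12845056    # size.longest_edge
rescale_factor = 0.00392156862745098

# Each merged visual token covers a 28x28 pixel region (14 * 2 per side).
pixels_per_token = (patch_size * merge_size) ** 2
min_tokens = min_pixels // pixels_per_token
max_tokens = max_pixels // pixels_per_token

# rescale_factor is simply 1/255: it maps uint8 pixel values into [0, 1]
# before mean/std normalization with image_mean and image_std.
assert abs(rescale_factor - 1 / 255) < 1e-12

print(pixels_per_token, min_tokens, max_tokens)
```

Under these assumptions, an image is resized so that it produces between 4 and 16,384 visual tokens, which is why very large documents are downscaled before patch extraction.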