solvrays/scribegene-llm-v0.4

Image-to-Text
Transformers
Safetensors
English
qwen2_vl
image-text-to-text
vision-language-model
document-understanding
handwritten-text
insurance-forms
vqa
qwen2-vl
lora
qlora
unsloth
medical-forms
ocr-free
Eval Results (legacy)
text-generation-inference

Instructions for using solvrays/scribegene-llm-v0.4 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use solvrays/scribegene-llm-v0.4 with Transformers:

    # Use a pipeline as a high-level helper
    # Warning: the "image-to-text" pipeline type is no longer supported in Transformers v5.
    # Either load the model directly (see below) or downgrade to v4.x with:
    #   pip install "transformers<5.0.0"
    from transformers import pipeline
    
    pipe = pipeline("image-to-text", model="solvrays/scribegene-llm-v0.4")
    
    # Load model directly
    from transformers import AutoProcessor, AutoModelForImageTextToText
    
    processor = AutoProcessor.from_pretrained("solvrays/scribegene-llm-v0.4")
    model = AutoModelForImageTextToText.from_pretrained("solvrays/scribegene-llm-v0.4")
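    Once the processor and model are loaded, inference on a document image follows the usual Qwen2-VL chat pattern. The sketch below is illustrative, not from the model card: `build_messages` and `run` are helpers written here, and the prompt text, image path, and generation settings are assumptions.

    ```python
    # Minimal inference sketch for solvrays/scribegene-llm-v0.4 (Qwen2-VL-based).
    # build_messages and run are illustrative helpers, not library APIs; the
    # prompt text and "form.png" path are assumptions, not from the model card.
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForImageTextToText

    MODEL_ID = "solvrays/scribegene-llm-v0.4"


    def build_messages(question: str) -> list:
        """Qwen2-VL-style chat turn: one image placeholder plus a text question."""
        return [{"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": question},
        ]}]


    def run(image_path: str, question: str = "Transcribe every field on this form.") -> str:
        """Load the checkpoint (~16.6 GB) and transcribe one document image."""
        processor = AutoProcessor.from_pretrained(MODEL_ID)
        model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

        prompt = processor.apply_chat_template(build_messages(question),
                                               add_generation_prompt=True)
        inputs = processor(text=prompt, images=Image.open(image_path),
                           return_tensors="pt").to(model.device)

        output_ids = model.generate(**inputs, max_new_tokens=256)
        # Decode only the newly generated tokens, not the echoed prompt.
        new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
        return processor.batch_decode(new_tokens, skip_special_tokens=True)[0]


    # Example: print(run("form.png"))
    ```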
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • Unsloth Studio

    How to use solvrays/scribegene-llm-v0.4 with Unsloth Studio:

    Install Unsloth Studio (macOS, Linux, WSL)
    curl -fsSL https://unsloth.ai/install.sh | sh
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for solvrays/scribegene-llm-v0.4 to start chatting
    Install Unsloth Studio (Windows)
    irm https://unsloth.ai/install.ps1 | iex
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for solvrays/scribegene-llm-v0.4 to start chatting
    Using HuggingFace Spaces for Unsloth
    # No setup required
    # Open https://huggingface.co/spaces/unsloth/studio in your browser
    # Search for solvrays/scribegene-llm-v0.4 to start chatting
    Load model with FastModel
    # First install Unsloth: pip install unsloth
    from unsloth import FastModel
    
    model, tokenizer = FastModel.from_pretrained(
        model_name="solvrays/scribegene-llm-v0.4",
        max_seq_length=2048,
    )
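    The objects returned by `FastModel.from_pretrained` behave like standard Transformers models, so a text-only smoke test can reuse the familiar `generate` API. This is a sketch under that assumption; `strip_prompt` and `generate_text` are illustrative helpers, and image inputs would instead go through the processor as shown in the Transformers section.

    ```python
    # Sketch: text-only generation with the objects FastModel.from_pretrained returns.
    # Assumptions: unsloth is installed and the returned model/tokenizer follow the
    # standard Transformers interface; strip_prompt is an illustrative helper.


    def strip_prompt(decoded: str, prompt: str) -> str:
        """Drop the echoed prompt from a decoded generation, if it is present."""
        return decoded[len(prompt):].lstrip() if decoded.startswith(prompt) else decoded


    def generate_text(prompt: str, max_new_tokens: int = 64) -> str:
        """Load via Unsloth and generate; downloads the ~16.6 GB checkpoint."""
        from unsloth import FastModel  # deferred so the helper above stays importable

        model, tokenizer = FastModel.from_pretrained(
            model_name="solvrays/scribegene-llm-v0.4",
            max_seq_length=2048,
        )
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
        return strip_prompt(tokenizer.decode(out[0], skip_special_tokens=True), prompt)


    # Example: print(generate_text("List the sections of a typical insurance claim form."))
    ```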
scribegene-llm-v0.4
16.6 GB
  • 1 contributor
History: 2 commits
singtan
Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune
b997e92 verified about 13 hours ago
  • .gitattributes
    1.57 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • README.md
    5.94 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • chat_template.jinja
    1.02 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • config.json
    2.36 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • generation_config.json
    237 Bytes
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • model.safetensors
    16.6 GB
    xet
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • processor_config.json
    1.26 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • tokenizer.json
    11.4 MB
    xet
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago
  • tokenizer_config.json
    2.98 kB
    Upload MDF form reader: Qwen2-VL-7B + QLoRA fine-tune about 13 hours ago