Alogotron/Qwen2.5-3B-JSON-StructuredOutput

Tags: Text Generation · PEFT · Safetensors · English · lora · json · structured-output · qwen2 · sft · trl · conversational

Instructions for using Alogotron/Qwen2.5-3B-JSON-StructuredOutput with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • PEFT

    How to use Alogotron/Qwen2.5-3B-JSON-StructuredOutput with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the base model, then attach the LoRA adapter on top of it.
    base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
    model = PeftModel.from_pretrained(base_model, "Alogotron/Qwen2.5-3B-JSON-StructuredOutput")

    # The adapter repo ships its own tokenizer.json / tokenizer_config.json.
    tokenizer = AutoTokenizer.from_pretrained("Alogotron/Qwen2.5-3B-JSON-StructuredOutput")
  • Notebooks
  • Google Colab
  • Kaggle
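Once the adapter is attached, the model can be prompted through the chat template shipped in this repo (chat_template.jinja) and its reply parsed as JSON. The sketch below is an assumption about typical usage, not taken from the model card: `generate_json` is a hypothetical helper (it needs the model weights downloaded, so it is defined but not invoked here), while `extract_json` is plain post-processing that pulls a JSON object out of model text.

```python
import json


def extract_json(text: str):
    """Return the first balanced top-level JSON object in `text`, or None.

    Structured-output models usually emit a single JSON object, but it may
    be wrapped in markdown fences or stray prose; scan for balanced braces.
    """
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break
        start = text.find("{", start + 1)
    return None


def generate_json(prompt: str):
    """Hypothetical end-to-end call; requires the model weights (not run here)."""
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
    model = PeftModel.from_pretrained(base, "Alogotron/Qwen2.5-3B-JSON-StructuredOutput")
    tokenizer = AutoTokenizer.from_pretrained("Alogotron/Qwen2.5-3B-JSON-StructuredOutput")

    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=256)
    reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return extract_json(reply)


print(extract_json('```json\n{"name": "Ada", "age": 36}\n```'))
# → {'name': 'Ada', 'age': 36}
```

Parsing defensively like this is useful because even JSON-tuned adapters can occasionally wrap the object in code fences or trailing text.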
Qwen2.5-3B-JSON-StructuredOutput
71.4 MB
  • 1 contributor
History: 2 commits
Latest commit by 2reb: "Upload folder using huggingface_hub" (84b896d, verified, 2 months ago)
  • .gitattributes (1.57 kB) · Upload folder using huggingface_hub · 2 months ago
  • README.md (1.29 kB) · Upload folder using huggingface_hub · 2 months ago
  • adapter_config.json (1.05 kB) · Upload folder using huggingface_hub · 2 months ago
  • adapter_model.safetensors (59.9 MB) · Upload folder using huggingface_hub · 2 months ago
  • chat_template.jinja (2.51 kB) · Upload folder using huggingface_hub · 2 months ago
  • tokenizer.json (11.4 MB) · Upload folder using huggingface_hub · 2 months ago
  • tokenizer_config.json (665 Bytes) · Upload folder using huggingface_hub · 2 months ago
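The adapter_config.json listed above records the LoRA hyperparameters PEFT needs to rebuild the adapter. A minimal sketch of inspecting such a file follows; the field values are illustrative (standard PEFT LoraConfig keys), not the actual contents of this repo's config:

```python
import json

# Illustrative adapter_config.json contents; the real file in this repo
# may differ (these field names are standard PEFT LoraConfig keys).
sample_config = """{
  "peft_type": "LORA",
  "base_model_name_or_path": "Qwen/Qwen2.5-3B-Instruct",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
  "task_type": "CAUSAL_LM"
}"""

config = json.loads(sample_config)

# For standard (non-rslora) LoRA, the effective scaling factor is lora_alpha / r.
scaling = config["lora_alpha"] / config["r"]
print(f"rank={config['r']}, alpha={config['lora_alpha']}, scaling={scaling}")
print("adapted modules:", ", ".join(config["target_modules"]))
```

Checking `base_model_name_or_path` against the base model you load is a quick way to catch adapter/base mismatches before inference.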