
Vezora/WizardOrca-7bv2-lora

PEFT

Instructions to use Vezora/WizardOrca-7bv2-lora with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • PEFT

    How to use Vezora/WizardOrca-7bv2-lora with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM
    
    # Load the Llama-2-7b-chat base model, then attach the LoRA adapter on top.
    base_model = AutoModelForCausalLM.from_pretrained("models/Llama-2-7b-chat-hf")
    model = PeftModel.from_pretrained(base_model, "Vezora/WizardOrca-7bv2-lora")
  • Notebooks
  • Google Colab
  • Kaggle
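A sketch of an end-to-end generation run with this adapter, assuming the Hub id `meta-llama/Llama-2-7b-chat-hf` for the base model and an Alpaca-style prompt template; neither is confirmed by this repo (the actual template ships in `training_prompt.json`), so treat both as placeholders:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an Alpaca-style template.

    Assumed for illustration only; check this repo's training_prompt.json
    for the template the adapter was actually trained with.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def main() -> None:
    # Heavy imports live inside main() so build_prompt stays importable
    # without torch/transformers/peft installed.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = "meta-llama/Llama-2-7b-chat-hf"  # assumed base; or a local checkout
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(
        base, torch_dtype=torch.float16, device_map="auto"
    )
    # Attach the LoRA adapter weights to the base model.
    model = PeftModel.from_pretrained(model, "Vezora/WizardOrca-7bv2-lora")

    inputs = tokenizer(
        build_prompt("Explain LoRA in one sentence."), return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Call `main()` to run the full pipeline; expect a one-time download of the 7B base weights plus the 268 MB adapter.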
WizardOrca-7bv2-lora
268 MB
  • 1 contributor
History: 9 commits
Vezora
Update README.md
eabcfbe almost 3 years ago
  • .gitattributes
    1.52 kB
    initial commit almost 3 years ago
  • README.md
    834 Bytes
    Update README.md almost 3 years ago
  • adapter_config.json
    427 Bytes
    Upload 6 files almost 3 years ago
  • adapter_model.bin

    Detected Pickle imports (3)

    • "torch.FloatStorage",
    • "torch._utils._rebuild_tensor_v2",
    • "collections.OrderedDict"


    268 MB
    Upload 6 files almost 3 years ago
  • training_log.json
    456 Bytes
    Upload 6 files almost 3 years ago
  • training_parameters.json
    736 Bytes
    Upload 6 files almost 3 years ago
  • training_prompt.json
    481 Bytes
    Upload 6 files almost 3 years ago
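The pickle-import warning on `adapter_model.bin` comes from scanning the file's pickle opcode stream for the globals it would import when loaded, since loading an untrusted pickle can execute arbitrary constructors. A minimal stdlib-only sketch of that kind of scan (not Hugging Face's actual scanner) using `pickletools`:

```python
import pickletools


def pickle_globals(data: bytes) -> set[str]:
    """Return every module.attribute a pickle stream would import on load."""
    found: set[str] = set()
    strings: list[str] = []  # recent string pushes, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            # Protocols <= 3 encode the target as "module attribute".
            found.add(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+ pushes module and attribute as separate strings.
            found.add(f"{strings[-2]}.{strings[-1]}")
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return found
```

For a torch checkpoint like `adapter_model.bin`, the pickle sits inside a zip archive (typically the `data.pkl` member), so a real scanner extracts that member first. When loading untrusted `.bin` files, `torch.load(..., weights_only=True)` is the safer route, as it refuses arbitrary globals outright.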