
EmbeddingStudio/query-parser-saiga-mistral-7b-lora

Text Generation
PEFT
Safetensors
Russian
mistral
saiga
search-queries
instruct-fine-tuned
search-queries-parser
zero-shot
llm
instruct
query parsing
Synthetic

Instructions for using EmbeddingStudio/query-parser-saiga-mistral-7b-lora with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • PEFT

    How to use EmbeddingStudio/query-parser-saiga-mistral-7b-lora with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    # Load the base model first, then attach the LoRA adapter weights on top of it.
    base_model = AutoModelForCausalLM.from_pretrained("Open-Orca/Mistral-7B-OpenOrca")
    model = PeftModel.from_pretrained(base_model, "EmbeddingStudio/query-parser-saiga-mistral-7b-lora")
  • Notebooks
  • Google Colab
  • Kaggle
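A query-parser model like this one is typically expected to emit structured filters (e.g. JSON) that downstream search code must extract from the raw generation. The exact output schema for this adapter should be verified in its README; the sketch below assumes a JSON list of `{"Name": ..., "Value": ...}` objects purely for illustration, and the `parse_filters` helper is hypothetical, not part of the model's API:

```python
import json

# Hypothetical raw model output after model.generate() + tokenizer.decode().
# The actual schema for this adapter must be checked in the model card.
raw_output = 'Response: [{"Name": "Brand", "Value": "Nike"}] <eos>'

def parse_filters(raw: str):
    """Extract the first JSON array from the model's text, tolerating
    surrounding prompt echoes or special tokens; return [] on failure."""
    start, end = raw.find("["), raw.rfind("]") + 1
    if start == -1 or end == 0:
        return []
    try:
        return json.loads(raw[start:end])
    except json.JSONDecodeError:
        return []

filters = parse_filters(raw_output)
```

Defensive parsing like this matters because instruct-tuned LLMs often wrap structured output in extra text, so slicing to the outermost brackets before `json.loads` avoids brittle exact-match decoding.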
query-parser-saiga-mistral-7b-lora (55.1 MB)
  • 1 contributor
History: 7 commits
chilly-magician
[add]: description
f3f8500 verified over 2 years ago
  • .gitattributes
    1.52 kB
    initial commit over 2 years ago
  • README.md
    45.3 kB
    [add]: description over 2 years ago
  • adapter_config.json
    480 Bytes
    Upload model over 2 years ago
  • adapter_model.safetensors
    54.6 MB
    Upload model over 2 years ago
  • added_tokens.json
    51 Bytes
    Upload tokenizer over 2 years ago
  • special_tokens_map.json
    552 Bytes
    Upload tokenizer over 2 years ago
  • tokenizer.model
    493 kB
    Upload tokenizer over 2 years ago
  • tokenizer_config.json
    1.35 kB
    Upload tokenizer over 2 years ago