Text Generation
Transformers
Safetensors
English
mistral
qlora
dto
Eval Results (legacy)
text-generation-inference
Instructions to use senseable/garten2-7b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use senseable/garten2-7b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="senseable/garten2-7b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("senseable/garten2-7b")
model = AutoModelForCausalLM.from_pretrained("senseable/garten2-7b")
```
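As a quick check that the pipeline works, the sketch below generates a short continuation. The prompt and sampling settings are illustrative assumptions, not recommendations:

```python
# Generate a short continuation with the pipeline created above.
output = pipe("Once upon a time,", max_new_tokens=64, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```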
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use senseable/garten2-7b with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "senseable/garten2-7b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "senseable/garten2-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
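The same endpoint can also be called from Python. Below is a minimal sketch using the official `openai` client pointed at the local server; the `base_url` and placeholder API key assume the default `vllm serve` setup above:

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not check the key by default
)

completion = client.completions.create(
    model="senseable/garten2-7b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```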
Use Docker

```bash
docker model run hf.co/senseable/garten2-7b
```
- SGLang
How to use senseable/garten2-7b with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "senseable/garten2-7b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "senseable/garten2-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
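From Python, the same request can be issued with `requests`; this sketch assumes the server launched above is reachable on localhost:30000:

```python
# Send the completions request shown above from Python instead of curl.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "senseable/garten2-7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```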
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "senseable/garten2-7b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "senseable/garten2-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use senseable/garten2-7b with Docker Model Runner:
```bash
docker model run hf.co/senseable/garten2-7b
```
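Docker Model Runner also exposes an OpenAI-compatible API. The sketch below is assumption-heavy: the host port 12434 and the `/engines/v1` path only apply if TCP host access is enabled in your Docker settings, and the model identifier mirrors the `docker model run` name above:

```python
# Call Docker Model Runner's OpenAI-compatible endpoint from the host.
# Assumes TCP host access is enabled on the default port 12434.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed",  # Docker Model Runner does not require a key
)

resp = client.chat.completions.create(
    model="hf.co/senseable/garten2-7b",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(resp.choices[0].message.content)
```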
Smart and Cohesive
#1 opened by koesn
I've been using countless Mistral/Llama model variants for summarizing and for reasoning about and comparing legal clauses. I ended up running Solar, since it understands more. Now this model has astonished me: it's smart and cohesive. It understands the clauses and detects differences more accurately than other Mistral variants, so it has replaced Solar as my daily driver. It would be nice if you provided more quants (Q4_K_M, Q6_K, Q8_0). Thanks a lot, Senseable!