Instructions to use itspat/RemmMistral13b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use itspat/RemmMistral13b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="itspat/RemmMistral13b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("itspat/RemmMistral13b")
model = AutoModelForCausalLM.from_pretrained("itspat/RemmMistral13b")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use itspat/RemmMistral13b with vLLM:
Install vLLM from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "itspat/RemmMistral13b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "itspat/RemmMistral13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:
```shell
docker model run hf.co/itspat/RemmMistral13b
```
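The curl call above can also be issued from Python. Below is a minimal sketch using only the standard library; `build_completion_request` is a hypothetical helper, and the URL, payload fields, and default values are taken from the curl example (the actual request requires the vLLM server from the previous step to be running):

```python
import json

# Hypothetical helper mirroring the curl example above.
def build_completion_request(prompt, base_url="http://localhost:8000",
                             model="itspat/RemmMistral13b",
                             max_tokens=512, temperature=0.5):
    """Return (url, JSON body) for the OpenAI-compatible completions endpoint."""
    url = f"{base_url}/v1/completions"
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return url, json.dumps(payload)

# Sending it (needs a live server, so shown commented out):
# from urllib import request
# url, body = build_completion_request("Once upon a time,")
# req = request.Request(url, data=body.encode(),
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```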
- SGLang
How to use itspat/RemmMistral13b with SGLang:
Install SGLang from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "itspat/RemmMistral13b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "itspat/RemmMistral13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "itspat/RemmMistral13b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "itspat/RemmMistral13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use itspat/RemmMistral13b with Docker Model Runner:
```shell
docker model run hf.co/itspat/RemmMistral13b
```
Re:MythoMax-Mistral (ReMM-Mistral) is a recreation trial of the original MythoMax-L2-13B with updated models and Mistral data.
This merge uses the Gradient merging method to merge ReML-Mistral v2.2 and Huginn.
Explanation:
- ReML-Mistral-v2.2: (Chronos-Beluga v2/Hermes/Airboros 2.2.1 + LoRA)
=> The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
=> jondurbin/airoboros-l2-13b-2.2, replaced by jondurbin/airoboros-l2-13b-2.2.1
=> NousResearch/Nous-Hermes-Llama2-13b
=> Applying Undi95/llama2-to-mistral-diff at 1.0 at the end
With that:
- ReMM-Mistral: (ReML-Mistral-v2.2/Huginn)
=> ReML-Mistral-v2.2 (the merge described above)
=> The-Face-Of-Goonery/Huginn-13b-FP16
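The Gradient merging method mentioned above can be pictured as interpolating the two parents' weights with a blend ratio that varies smoothly across layers. The sketch below is illustrative only: the function name and the linearly spaced ratios are assumptions, as the card does not give the actual gradient values used.

```python
# Illustrative sketch of a gradient merge: blend two models layer by layer,
# with the mix ratio sliding from `start` (all model A) at the first layer
# toward `end` (all model B) at the last layer. Ratios here are made up.
def gradient_merge(layers_a, layers_b, start=1.0, end=0.0):
    n = len(layers_a)
    merged = []
    for i, (wa, wb) in enumerate(zip(layers_a, layers_b)):
        # Interpolation coefficient for this layer.
        t = start + (end - start) * (i / (n - 1)) if n > 1 else start
        merged.append([t * a + (1.0 - t) * b for a, b in zip(wa, wb)])
    return merged
```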
Description
This repo contains fp16 files of ReMM-Mistral, a recreation of the original MythoMax, but updated and merged with Gradient method and Mistral data.
Models used
- The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
- jondurbin/airoboros-l2-13b-2.2.1
- NousResearch/Nous-Hermes-Llama2-13b
- The-Face-Of-Goonery/Huginn-13b-FP16
- Undi95/ReML-Mistral-v2.2-13B (Private recreation trial of an updated Mythologic-L2-13B with Mistral data)
- Undi95/llama2-to-mistral-diff
Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
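In code, the template can be applied like this. This is a minimal sketch: `format_prompt` is a hypothetical helper, the template text is taken verbatim from above, and the exact newline placement is an assumption.

```python
# Alpaca prompt template from the model card; blank-line placement is assumed.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def format_prompt(instruction: str) -> str:
    """Fill the template's {prompt} slot with the user's instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)
```

The formatted string is what you would pass as the prompt to the pipeline or server calls shown earlier.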
A big thanks to KoboldAI dev Henky for giving me a machine powerful enough to do some of my work!
If you want to support me, you can here.