Instructions to use rombodawg/LosslessMegaCoder-llama2-7b-mini with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use rombodawg/LosslessMegaCoder-llama2-7b-mini with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="rombodawg/LosslessMegaCoder-llama2-7b-mini")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rombodawg/LosslessMegaCoder-llama2-7b-mini")
model = AutoModelForCausalLM.from_pretrained("rombodawg/LosslessMegaCoder-llama2-7b-mini")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use rombodawg/LosslessMegaCoder-llama2-7b-mini with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rombodawg/LosslessMegaCoder-llama2-7b-mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rombodawg/LosslessMegaCoder-llama2-7b-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/rombodawg/LosslessMegaCoder-llama2-7b-mini
```
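For callers who prefer Python over curl, the same OpenAI-compatible completions endpoint can be reached with the standard library. This is a sketch that assumes a vLLM server is already running on `localhost:8000` as shown above; the actual network call is left commented out since it requires the server to be up:

```python
import json
import urllib.request

# Build the same completion request shown in the curl example.
payload = {
    "model": "rombodawg/LosslessMegaCoder-llama2-7b-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
request = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Sending the request requires the vLLM server started above to be running:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["text"])
```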
- SGLang
How to use rombodawg/LosslessMegaCoder-llama2-7b-mini with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "rombodawg/LosslessMegaCoder-llama2-7b-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rombodawg/LosslessMegaCoder-llama2-7b-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "rombodawg/LosslessMegaCoder-llama2-7b-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rombodawg/LosslessMegaCoder-llama2-7b-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use rombodawg/LosslessMegaCoder-llama2-7b-mini with Docker Model Runner:
```shell
docker model run hf.co/rombodawg/LosslessMegaCoder-llama2-7b-mini
```
- Please note this model was not trained on the rombodawg/LosslessMegaCodeTrainingV3_MINI dataset, despite the name similarity. You can find the training data at the bottom of the model card, labeled (megacode2-min100).
This is one of the first models trained on the LosslessMegaCodeTrainingV2_1m_Evol_Uncensored dataset. The version of the dataset used for this model was filtered by removing any data with fewer than 100 tokens, but plans for much more refined filtering are in the works.
- This model was made as a collaboration between me and andreaskoepf, who is an affiliate of Open Assistant.
This model is very strong at coding, and may be one of the best coding models for its size among 7b-parameter models. Plans for bigger models are coming in the future.
Prompt template
The chatml format is used: "<|im_start|>system\n{system message}<|im_end|>\n<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"
multi-line:
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
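As an illustrative sketch (the helper below is not part of the model's own tooling), the chatml template above can be assembled in Python; the resulting string can then be passed to the `pipe(...)` call shown in the Transformers section, with generation left open at the final `<|im_start|>assistant` turn:

```python
# Build a chatml-formatted prompt for this model.
# Hypothetical helper, shown only to illustrate the template.
def build_chatml_prompt(system_message: str, user_prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```

The model's reply should be read up to the next `<|im_end|>` token.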
Gpt4all template:
- System prompt
<|im_start|>system
"Below is an instruction that describes a task. Write a response that appropriately completes the request."
- Prompt template
<|im_end|>
<|im_start|>user
"%1"<|im_end|>
<|im_start|>assistant
Oobabooga Text-Generation-Webui Template
- user:
<|im_start|>user
{User string}<|im_end|>
- bot:
<|im_start|>assistant
{Bot string}<|im_end|>
- turn_template:
<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n
- context:
<|im_start|>system
Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
Current quantizations available:
Benchmarks for the model can be found at the link below; the model there is called (andreaskoepf/llama2-7b-megacode2_min100)
Sampling report:
Training information:
The link for the full dataset is below:
Links for the filtered dataset used to make this model are below:
The original posting for this model was uploaded at the link below.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 45.33 |
| ARC (25-shot) | 53.5 |
| HellaSwag (10-shot) | 77.38 |
| MMLU (5-shot) | 49.72 |
| TruthfulQA (0-shot) | 45.77 |
| Winogrande (5-shot) | 74.03 |
| GSM8K (5-shot) | 9.55 |
| DROP (3-shot) | 7.34 |