Instructions for using jan-hq/supermario-v1 with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use jan-hq/supermario-v1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jan-hq/supermario-v1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jan-hq/supermario-v1")
model = AutoModelForCausalLM.from_pretrained("jan-hq/supermario-v1")
```
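As a quick sanity check, a minimal generation call with the pipeline above might look like the following sketch (the prompt and sampling settings are illustrative, not from the model card):

```python
# Illustrative only: generate a short completion with the pipeline created above.
output = pipe(
    "Once upon a time,",
    max_new_tokens=128,   # assumed generation budget
    do_sample=True,
    temperature=0.7,      # assumed sampling temperature
)
print(output[0]["generated_text"])
```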
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jan-hq/supermario-v1 with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jan-hq/supermario-v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jan-hq/supermario-v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
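Equivalently to the curl call above, the server can be queried with the OpenAI Python client; a minimal sketch, assuming the vLLM server is running locally on port 8000 (the api_key value is a placeholder, since vLLM does not require one by default):

```python
# Sketch: query the vLLM OpenAI-compatible server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

completion = client.completions.create(
    model="jan-hq/supermario-v1",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```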
- SGLang
How to use jan-hq/supermario-v1 with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "jan-hq/supermario-v1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jan-hq/supermario-v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
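The same completions endpoint can also be called from Python; a minimal sketch using the requests library, assuming the SGLang server above is listening on port 30000:

```python
# Sketch: call the SGLang OpenAI-compatible completions endpoint with requests.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "jan-hq/supermario-v1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```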
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "jan-hq/supermario-v1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jan-hq/supermario-v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use jan-hq/supermario-v1 with Docker Model Runner:
```sh
docker model run hf.co/jan-hq/supermario-v1
```
Model Description
This model uses the DARE_TIES merge method.
NOTE: Due to the architecture mismatch between Llama and Mistral, the Magicoder-S-CL-7B layers will be skipped.
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
merge_method: dare_ties
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
    parameters:
      density: 0.8
      weight: 0.3
  - model: Q-bert/MetaMath-Cybertron-Starling
    parameters:
      density: 0.8
      weight: 0.3
  - model: ise-uiuc/Magicoder-S-CL-7B
    parameters:
      density: 0.6
      weight: 0.2
  - model: AIDC-ai-business/Marcoroni-7B-v3
    parameters:
      density: 0.6
      weight: 0.2
parameters:
  int8_mask: true
```
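For reference, a merge with a config like the one above could be reproduced with mergekit; the sketch below uses mergekit's Python API, and the file name, output path, and option values are assumptions for illustration rather than part of this card (mergekit also ships a `mergekit-yaml` CLI for the same purpose).

```python
# Sketch only: re-run a DARE_TIES merge from a YAML config like the one above.
# Assumes the config is saved as merge-config.yml and mergekit is installed.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./supermario-v1-merged",                   # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True),  # assumed option; see mergekit docs
)
```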
Run this model
You can run this model using Jan Desktop on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 100% offline on your machine: Your conversations remain confidential and visible only to you.
- 🗂️ An Open File Format: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 OpenAI Compatible: A local server on port 1337 exposes OpenAI-compatible endpoints (see the sketch after this list).
- 🌍 Open Source & Free: We build in public; check out our GitHub.
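Once the model is running in Jan, that local server can be called like any other OpenAI-compatible endpoint. A minimal sketch, assuming the default port 1337 and that the model identifier matches what Jan shows for this model (both are assumptions to verify in the app):

```python
# Sketch: call Jan's local OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")  # placeholder key

reply = client.chat.completions.create(
    model="supermario-v1",  # assumed model id as it appears in Jan
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
)
print(reply.choices[0].message.content)
```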
About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infrastructure and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
Jan Model Merger
This is a test project for merging models.
Acknowledgement
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 29.49 |
| AI2 Reasoning Challenge (25-Shot) | 27.73 |
| HellaSwag (10-Shot) | 25.83 |
| MMLU (5-Shot) | 27.04 |
| TruthfulQA (0-shot) | 47.27 |
| Winogrande (5-shot) | 49.09 |
| GSM8k (5-shot) | 0.00 |
