Tags: Text Generation · Transformers · Safetensors · English · Chinese · bailing_moe · code · Mixture of Experts · custom_code
Instructions to use inclusionAI/Ling-Coder-lite-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use inclusionAI/Ling-Coder-lite-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="inclusionAI/Ling-Coder-lite-base", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("inclusionAI/Ling-Coder-lite-base", trust_remote_code=True, dtype="auto")
```
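Neither snippet above actually generates text. Below is a minimal end-to-end sketch building on them; the prompt, generation settings, and `device_map="auto"` (which assumes `accelerate` is installed) are illustrative assumptions, not values from the model card:

```python
# Minimal generation sketch (illustrative settings; adjust to your hardware).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-Coder-lite-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the bailing_moe architecture ships custom modeling code
    dtype="auto",
    device_map="auto",  # assumption: accelerate is installed for automatic device placement
)

prompt = "def quicksort(arr):"  # illustrative prompt for a code base model
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```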
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use inclusionAI/Ling-Coder-lite-base with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "inclusionAI/Ling-Coder-lite-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "inclusionAI/Ling-Coder-lite-base",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
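The server speaks the OpenAI-compatible API, so you can also call it from Python. A minimal sketch using the official `openai` client, assuming the server above is running on its default port (if serving fails because of the custom `bailing_moe` architecture, vLLM's `--trust-remote-code` flag may be needed):

```python
# Query the running vLLM server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
completion = client.completions.create(
    model="inclusionAI/Ling-Coder-lite-base",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```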
Use Docker:

```shell
docker model run hf.co/inclusionAI/Ling-Coder-lite-base
```
- SGLang
How to use inclusionAI/Ling-Coder-lite-base with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "inclusionAI/Ling-Coder-lite-base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "inclusionAI/Ling-Coder-lite-base",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
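SGLang can also run the model offline, without a standing server. A sketch of its `Engine` API under stated assumptions: the prompts and sampling parameters are illustrative, and `trust_remote_code=True` is assumed to be needed for the custom `bailing_moe` architecture:

```python
# Offline batch generation with SGLang's Engine API (a sketch, not the card's official recipe).
import sglang as sgl

llm = sgl.Engine(
    model_path="inclusionAI/Ling-Coder-lite-base",
    trust_remote_code=True,  # assumption: required for the custom bailing_moe architecture
)
prompts = ["Once upon a time,", "def binary_search(arr, target):"]
sampling_params = {"temperature": 0.5, "max_new_tokens": 128}
outputs = llm.generate(prompts, sampling_params)
for prompt, out in zip(prompts, outputs):
    print(prompt, "->", out["text"])
llm.shutdown()  # release GPU resources when done
```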
Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "inclusionAI/Ling-Coder-lite-base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "inclusionAI/Ling-Coder-lite-base",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use inclusionAI/Ling-Coder-lite-base with Docker Model Runner:
```shell
docker model run hf.co/inclusionAI/Ling-Coder-lite-base
```
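Docker Model Runner also exposes an OpenAI-compatible endpoint. A heavily hedged sketch: the base URL below assumes host-side TCP access is enabled on Model Runner's default port 12434, and the model identifier is assumed to mirror the reference passed to `docker model run`; both may differ in your setup:

```python
# Call Docker Model Runner's OpenAI-compatible endpoint (assumed default host port).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumption: TCP access enabled on 12434
    api_key="not-needed",  # Model Runner does not require a key
)
completion = client.completions.create(
    model="hf.co/inclusionAI/Ling-Coder-lite-base",  # assumption: same reference as docker model run
    prompt="Once upon a time,",
    max_tokens=512,
)
print(completion.choices[0].text)
```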
Add link to paper and GitHub repository
This PR adds a link to the paper in the introduction and a link to the GitHub repository to the model card, so people can easily navigate to them for more information on the model and code.
README.md CHANGED

````diff
@@ -1,16 +1,17 @@
 ---
-license: mit
 datasets:
 - inclusionAI/Ling-Coder-SyntheticQA
 language:
 - en
 - zh
-pipeline_tag: text-generation
 library_name: transformers
+license: mit
+pipeline_tag: text-generation
 tags:
 - code
 - moe
 ---
+
 # Ling-Coder-lite-base
 
 <p align="center">
@@ -25,7 +26,7 @@ tags:
 
 ## Introduction
 
-Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8 billion parameters with 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite further pre-training from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite, while preserving its strong performance in general language tasks.
+Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, which has 16.8 billion parameters with 2.75 billion activated parameters. Ling-Coder-Lite performs impressively on coding tasks compared to existing models in the industry. Specifically, Ling-Coder-Lite further pre-training from an intermediate checkpoint of Ling-Lite, incorporating an additional 3 trillion tokens. This extended pre-training significantly boosts the coding abilities of Ling-Lite, while preserving its strong performance in general language tasks. This model is described in the paper [Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM](https://huggingface.co/papers/2503.17793).
 
 ## Model Downloads
 
@@ -105,4 +106,4 @@ This code repository is licensed under [the MIT License](https://huggingface.co/
   primaryClass={cs.LG},
   url={https://arxiv.org/abs/2503.17793},
 }
-```
+```
````