Tags: Text Generation · Transformers · Safetensors · qwen3 · llama-factory · full · Generated from Trainer · conversational · text-generation-inference
Instructions for using DCAgent/a1-magicoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use DCAgent/a1-magicoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DCAgent/a1-magicoder")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DCAgent/a1-magicoder")
model = AutoModelForCausalLM.from_pretrained("DCAgent/a1-magicoder")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DCAgent/a1-magicoder with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DCAgent/a1-magicoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DCAgent/a1-magicoder",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker

```shell
docker model run hf.co/DCAgent/a1-magicoder
```
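The same OpenAI-compatible request can be built from Python with only the standard library. This is a minimal sketch assuming the vLLM server above is running on localhost:8000; building the request runs offline, and only the commented-out `urlopen` call needs the server.

```python
import json
from urllib.request import Request, urlopen

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:8000") -> Request:
    """Build (but do not send) an OpenAI-compatible chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("DCAgent/a1-magicoder", "What is the capital of France?")

# To actually send it (requires the server to be running):
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same helper works against the SGLang server below by passing `base_url="http://localhost:30000"`.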
- SGLang
How to use DCAgent/a1-magicoder with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "DCAgent/a1-magicoder" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DCAgent/a1-magicoder",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "DCAgent/a1-magicoder" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DCAgent/a1-magicoder",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
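Both vLLM and SGLang return the standard chat-completions JSON shape, so one small helper can extract the assistant's reply from either. The response body below is a hypothetical example for illustration only; the actual content depends on the model.

```python
def extract_reply(response: dict) -> str:
    """Return the assistant message text from an OpenAI-compatible
    chat-completions response body."""
    return response["choices"][0]["message"]["content"]

# A hypothetical response body, for illustration only:
example = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "DCAgent/a1-magicoder",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
}

print(extract_reply(example))  # The capital of France is Paris.
```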
How to use DCAgent/a1-magicoder with Docker Model Runner:
```shell
docker model run hf.co/DCAgent/a1-magicoder
```
Upload folder using huggingface_hub
- all_results.json +9 -9
- model-00001-of-00004.safetensors +1 -1
- model-00002-of-00004.safetensors +1 -1
- model-00003-of-00004.safetensors +1 -1
- model-00004-of-00004.safetensors +1 -1
- train_results.json +9 -9
- trainer_log.jsonl +0 -0
- training_loss.png +0 -0
all_results.json (CHANGED; old-side values are truncated in the rendered diff)

```diff
@@ -1,16 +1,16 @@
 {
-    "achieved_tflops_per_gpu": 0.
-    "achieved_tflops_per_gpu_theoretical":
+    "achieved_tflops_per_gpu": 0.002083037344637663,
+    "achieved_tflops_per_gpu_theoretical": 640.3139623735788,
     "epoch": 7.0,
     "loss_nan_ranks": 0,
-    "loss_rank_avg": 0.
-    "mfu_percent": 0.
-    "mfu_percent_theoretical": 45.
+    "loss_rank_avg": 0.2396668791770935,
+    "mfu_percent": 0.000147211119762379,
+    "mfu_percent_theoretical": 45.25187013240839,
     "total_flos": 688651439570944.0,
-    "train_loss": 0.
-    "train_runtime":
-    "train_samples_per_second": 3.
-    "train_steps_per_second": 0.
+    "train_loss": 0.26112440031201767,
+    "train_runtime": 20662.4788,
+    "train_samples_per_second": 3.025,
+    "train_steps_per_second": 0.189,
     "valid_targets_mean": 4082.4,
     "valid_targets_min": 866
 }
```
model-00001-of-00004.safetensors (CHANGED; old hash truncated in the rendered diff)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:47f3aa434710f7fda3f3fa23057b2e062a6aa19ca2d59a9b13e40fdd97429883
 size 4902257696
```
model-00002-of-00004.safetensors (CHANGED; old hash truncated in the rendered diff)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b1a0fd930bff3c69df9149615c3cf4e670eed5c7a0c04e8c233a8bab64be4fac
 size 4915960368
```
model-00003-of-00004.safetensors (CHANGED; old hash truncated in the rendered diff)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:dca3424fb65168ebb5b237c408c5676171d377664a4284a3e8789d97f07f9644
 size 4983068496
```
model-00004-of-00004.safetensors (CHANGED; old hash truncated in the rendered diff)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4ee02a580f9fe64b849fa281316175c784a52a8823b4120b1f3a7330abd09598
 size 1580230264
```
train_results.json (CHANGED; old-side values are truncated in the rendered diff)

```diff
@@ -1,16 +1,16 @@
 {
-    "achieved_tflops_per_gpu": 0.
-    "achieved_tflops_per_gpu_theoretical":
+    "achieved_tflops_per_gpu": 0.002083037344637663,
+    "achieved_tflops_per_gpu_theoretical": 640.3139623735788,
     "epoch": 7.0,
     "loss_nan_ranks": 0,
-    "loss_rank_avg": 0.
-    "mfu_percent": 0.
-    "mfu_percent_theoretical": 45.
+    "loss_rank_avg": 0.2396668791770935,
+    "mfu_percent": 0.000147211119762379,
+    "mfu_percent_theoretical": 45.25187013240839,
     "total_flos": 688651439570944.0,
-    "train_loss": 0.
-    "train_runtime":
-    "train_samples_per_second": 3.
-    "train_steps_per_second": 0.
+    "train_loss": 0.26112440031201767,
+    "train_runtime": 20662.4788,
+    "train_samples_per_second": 3.025,
+    "train_steps_per_second": 0.189,
     "valid_targets_mean": 4082.4,
     "valid_targets_min": 866
 }
```
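The throughput metrics above are internally consistent: mfu_percent is achieved TFLOPS divided by peak hardware TFLOPS (times 100), and both the achieved and theoretical rows imply the same peak, roughly 1415 TFLOPS per GPU. That peak figure is not stated in the logs; it is recovered here only to show the two ratios agree.

```python
# Values copied from train_results.json above
achieved = 0.002083037344637663      # achieved_tflops_per_gpu
mfu = 0.000147211119762379           # mfu_percent
theoretical = 640.3139623735788      # achieved_tflops_per_gpu_theoretical
mfu_theoretical = 45.25187013240839  # mfu_percent_theoretical

# mfu_percent = tflops / peak_tflops * 100  =>  peak_tflops = tflops * 100 / mfu_percent
peak_from_achieved = achieved * 100 / mfu
peak_from_theoretical = theoretical * 100 / mfu_theoretical

print(peak_from_achieved, peak_from_theoretical)  # both ≈ 1415 TFLOPS
```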
trainer_log.jsonl (CHANGED)

The diff for this file is too large to render. See raw diff.
training_loss.png (CHANGED)