Text Generation
Transformers
Safetensors
Chinese
English
qwen3
qwen
scoring
grading
evaluation
llm-judge
conversational
text-generation-inference
Instructions to use blue-tundra-42/code_and_model with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use blue-tundra-42/code_and_model with Transformers:
# Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="blue-tundra-42/code_and_model") messages = [ {"role": "user", "content": "Who are you?"}, ] pipe(messages)# Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("blue-tundra-42/code_and_model") model = AutoModelForCausalLM.from_pretrained("blue-tundra-42/code_and_model") messages = [ {"role": "user", "content": "Who are you?"}, ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=40) print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])) - Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use blue-tundra-42/code_and_model with vLLM:
Install from pip and serve model
# Install vLLM from pip: pip install vllm # Start the vLLM server: vllm serve "blue-tundra-42/code_and_model" # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:8000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "blue-tundra-42/code_and_model", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker
docker model run hf.co/blue-tundra-42/code_and_model
- SGLang
How to use blue-tundra-42/code_and_model with SGLang:
Install from pip and serve model
# Install SGLang from pip: pip install sglang # Start the SGLang server: python3 -m sglang.launch_server \ --model-path "blue-tundra-42/code_and_model" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "blue-tundra-42/code_and_model", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker images
docker run --gpus all \ --shm-size 32g \ -p 30000:30000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=<secret>" \ --ipc=host \ lmsysorg/sglang:latest \ python3 -m sglang.launch_server \ --model-path "blue-tundra-42/code_and_model" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "blue-tundra-42/code_and_model", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' - Docker Model Runner
How to use blue-tundra-42/code_and_model with Docker Model Runner:
docker model run hf.co/blue-tundra-42/code_and_model
File size: 3,769 Bytes
f1f682e | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | import re
from dataclasses import dataclass, field
from typing import List, Dict, Any, Optional
def remove_thought_block(text: str) -> str:
pattern = r"^(<think>.*?</think>|.*?)"
match = re.match(pattern, text, flags=re.DOTALL)
if match:
end_of_match = match.end()
return text[end_of_match:].lstrip()
return text
def process_score_prompt(question, reference, response):
promt_template = """请先通读问题信息,然后基于参考答案对模型回复的结果进行正确性打分。每道题可能包含多个小问,每个小问都已给出了相应的参考答案和分值,请逐小问校验模型回复是否正确,正确得对应分值,错误或漏答得0分,累计计分,有如下要求。
---
### 要求1:信息梳理
- 梳理出如下信息
- 问题内容
- 参考答案(可适度完善表达,但不改变核心内容)
- 模型回复(需要将模型回复中的指代关系与参考答案对齐)
- 分值
### 要求2:判断题型
- 明确该小问属于以下哪种题型之一,并基于该类型的打分标准进行打分,需要给出详细的比对过程。
- **数值型**,要求模型回复与标准答案的数值完全相同,不允许有误差。例,`问题:北京奥运会是哪一年?参考答案:2008,模型回复:2004,打分结果:错误。`
- **枚举型**,要求模型回复列举出参考答案的全部对象,缺一不可、错一不可,允许同义词等语义相近的表达,题中有顺序要求则必须按顺序枚举。例,`图中出现了哪些动物?参考答案:大熊猫、河马、长颈鹿,模型回复:河马、小熊猫、长颈鹿,打分结果:错误。 `注:“/”表示“或”,如,XXA/XXB,表示回答出任意一项即可。
- **选择题**,要求模型回复与参考答案相同的选项或选项内容。例,`问题:李白是哪个朝代的诗人?A. 唐朝 B. 宋朝 C. 元朝,模型回复:李白是唐朝诗人,打分结果:正确。`
- **判断题**,要求模型回复与参考答案的判断一致。例,`问题:图中鼠标是否放在了笔记本电脑左侧?参考答案:是,模型回复:图中鼠标在笔记本电脑的左侧。打分结果:正确。`
- **简答题**,要求模型回复包括与参考答案语义一致的短语或表达,允许表达方式不同。例,`问题:视频中最后放入锅中的食材是什么?参考答案:洋葱,模型回复:胡萝卜。打分结果:错误。`
- **论述题**,要求模型回复包含参考答案的核心观点。例,`问题:请简要论述为什么要保护生物多样性。参考答案:维持生态平衡,模型回复:保护生物多样性能够让生态系统保持稳定,促进人类社会的可持续发展。打分结果:正确。`
### 要求3:打分标准
- **完全正确**:得满分。
- **错误或漏答**:得0分。
- 如模型回复与参考答案大意相同但细节略有差别,且非核心内容,视为正确,具体参考参考答案的详细要求。
- 若模型回复未直接给出答案,需主动归纳总结结论,只关注结论是否一致。
- 每小问独立打分,前序错误不影响后续小问的结果。
### 要求4:输出格式
- 逐小问列出得分说明。
- 所有小问得分相加,在<score></score>中给出总分,例如:<score>5</score>
---
## 问题信息
{{question}}
## 参考答案
{{reference}}
## 模型回复
{{response}}
## 逐小问打分"""
prompt = promt_template.replace("{{question}}", remove_thought_block(question.strip()))
prompt = prompt.replace("{{reference}}", reference)
prompt = prompt.replace("{{response}}", response)
return prompt |