```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Vedika35/Vedika_coder")
model = AutoModelForCausalLM.from_pretrained("Vedika35/Vedika_coder")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
# Vedika Coder

## Introduction
Vedika Coder is the latest series of code-specific Vedika Coder language models (formerly known as Code Vedika). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in code generation, code reasoning, and code fixing. Building on the strong Vedika Coder, we scaled the training data up to 5.5 trillion tokens, including source code, text-code grounding data, synthetic data, and more. Vedika Coder has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as code agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
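As a rough guide to which of the six sizes fits a given GPU, the fp16/bf16 weight footprint can be estimated as parameters times 2 bytes. This is a back-of-the-envelope sketch only; actual memory use is higher once activations and the KV cache are included:

```python
# Rough fp16/bf16 weight footprint for each model size in the series.
# Rule of thumb: each parameter occupies 2 bytes in fp16/bf16.
sizes_billion = [0.5, 1.5, 3, 7, 14, 32]

for b in sizes_billion:
    gib = b * 1e9 * 2 / 2**30  # bytes -> GiB
    print(f"{b:>4}B params ~ {gib:6.1f} GiB of fp16 weights")
```

For example, the 7B model needs roughly 13 GiB just for its weights, while the 32B model needs close to 60 GiB, so the smaller sizes are the practical choice on a single consumer GPU.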
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Vedika35/Vedika_coder")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
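For multi-turn chat, each generated reply is appended to `messages` as an `assistant` turn before the next `user` turn, so the model sees the full conversation. A minimal sketch of the bookkeeping, with the model call stubbed out by a placeholder reply (in recent transformers versions the real reply is typically the last message in `pipe(messages)[0]["generated_text"]`, but check your version's output format):

```python
# Start the conversation with a single user turn.
messages = [{"role": "user", "content": "Write a Python add function."}]

# With the real model:
# reply = pipe(messages)[0]["generated_text"][-1]
# Placeholder standing in for the model's reply:
reply = {"role": "assistant", "content": "def add(a, b):\n    return a + b"}

# Append the reply, then the follow-up question.
messages.append(reply)
messages.append({"role": "user", "content": "Now add type hints."})
print(len(messages))  # conversation now has 3 turns
```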