Training dataset: Telugu-LLM-Labs/marathi_alpaca_yahma_cleaned_filtered
How to use Echelon-AI/marathi-llama3 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Echelon-AI/marathi-llama3")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Echelon-AI/marathi-llama3")
model = AutoModelForCausalLM.from_pretrained("Echelon-AI/marathi-llama3")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use Echelon-AI/marathi-llama3 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Echelon-AI/marathi-llama3"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/marathi-llama3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
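Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python with the openai client instead of curl. A minimal sketch, assuming the server started above is listening on localhost:8000 and no API key has been configured:
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible endpoint).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Echelon-AI/marathi-llama3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)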
How to use Echelon-AI/marathi-llama3 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Echelon-AI/marathi-llama3" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/marathi-llama3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Or launch the SGLang server in Docker instead:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Echelon-AI/marathi-llama3" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/marathi-llama3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use Echelon-AI/marathi-llama3 with Docker Model Runner:
docker model run hf.co/Echelon-AI/marathi-llama3
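Docker Model Runner also exposes an OpenAI-compatible HTTP API. The sketch below assumes host TCP access is enabled on the default port 12434; the exact host, port, and path depend on your Docker Model Runner configuration, so check its documentation before relying on them:
# Call the Model Runner endpoint (assumed host/port/path, OpenAI-compatible API):
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "hf.co/Echelon-AI/marathi-llama3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'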
Meta Llama 3 8B, fine-tuned on the cleaned Marathi Alpaca dataset for 1.5 epochs on a single A100 40GB.
Marathi-Llama3 is a fine-tuned version of the Llama3 model, tailored specifically for the Marathi language. This model leverages the power of the Llama3 architecture to provide accurate and nuanced responses in Marathi, opening up advanced AI capabilities to Marathi-speaking communities.
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Echelon-AI/marathi-llama3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Generate text
input_text = "कृपया मला मराठी भाषेत एक गोष्ट सांगा."  # "Please tell me a story in Marathi."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
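Decoding above defaults to greedy search; the usual transformers sampling arguments can be passed to generate as well. A minimal sketch continuing from the snippet above (the parameter values are illustrative, not tuned recommendations from the model authors):
# Sampled generation instead of greedy decoding (illustrative values).
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))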
A GGUF version of the model is also available.
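The GGUF build can be run with llama.cpp-based tooling such as llama-cpp-python. A minimal sketch, assuming a quantized file has already been downloaded locally (the filename below is hypothetical):
from llama_cpp import Llama

# Hypothetical path to a locally downloaded GGUF quantization.
llm = Llama(model_path="./marathi-llama3.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "कृपया मला मराठी भाषेत एक गोष्ट सांगा."}]  # "Please tell me a story in Marathi."
)
print(out["choices"][0]["message"]["content"])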