Instructions to use Mercury7353/PyLlama3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Mercury7353/PyLlama3 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Mercury7353/PyLlama3")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mercury7353/PyLlama3")
model = AutoModelForCausalLM.from_pretrained("Mercury7353/PyLlama3")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
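Both snippets answer the chat messages; with the pipeline form, the model's reply can be pulled out of the returned conversation. A minimal sketch, assuming the `generated_text` structure that recent Transformers chat pipelines return:

```python
# pipe(messages) returns a list whose "generated_text" holds the full
# conversation; the last message is the assistant's reply.
result = pipe(messages, max_new_tokens=40)
print(result[0]["generated_text"][-1]["content"])
```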
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Mercury7353/PyLlama3 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Mercury7353/PyLlama3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mercury7353/PyLlama3",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
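Because the server speaks the OpenAI-compatible API, it can also be called from Python. A minimal sketch using the `openai` client, assuming the default host and port from `vllm serve` above (the `api_key` value is a placeholder; a local server does not check it):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Mercury7353/PyLlama3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```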
Use Docker

```shell
docker model run hf.co/Mercury7353/PyLlama3
```
- SGLang
How to use Mercury7353/PyLlama3 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Mercury7353/PyLlama3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mercury7353/PyLlama3",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Mercury7353/PyLlama3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mercury7353/PyLlama3",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
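The SGLang server exposes the same OpenAI-compatible API, so the Python client sketch from the vLLM section works here too; only the port changes:

```python
from openai import OpenAI

# Same client pattern as the vLLM sketch above, retargeted at the SGLang port
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Mercury7353/PyLlama3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```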
- Docker Model Runner

How to use Mercury7353/PyLlama3 with Docker Model Runner:
```shell
docker model run hf.co/Mercury7353/PyLlama3
```
PyBench: Evaluate LLM Agent on Real World Coding Tasks
Paper • Data (PyInstruct) • Model (PyLlama3) • Code
This is the PyLlama3 model, fine-tuned for PyBench.
PyBench is a comprehensive benchmark for evaluating LLMs on real-world coding tasks, including chart analysis, text analysis, image/audio editing, complex math, and software/website development.
We collect files from Kaggle, arXiv, and other sources and automatically generate queries according to the type and content of each file. For evaluation, we design unit tests for each task.
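For instance, a task's correctness check can be written as an ordinary unit test over the files the agent produces. A hypothetical sketch (the task, file names, and expectations below are illustrative, not taken from PyBench itself):

```python
import pandas as pd

def test_cleaned_csv():
    # Hypothetical task: the agent was asked to drop rows with missing
    # values from data.csv and save the result as cleaned.csv.
    df = pd.read_csv("cleaned.csv")
    assert len(df) > 0, "cleaned file should not be empty"
    assert df.isna().sum().sum() == 0, "cleaned file still contains missing values"
```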
Why PyBench?
The LLM Agent, equipped with a code interpreter, is capable of automatically solving real-world coding tasks, such as data analysis and image processing.
However, existing benchmarks primarily focus either on simplistic tasks, such as completing a few lines of code, or on extremely complex, repository-level tasks; neither is representative of the variety of daily coding tasks.
To address this gap, we introduce PyBench, a benchmark that encompasses 5 main categories of real-world tasks, covering more than 10 types of files.
PyInstruct
To enhance the model's ability on PyBench, we generate a homologous dataset: PyInstruct. PyInstruct contains multi-turn interactions between the model and files, exercising the model's capabilities in coding, debugging, and multi-turn complex task solving. Compared to other datasets that focus on multi-turn coding ability, PyInstruct has more turns and tokens per trajectory.
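Such a trajectory is typically stored as an alternating list of messages between the model and its interpreter. A hypothetical sketch of one record (field names and content are illustrative, not the actual PyInstruct schema):

```python
trajectory = {
    "file": "sales.csv",  # file the query was generated from
    "messages": [
        {"role": "user", "content": "Plot monthly revenue from sales.csv."},
        {"role": "assistant", "content": "<reasoning + code>"},
        {"role": "tool", "content": "Traceback: KeyError: 'month'"},  # interpreter feedback
        {"role": "assistant", "content": "<corrected code>"},  # model debugs and retries
    ],
}
```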
[Table: dataset statistics; token statistics computed with the Llama-2 tokenizer.]
PyLlama
We trained Llama3-8B-base on PyInstruct, CodeActInstruct, CodeFeedback, and the Jupyter Notebook Corpus to get PyLlama3, achieving outstanding performance on PyBench.
Model Evaluation with PyBench!
[Figure: demonstration of the chat interface.]
- Detailed in GitHub
Leaderboard
Citation
```bibtex
@misc{zhang2024pybenchevaluatingllmagent,
  title={PyBench: Evaluating LLM Agent on various real-world coding tasks},
  author={Yaolun Zhang and Yinxu Pan and Yudong Wang and Jie Cai and Zhi Zheng and Guoyang Zeng and Zhiyuan Liu},
  year={2024},
  eprint={2407.16732},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2407.16732},
}
```