Instructions to use vitormesaque/i-llama with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use vitormesaque/i-llama with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="vitormesaque/i-llama")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vitormesaque/i-llama")
model = AutoModelForCausalLM.from_pretrained("vitormesaque/i-llama")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use vitormesaque/i-llama with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "vitormesaque/i-llama"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vitormesaque/i-llama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/vitormesaque/i-llama
```
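Because the vLLM server exposes an OpenAI-compatible API, it can also be queried from Python. A minimal stdlib-only sketch, assuming the server started by `vllm serve` above is running on localhost:8000 (the `build_request` and `complete` helpers are illustrative names, not part of vLLM's API):

```python
# Query the OpenAI-compatible vLLM completions endpoint using only the stdlib.
# Assumes the `vllm serve "vitormesaque/i-llama"` server above is running.
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, max_tokens: int = 512, temperature: float = 0.5) -> dict:
    """Build the JSON payload for the OpenAI-compatible completions endpoint."""
    return {
        "model": "vitormesaque/i-llama",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str) -> str:
    """POST the payload to the server and return the first completion's text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        VLLM_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

if __name__ == "__main__":
    print(complete("Once upon a time,"))
```

The same payload shape works against the SGLang server below; only the port (30000) changes.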
- SGLang
How to use vitormesaque/i-llama with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "vitormesaque/i-llama" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vitormesaque/i-llama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "vitormesaque/i-llama" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "vitormesaque/i-llama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Unsloth Studio
How to use vitormesaque/i-llama with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for vitormesaque/i-llama to start chatting.
```
Install Unsloth Studio (Windows)
```shell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for vitormesaque/i-llama to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for vitormesaque/i-llama to start chatting.
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="vitormesaque/i-llama",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use vitormesaque/i-llama with Docker Model Runner:
```shell
docker model run hf.co/vitormesaque/i-llama
```
iLLAMA: LLM for App Issue Detection and Prioritization Obtained by Fine-Tuning LLAMA 3
This repository contains a fine-tuned version of LLAMA 3 trained with the Unsloth framework on the vitormesaque/irisk dataset. The model is designed to detect and prioritize issues reported in app user reviews.
Model Details
- Developed by: Vitor Mesaque
- Model type: App Issue Detection Model
- Language: English
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
- Datasets: vitormesaque/irisk
The vitormesaque/irisk dataset was obtained through the knowledge base of the MApp-IDEA research project.
Model Usage
How to Get Started with the Model
Use the code below to get started with the model:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vitormesaque/i-llama")
model = AutoModelForCausalLM.from_pretrained("vitormesaque/i-llama")
```
Usage
Note: this snippet assumes `model` and `tokenizer` were loaded with Unsloth's `FastLanguageModel.from_pretrained`, and that `irisk_prompt` matches the template used during fine-tuning; the Alpaca-style template below is an assumption.

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Assumed Alpaca-style prompt template; replace with the exact template
# used during fine-tuning if it differs.
irisk_prompt = "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer(
    [
        irisk_prompt.format(
            "Extract issues from the user review in JSON format. For each issue, provide label, functionality, severity (1-5), likelihood (1-5), category (Bug, User Experience, Performance, Security, Compatibility, Functionality, UI, Connectivity, Localization, Accessibility, Data Handling, Privacy, Notifications, Account Management, Payment, Content Quality, Support, Updates, Syncing, Customization), and the sentence.",  # instruction
            "I used to love this app, but now it's become frustrating as hell. We can't see lyrics, we can't CHOOSE WHAT SONG WE WANT TO LISTEN TO, we can't skip a song more than a few times, there are ads after every two songs, and all in all it's a horrible overrated app. If I could give this 0 stars, I would.",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
```
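The JSON the model is prompted to emit can be turned into a ranked issue list, which is the "prioritization" half of the model's purpose. A minimal sketch, assuming the output parses as a JSON array of issue objects with the fields requested in the instruction above, and using the common heuristic risk = severity × likelihood (the sample below is illustrative, not actual model output):

```python
import json

def prioritize_issues(model_output: str) -> list[dict]:
    """Parse a JSON array of issues and rank by risk = severity * likelihood."""
    issues = json.loads(model_output)
    for issue in issues:
        issue["risk"] = issue["severity"] * issue["likelihood"]
    return sorted(issues, key=lambda i: i["risk"], reverse=True)

# Illustrative sample in the requested schema (not real model output):
sample = json.dumps([
    {"label": "ads after every two songs", "functionality": "playback",
     "severity": 3, "likelihood": 5, "category": "User Experience",
     "sentence": "there are ads after every two songs"},
    {"label": "cannot choose song", "functionality": "playback",
     "severity": 4, "likelihood": 5, "category": "Functionality",
     "sentence": "we can't CHOOSE WHAT SONG WE WANT TO LISTEN TO"},
])

ranked = prioritize_issues(sample)
print([i["label"] for i in ranked])
# → ['cannot choose song', 'ads after every two songs']
```

In practice the model's raw output should be validated before parsing (e.g. checking that severity and likelihood fall in 1-5 and the category is one of the twenty listed), since generation is not guaranteed to produce well-formed JSON.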
Evaluation
The model was evaluated using a separate portion of the vitormesaque/irisk dataset.
Bias, Risks, and Limitations
While the model is effective in detecting issues, it may exhibit biases present in the training data. Users should be aware of these potential biases and consider them when interpreting results.
Recommendations
Users should conduct additional evaluations in the specific context of use to ensure reliability and fairness.
Citation
If you use this model in your research, please cite it as follows:
BibTeX:
@misc{vitormesaque2024llama3,
author = {Vitor Mesaque Alves de Lima},
title = {iLLAMA: LLM for App Issue Detection and Prioritization Obtained by Fine-Tuning LLAMA 3},
year = {2024},
url = {https://huggingface.co/vitormesaque}
}
APA:
Lima, V. M. A. de. (2024). iLLAMA: LLM for app issue detection and prioritization obtained by fine-tuning LLAMA 3. Hugging Face. https://huggingface.co/vitormesaque
License
This model is licensed under the Apache 2.0 License.
Contact
For questions or comments, please contact Vitor Mesaque.
Model tree for vitormesaque/i-llama
Base model
meta-llama/Meta-Llama-3-8B