---
license: mit
---
# mattxt-phi3

mattxt-phi3 is a supervised fine-tuned (SFT) version of microsoft/Phi-3-mini-4k-instruct, trained on a custom dataset.

This model was made with [Phinetune]()
## Process

- Learning Rate: 0.0001
- Maximum Sequence Length: 2048
- Dataset: Striker-7/mattxt
- Split: train

## 💻 Usage
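As a rough illustration of the setup above, the sketch below shows how a (prompt, response) pair from the training split might be rendered into Phi-3's chat format before SFT. This is a hypothetical helper, not the actual training code; the `<|user|>`/`<|assistant|>`/`<|end|>` tokens follow the chat template of the microsoft/Phi-3-mini-4k-instruct base model, and the field names are assumptions.

```python
# Maximum sequence length used for fine-tuning (see Process above)
MAX_SEQ_LEN = 2048

def to_phi3_chat(prompt: str, response: str) -> str:
    """Render one training example in the Phi-3 chat format.

    Hypothetical helper: the real preprocessing is handled by the
    tokenizer's chat template, but the resulting string looks like this.
    """
    return f"<|user|>\n{prompt}<|end|>\n<|assistant|>\n{response}<|end|>"

example = to_phi3_chat("What is SFT?", "Supervised fine-tuning.")
print(example)
```

In practice, sequences longer than `MAX_SEQ_LEN` tokens would be truncated or dropped during preprocessing.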
```python
# Install dependencies first: pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the tokenizer and model from the Hub
model_id = "Striker-7/mattxt-phi3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a text-generation pipeline
# (named `generator` to avoid shadowing the imported `pipeline` function)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```