---
license: apache-2.0
---

# omega-coder-phi-3-mini-1K-5-e

omega-coder-phi-3-mini-1K-5-e is an SFT fine-tuned version of microsoft/Phi-3-mini-4k-instruct using a custom training dataset. This model was made with [Phinetune]()

## Process

- Learning Rate: 2e-05
- Maximum Sequence Length: 2048
- Dataset: deepmind/code_contests
- Split: train[:5%]

An illustrative reconstruction of this training setup is sketched below the Usage section.

## 💻 Usage

```python
!pip install -qU transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "samuelswandi/omega-coder-phi-3-mini-1K-5-e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a text-generation pipeline from the fine-tuned model
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
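
## Training Sketch

The exact Phinetune pipeline behind this model is not published, but a roughly equivalent run can be reconstructed from the Process section using TRL's `SFTTrainer`. The sketch below is an assumption-laden illustration, not the actual training code: the `format_example` helper is hypothetical (Phinetune's preprocessing is undocumented), and `SFTConfig` field names vary across `trl` versions.

```python
!pip install -qU trl datasets

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and split from the Process section above
dataset = load_dataset("deepmind/code_contests", split="train[:5%]")

def format_example(example):
    # Hypothetical preprocessing: pair each problem statement with its
    # first reference solution. The real Phinetune formatting is unknown.
    solutions = example["solutions"]["solution"]
    return {"text": example["description"] + "\n\n" + (solutions[0] if solutions else "")}

dataset = dataset.map(format_example)

config = SFTConfig(
    output_dir="omega-coder-phi-3-mini-1K-5-e",
    learning_rate=2e-5,           # Learning Rate: 2e-05
    max_seq_length=2048,          # Maximum Sequence Length: 2048
    dataset_text_field="text",
)

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",  # base model being fine-tuned
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

In practice, the dataset formatting, packing, and optimizer settings would need to match whatever Phinetune actually does; only the hyperparameters shown in the Process section are taken from this card.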