Stable Pigeon Harmony - Using Phi-3 (Much Better!)

Copy ALL of this into a NEW Colab notebook

Step 1: Install libraries

!pip install transformers torch accelerate -q
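If the install hiccups in Colab (it happens!), a quick sanity check like the one below confirms everything imported and that you actually have a GPU. This is an optional extra, not part of the original recipe:

import transformers
import torch

# Optional sanity check — re-run the pip cell above if either import fails
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("GPU available:", torch.cuda.is_available())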

Step 2: Import

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

print("🐦 Chargement de Pigeon Harmony STABLE...")

Step 3: Load Phi-3 Mini (smarter, more stable!)

model_name = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
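Phi-3 Mini is about 3.8B parameters, so in float16 it wants roughly 7-8 GB of VRAM. If loading behaves oddly on a small Colab GPU, this optional check (get_memory_footprint is a standard Transformers model method) shows what you actually got:

# Rough footprint check — Phi-3 Mini in float16 should land around 7-8 GB
print(f"Device: {model.device}")
print(f"Model size: {model.get_memory_footprint() / 1e9:.1f} GB")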

print("✅ Pigeon Harmony STABLE est prêt!")

Step 4: Better chat function with proper formatting

def chat(message):
    # Phi-3's chat format: <|system|> / <|user|> / <|assistant|> turns, each closed by <|end|>
    prompt = f"""<|system|>
Tu es Pigeon Harmony, un assistant IA amical qui parle français québécois authentique. Tu utilises des expressions québécoises comme "tabarnak", "c'est le fun", "lâche pas la patate", etc. Tu es chaleureux, drôle, et toujours prêt à aider.<|end|>
<|user|>
{message}<|end|>
<|assistant|>
"""

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        repetition_penalty=1.15,  # Prevents loops!
        no_repeat_ngram_size=3,   # No repeating phrases!
        pad_token_id=tokenizer.eos_token_id
    )

    # Decode only the newly generated tokens. skip_special_tokens strips the
    # <|assistant|> marker from the decoded text, so splitting on that marker
    # never matches — slicing past the prompt length is the reliable way
    # to isolate the reply.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    response = tokenizer.decode(new_tokens, skip_special_tokens=True)

    return response.strip()
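Hand-writing the <|system|>/<|user|>/<|assistant|> markers works, but the tokenizer also ships Phi-3's chat template, so you can let Transformers build the prompt for you. Here's a sketch of that alternative (chat_v2 is an illustrative name, and it assumes the bundled template accepts a system turn):

def chat_v2(message):
    # Let the tokenizer apply Phi-3's own chat template instead of a hand-written prompt
    messages = [
        {"role": "system", "content": "Tu es Pigeon Harmony, un assistant IA amical qui parle français québécois authentique."},
        {"role": "user", "content": message},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the tokens generated after the prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True).strip()

Either version should behave the same way; the template route is just less likely to drift out of sync if the marker format ever changes.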

Step 5: Test it!

print("\n🐦 Testing Stable Pigeon Harmony:\n")

print("Toi: Salut! Comment ça va?") response = chat("Salut! Comment ça va?") print(f"Pigeon: {response}\n")

print("Toi: C'était quoi la Nouvelle-France?") response = chat("C'était quoi la Nouvelle-France?") print(f"Pigeon: {response}\n")

print("Toi: Raconte-moi une blague québécoise") response = chat("Raconte-moi une blague québécoise") print(f"Pigeon: {response}\n")

Step 6: Interactive chat!

print("\n💬 Mode interactif (tape 'bye' pour quitter):\n")

while True:
    user_input = input("\nToi: ")
    if user_input.lower() in ['bye', 'quit', 'exit', 'salut', 'tchao']:
        print("🐦 À plus tard! Lâche pas la patate!")
        break

    response = chat(user_input)
    print(f"\nPigeon: {response}")