πŸˆβ€β¬› Mistral-7B-v0.1-flashback-v2-instruct

Mistral-7B-v0.1-flashback-v2-instruct is an instruct-tuned version of the base model timpal0l/Mistral-7B-v0.1-flashback-v2. It has been fine-tuned on a machine-translated version of the OpenHermes2.5 instruct dataset.

How to use:

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
    device_map="auto"
)

# Swedish: "How many eggs do I have? I had 10 eggs, then I gave away 5 eggs.
# Then I got 3 eggs from a friend."
text = """
Hur mΓ₯nga Γ€gg har jag? Jag hade 10 Γ€gg, sen gav jag bort 5 Γ€gg.
Sen fick jag 3 Γ€gg av en kompis.
"""

# do_sample=True is needed for temperature to take effect.
generated = pipe(
    f"USER:{text}ASSISTANT:",
    max_length=512,
    do_sample=True,
    temperature=0.6,
)

# The pipeline output includes the prompt; keep only the model's reply.
print(generated[0]["generated_text"].split("ASSISTANT:")[-1].strip())

Output:

Du har 8 Γ€gg. HΓ€r Γ€r resonemanget:
1. Du bΓΆrjar med 10 Γ€gg
2. Du ger bort 5 Γ€gg, vilket lΓ€mnar dig med 10 - 5 = 5 Γ€gg
3. Sedan fΓ₯r du 3 Γ€gg av en kompis, vilket gΓΆr att du har 5 + 3 = 8 Γ€gg.
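The USER:/ASSISTANT: turn format and the reply extraction above can be wrapped in two small helpers. This is a sketch based only on the delimiters shown in the example; the function names are illustrative, not part of the model's API:

```python
def build_prompt(user_text: str) -> str:
    # Wrap the user's message in the USER:/ASSISTANT: format the
    # model was fine-tuned on.
    return f"USER:{user_text}ASSISTANT:"


def extract_reply(generated_text: str) -> str:
    # The text-generation pipeline returns the prompt followed by the
    # completion; keep only what comes after the last ASSISTANT: marker.
    return generated_text.split("ASSISTANT:")[-1].strip()
```

Used with the pipeline above: `print(extract_reply(pipe(build_prompt(text), max_length=512)[0]["generated_text"]))`.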