---
library_name: transformers
license: mit
---
# Chocolatine-Fusion-14B
**FINGU-AI/Chocolatine-Fusion-14B** is a merged model that combines **jpacifico/Chocolatine-2-14B-Instruct-v2.0b3** and **jpacifico/Chocolatine-2-14B-Instruct-v2.0b2**. It retains the strengths of the Chocolatine series while benefiting from an optimized fusion that improves reasoning and multi-turn conversation.
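The exact merge recipe is not published. As a point of reference only, fusions of sibling checkpoints like this are typically produced with [mergekit](https://github.com/arcee-ai/mergekit); a hypothetical SLERP configuration (the method, layer count, and interpolation weight below are assumptions, not the actual recipe) might look like:

```yaml
# Hypothetical mergekit config -- the real recipe for this model is not published.
# layer_range assumes the 48-layer Qwen2.5-14B-style architecture of the base models.
slices:
  - sources:
      - model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
        layer_range: [0, 48]
      - model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b2
        layer_range: [0, 48]
merge_method: slerp
base_model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
parameters:
  t: 0.5  # assumed equal blend of the two checkpoints
dtype: bfloat16
```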
## **Training & Fine-Tuning**
Chocolatine-Fusion-14B inherits the **DPO fine-tuning** of the Chocolatine-2 series, which is itself a fine-tuned version of **sometimesanotion/Lamarck-14B-v0.7**.
- The underlying models were trained on **French and English preference datasets** (including jpacifico/french-orca-dpo-pairs-revised) for enhanced bilingual capabilities; a minimal DPO training sketch follows this list.
- Long-context support extends up to **128K tokens**, with generation of up to **8K tokens** (see the long-context loading sketch below).
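For illustration, here is a minimal DPO training sketch using recent versions of the TRL library. The hyperparameters and the choice of base checkpoint are assumptions; this is not the authors' actual training script.

```python
# Minimal DPO sketch with TRL -- hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "jpacifico/Chocolatine-2-14B-Instruct-v2.0b3"  # assumed starting checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs; columns may need renaming to prompt/chosen/rejected
# depending on the dataset version.
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

args = DPOConfig(
    output_dir="chocolatine-dpo",
    beta=0.1,  # assumed DPO temperature
    per_device_train_batch_size=1,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```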
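The 128K-token window matches YaRN-style rope scaling as used by Qwen2.5-family models. Assuming this model inherits that architecture through its Lamarck (Qwen2.5-based) ancestry, a hypothetical way to enable the full window at load time is:

```python
# Hypothetical long-context loading sketch -- assumes a Qwen2.5-style
# architecture where YaRN rope scaling unlocks contexts beyond 32K tokens.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("FINGU-AI/Chocolatine-Fusion-14B")
config.rope_scaling = {  # assumption: Qwen2.5-style YaRN parameters
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    "FINGU-AI/Chocolatine-Fusion-14B",
    config=config,
    device_map="auto",  # requires accelerate
)
```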
## **OpenLLM Leaderboard**
Coming soon.
## **MT-Bench**
Coming soon.
## **Usage**
You can run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

# Build the chat prompt from a messages list
messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]
tokenizer = AutoTokenizer.from_pretrained("FINGU-AI/Chocolatine-Fusion-14B")
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Create the text-generation pipeline (named to avoid shadowing transformers.pipeline)
generator = transformers.pipeline(
    "text-generation",
    model="FINGU-AI/Chocolatine-Fusion-14B",
    tokenizer=tokenizer,
    device_map="auto",  # requires accelerate; remove to load on a single device
)

# Generate text; max_new_tokens bounds the completion rather than the full
# sequence, so the prompt does not eat into the generation budget
sequences = generator(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_new_tokens=200,
)
print(sequences[0]["generated_text"])
```
## **Limitations**
Chocolatine-Fusion-14B is a **demonstration of model merging techniques** rather than a standalone fine-tuned model.
- It does **not** have any built-in moderation mechanisms.
- Responses may vary based on the interaction and prompt style.
- Performance on **highly technical or domain-specific queries** may require further fine-tuning.
## **Developed by**
- **Author:** FINGU-AI, 2025
- **Base Models:** jpacifico/Chocolatine-2-14B-Instruct-v2.0b3, jpacifico/Chocolatine-2-14B-Instruct-v2.0b2
- **Language(s):** French, English
- **Model Type:** Merged LLM
- **License:** Apache-2.0