ericrisco committed on
Commit 75e2339 · verified · 1 Parent(s): 304b8b9

Update README.md

Files changed (1)
  1. README.md +64 -6
README.md CHANGED
@@ -10,14 +10,72 @@ tags:
  license: apache-2.0
  language:
  - en
+ datasets:
+ - ericrisco/gsm8k-translated-catalan
+ - ericrisco/gsm8k-translated-spanish
+ - openai/gsm8k
  ---
- # Uploaded model

- - **Developed by:** ericrisco
- - **License:** apache-2.0
- - **Finetuned from model :** BSC-LT/salamandra-2b-instruct

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ # Salamandra Model Card
+
+ Salamandra is a highly multilingual model pre-trained from scratch that comes in different sizes. This model card corresponds to the **2B instructed version**, fine-tuned using **GRPO (Group Relative Policy Optimization)** and **Unsloth**.
+
+ To visit the model cards of other Salamandra versions, please refer to the Model Index.
+
+ The entire Salamandra family is released under a permissive Apache 2.0 license. Along with the open weights, all training scripts and configuration files are publicly available in this GitHub repository.
+
+ ## Model Details
+
+ ### Description
+ Salamandra-2B is a **reasoning-focused** transformer-based language model fine-tuned with **GRPO**. It has been trained on **high-quality datasets**, including:
+
+ - **GSM8K (English)**
+ - **GSM8K Translated (Spanish)**
+ - **GSM8K Translated (Catalan)**
+
+ This dataset selection allows the model to **reason through complex problems** in multiple languages. Instead of relying on traditional supervised fine-tuning, **GRPO optimizes the model through reward-based reinforcement learning**, making it more adaptive to structured reasoning tasks.
+
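+ The exact training setup is not reproduced in this card, but the recipe described above (the three GSM8K variants plus a reward-based GRPO loop) can be sketched with TRL's `GRPOTrainer`. The column names assumed for the translated datasets, the correctness reward, and the hyperparameters below are illustrative, not the configuration actually used for this checkpoint; Unsloth, mentioned above, would typically wrap the base model for speed and is omitted for brevity:
+
+ ```python
+ from datasets import concatenate_datasets, load_dataset
+ from trl import GRPOConfig, GRPOTrainer
+
+ def final_number(text: str) -> str:
+     # GSM8K references end with "#### <answer>"; completions are assumed to follow the same convention.
+     return text.split("####")[-1].strip()
+
+ def correctness_reward(prompts, completions, answer, **kwargs):
+     # 1.0 when the completion's final answer matches the reference answer, else 0.0.
+     return [1.0 if final_number(c) == final_number(a) else 0.0
+             for c, a in zip(completions, answer)]
+
+ # Mix the three GSM8K variants listed in the metadata (question/answer columns assumed to match).
+ gsm8k_en = load_dataset("openai/gsm8k", "main", split="train")
+ gsm8k_es = load_dataset("ericrisco/gsm8k-translated-spanish", split="train")
+ gsm8k_ca = load_dataset("ericrisco/gsm8k-translated-catalan", split="train")
+ train_dataset = concatenate_datasets([gsm8k_en, gsm8k_es, gsm8k_ca]).map(
+     lambda ex: {"prompt": ex["question"]}  # GRPOTrainer samples completions from a "prompt" column
+ )
+
+ trainer = GRPOTrainer(
+     model="BSC-LT/salamandra-2b-instruct",  # base instructed model named in this card
+     reward_funcs=correctness_reward,
+     train_dataset=train_dataset,
+     args=GRPOConfig(output_dir="salamandra-2b-grpo", num_generations=4, max_completion_length=512),
+ )
+ trainer.train()
+ ```
+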
+ ## Intended Use
+
+ ### Direct Use
+ The model is designed as a **reasoning assistant** capable of structured problem-solving across different domains. It can be used for:
+ - Logical and mathematical reasoning tasks
+ - Multi-step question answering
+ - Instruction following in multilingual contexts, as in the brief example below
+
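+ For example, a multi-step word problem can be posed directly in Spanish. The snippet below is a compact variant of the fuller example in the How to Use section; only the question text is invented for illustration:
+
+ ```python
+ from datetime import datetime
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "ericrisco/salamandra-2b-grpo"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
+
+ # A GSM8K-style multi-step word problem, posed in Spanish.
+ question = (
+     "Un tren recorre 60 km por hora durante 2 horas y luego 80 km por hora "
+     "durante 3 horas. ¿Cuántos kilómetros recorre en total?"
+ )
+
+ prompt = tokenizer.apply_chat_template(
+     [{"role": "user", "content": question}],
+     tokenize=False,
+     add_generation_prompt=True,
+     date_string=datetime.today().strftime("%Y-%m-%d"),
+ )
+ inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
+ print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
+ ```
+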
+ ### Out-of-scope Use
+ The model is not intended for malicious applications or any activity that violates legal or ethical standards.
+
+ ## How to Use
+
+ The instruction-following models use the **ChatML template** for structured dialogue formatting:
+
+ ```python
+ from datetime import datetime
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "ericrisco/salamandra-2b-grpo"
+
+ text = "At what temperature does water boil?"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="auto",
+     torch_dtype=torch.bfloat16
+ )
+
+ # Build a ChatML-style prompt; the chat template takes today's date via date_string.
+ message = [{"role": "user", "content": text}]
+ date_string = datetime.today().strftime('%Y-%m-%d')
+
+ prompt = tokenizer.apply_chat_template(
+     message,
+     tokenize=False,
+     add_generation_prompt=True,
+     date_string=date_string
+ )
+
+ inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
+ outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)
+
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))