---
metrics:
- accuracy
- code_eval
---
# Model Card for Evaluate360M

## Model Details

### Model Description

Evaluate360M is a lightweight language model optimized for reasoning tasks. It is designed to run efficiently on low-end consumer hardware, such as mobile phones, while maintaining strong performance on logical reasoning and general-purpose applications.

- **Developed by:** [More Information Needed]  
- **Funded by [optional]:** [More Information Needed]  
- **Shared by [optional]:** [More Information Needed]  
- **Model type:** Transformer-based decoder model  
- **Language(s) (NLP):** English  
- **License:** [More Information Needed]  
- **Finetuned from model [optional]:** `HuggingFaceTB/SmolLM2-360M-Instruct`  

### Model Sources

- **Repository:** [More Information Needed]  
- **Paper [optional]:** [More Information Needed]  
- **Demo [optional]:** [More Information Needed]  

## Uses

### Direct Use  
Evaluate360M is intended for general-purpose reasoning tasks and can be used in applications that require lightweight LLMs, such as:  
- Mobile-based AI assistants  
- Low-power embedded systems  
- Edge computing applications  

### Downstream Use  
It can be further fine-tuned for specific domains, including code generation, summarization, or dialogue systems.

### Out-of-Scope Use  
- Not optimized for handling very large context windows  
- Not designed for generating high-fidelity creative text, such as poetry or fiction  

## Bias, Risks, and Limitations  

### Limitations  
- Struggles with large context windows.  
- Has not yet been evaluated for potential biases.  

### Recommendations  
Users should be aware of the model’s limitations in context length and should evaluate its performance for their specific use cases.  

## How to Get Started with the Model  

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "evaluate360m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate up to 100 new tokens
# (without max_new_tokens, generate() stops after only 20 tokens by default).
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
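
Since Evaluate360M is fine-tuned from an instruction-tuned checkpoint, chat-formatted prompts will likely work better than raw text. A minimal sketch, assuming the tokenizer inherits its chat template from `HuggingFaceTB/SmolLM2-360M-Instruct`:

```python
# Assumption: the tokenizer ships a chat template inherited from the base model.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```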

## Training Details  

### Training Data  
- **Dataset:** `HuggingFaceH4/Bespoke-Stratos-17k`  
- **Preprocessing:** Token packing enabled (`--packing`), sequence length up to 2048 tokens  

### Training Procedure  
- **Precision & Memory:**  
  - `bf16` mixed precision  
  - `gradient_accumulation_steps = 8`  
  - Gradient checkpointing enabled  
- **Hyperparameters:**  
  - Learning rate: `2e-5`  
  - Epochs: `3`  
  - Batch size: `4` (per device, both training and evaluation)  
- **Evaluation & Saving:**  
  - Evaluation every `500` steps  
  - Model checkpoint saved every `1000` steps, keeping a max of `2` checkpoints  
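
Taken together, these settings map onto a fine-tuning run roughly like the sketch below. This is a hedged reconstruction, not the authors' actual script: it assumes TRL's `SFTTrainer` (keyword names such as `max_seq_length`, `packing`, and `evaluation_strategy` vary across `trl`/`transformers` versions), and the output directory and train/eval split are illustrative.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Illustrative 95/5 split so that step-based evaluation has data to run on.
dataset = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")
dataset = dataset.train_test_split(test_size=0.05)

args = TrainingArguments(
    output_dir="evaluate360m",        # assumed output path
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    bf16=True,
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=1000,
    save_total_limit=2,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M-Instruct",  # base checkpoint named in this card
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    max_seq_length=2048,              # sequence length from this card
    packing=True,                     # the `--packing` flag above
)
trainer.train()
```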

### Compute Infrastructure  
- **Hardware Used:** A100 GPU  
- **Training Time:** 6 hours  

## Evaluation  

- **Benchmarks:** No evaluation conducted yet.  
- **Metrics:** Not available yet.  

## Environmental Impact  

- **Hardware Type:** A100 GPU  
- **Hours Used:** 6 hours  
- **Cloud Provider:** [More Information Needed]  
- **Compute Region:** [More Information Needed]  
- **Carbon Emitted:** [More Information Needed]  

## Technical Specifications  

### Model Architecture  
- Follows the SmolLM2-360M architecture  
- Design choices inspired by MobileLLM  
- Uses **Grouped-Query Attention (GQA)** to reduce the KV-cache footprint  
- Prioritizes depth over width  
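
As a rough illustration of what GQA plus "depth over width" means in configuration terms, the snippet below builds a config with fewer key/value heads than query heads and a deep, narrow stack. The values are assumptions drawn from the public SmolLM2-360M configuration, not confirmed numbers for Evaluate360M:

```python
from transformers import LlamaConfig

# Assumed values, based on the public SmolLM2-360M config.
config = LlamaConfig(
    hidden_size=960,          # narrow width...
    num_hidden_layers=32,     # ...but many layers: depth over width
    num_attention_heads=15,   # query heads
    num_key_value_heads=5,    # 3 query heads share each KV head (GQA)
)
```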

## Citation [optional]  

**BibTeX:**  
[More Information Needed]  

**APA:**  
[More Information Needed]  

## More Information  

[More Information Needed]  

## Model Card Authors [optional]  

[More Information Needed]  

## Model Card Contact  

[More Information Needed]