Add sample usage and license to model card
#1
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -1,16 +1,17 @@
---
language:
- pt
metrics:
- accuracy
-base_model:
-- mistralai/Mistral-7B-v0.3
pipeline_tag: text-generation
-library_name: transformers
tags:
- legal
- portuguese
- Brazil
---

# Juru: Legal Brazilian Large Language Model from Reputable Sources

@@ -19,12 +20,35 @@ This repository hosts the public checkpoints for **Juru-7B**, a Mistral-7B speci

## Checkpoints

-*
-*
-*

> **Note:** The model has **not** been instruction finetuned. For best results, use few-shot inference or perform additional finetuning on your specific task.

## Citation information

```bibtex

---
+base_model:
+- mistralai/Mistral-7B-v0.3
language:
- pt
+library_name: transformers
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- legal
- portuguese
- Brazil
+license: cc-by-4.0
---

# Juru: Legal Brazilian Large Language Model from Reputable Sources

## Checkpoints

+* Checkpoints were saved every **200** optimization steps up to step **3,800**.
+* Each 200-step interval adds **~0.4 billion** tokens of continued pretraining.
+* We refer to **Juru-7B** as checkpoint **3,400** (~7.1 billion tokens), which achieved the best score on our Brazilian legal knowledge benchmarks (see the loading sketch below).
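
If the intermediate checkpoints are published as revisions (branches) of the Hub repository, a specific one can be selected with the `revision` argument of `from_pretrained`. The snippet below is only a sketch under that assumption; the repository id is the one used in the usage example further down, and the branch names shown in the comments are hypothetical:

```python
# Sketch: list the revisions (branches) that exist in the repo, then load one of them.
# Assumption: intermediate checkpoints are exposed as branches of this repo.
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM
import torch

model_id = "juru-llm/Juru-7B"

refs = list_repo_refs(model_id)
print([branch.name for branch in refs.branches])  # e.g. ["main", "step-3400", ...] (hypothetical names)

# Load a specific checkpoint by revision (replace "main" with a branch printed above).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```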
|
> **Note:** The model has **not** been instruction finetuned. For best results, use few-shot inference or perform additional finetuning on your specific task.

+## Usage
+
+You can use the model with the `transformers` library:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+model_id = "juru-llm/Juru-7B"
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+# Use an appropriate dtype for large models, e.g. torch.bfloat16 or torch.float16.
+model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+
+# "What is the deadline for filing a defense in a judicial proceeding in Brazil?"
+prompt = "Qual é o prazo para apresentação de defesa em um processo judicial no Brasil?"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+# Generate a response.
+# Adjust generation parameters like max_new_tokens, do_sample, top_p, and temperature as needed.
+outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9, temperature=0.7)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
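
Because the model is not instruction tuned (see the note above), a few-shot prompt usually works better than a bare question. Below is a minimal, self-contained sketch of that pattern; the bracketed question/answer pairs are placeholders, not verified legal content:

```python
# Few-shot sketch: prepend worked question/answer pairs before the real question.
# Replace the bracketed placeholders with real examples from your task.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "juru-llm/Juru-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

few_shot_prompt = (
    "Pergunta: <exemplo de pergunta jurídica 1>\n"
    "Resposta: <resposta de referência 1>\n\n"
    "Pergunta: <exemplo de pergunta jurídica 2>\n"
    "Resposta: <resposta de referência 2>\n\n"
    "Pergunta: Qual é o prazo para apresentação de defesa em um processo judicial no Brasil?\n"
    "Resposta:"
)

inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
# Greedy decoding keeps the continuation close to the few-shot pattern; sampling also works.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```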
|
## Citation information

```bibtex