# internlm-chatbode-7b
InternLM-ChatBode is a language model tuned for Portuguese, built on top of InternLM2. It was refined by fine-tuning the base model on the UltraAlpaca dataset.
## Main Features
- **Base model:** internlm/internlm2-chat-7b
- **Fine-tuning dataset:** UltraAlpaca
- **Training:** QLoRA fine-tuning of internlm2-chat-7b
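The card does not publish the QLoRA configuration. As a rough illustration only, a QLoRA setup with `peft` and `bitsandbytes` typically looks like the sketch below; every hyperparameter value and target-module name here is an assumption, not the authors' actual recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA loads the frozen base model in 4-bit NF4 precision
# (quantization settings below are illustrative assumptions)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)

# Low-rank adapters are attached on top of the quantized weights;
# rank, alpha, and target modules are hypothetical choices
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["wqkv", "wo"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```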
## Usage Example
The following code shows how to load and use the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

response, history = model.chat(tokenizer, "Olá", history=[])
print(response)
response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
print(response)
```
Responses can also be streamed with the `stream_chat` method:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()

length = 0
for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```
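The loop above prints only the suffix that each partial response adds on top of the previous one. The same delta-printing pattern can be shown in isolation with a plain generator (`fake_stream` below is a hypothetical stand-in for `stream_chat`):

```python
def fake_stream():
    # Stand-in for model.stream_chat: yields growing partial responses.
    partials = ["Ol", "Olá", "Olá, como", "Olá, como posso ajudar?"]
    for p in partials:
        yield p

length = 0
chunks = []
for response in fake_stream():
    chunk = response[length:]  # only the text added since the last yield
    print(chunk, flush=True, end="")
    chunks.append(chunk)
    length = len(response)
print()

# Concatenating the printed deltas reconstructs the full response.
assert "".join(chunks) == "Olá, como posso ajudar?"
```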
## Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the Open Portuguese LLM Leaderboard.
| Metric | Value |
|---|---|
| Average | 69.54 |
| ENEM Challenge (No Images) | 63.05 |
| BLUEX (No Images) | 51.46 |
| OAB Exams | 42.32 |
| Assin2 RTE | 91.33 |
| Assin2 STS | 80.69 |
| FaQuAD NLI | 79.80 |
| HateBR Binary | 87.99 |
| PT Hate Speech Binary | 68.09 |
| tweetSentBR | 61.11 |
## Citation
If you wish to use ChatBode in your research, please cite it as follows:
```bibtex
@misc{chatbode_2024,
    author    = {Gabriel Lino Garcia and Pedro Henrique Paiola and João Paulo Papa},
    title     = {Chatbode},
    year      = {2024},
    url       = {https://huggingface.co/recogna-nlp/internlm-chatbode-7b/},
    doi       = {10.57967/hf/3317},
    publisher = {Hugging Face}
}
```