This model was developed through the LLM research consortium between MediaGroup Saram-gwa-Soop Co., Ltd. and Marker Inc.
The license is cc-by-nc-sa-4.0.
Dear_My_best_Friends-13B
The main image was generated using Playground AI.
Model Details
Model Developers: Seungyoo Lee (DopeorNope)
Input: Models input text only.
Output: Models generate text only.
Model Architecture
Dear_My_best_Friends-13B is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.
Base Model: DopeorNope/COKAL_pre_DPO_Test_v3-13b (not uploaded yet)
COKAL_pre_DPO_Test_v3-13b is the SFT model used as the starting point for DPO training.
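For context, DPO fine-tuning on top of such an SFT checkpoint commonly looks like the sketch below, using the trl library. This is a hypothetical illustration rather than the author's actual training script: the local dataset file name, the hyperparameters, and the exact DPOTrainer arguments (which vary across trl versions) are assumptions.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_repo = "DopeorNope/COKAL_pre_DPO_Test_v3-13b"  # SFT base checkpoint (not uploaded yet)

# Policy model to optimize and a frozen reference copy for the DPO penalty term.
model = AutoModelForCausalLM.from_pretrained(sft_repo)
ref_model = AutoModelForCausalLM.from_pretrained(sft_repo)
tokenizer = AutoTokenizer.from_pretrained(sft_repo)

# A DPO dataset needs "prompt", "chosen", and "rejected" columns.
# "dpo_ko_dataset.jsonl" is a hypothetical local export of the private dataset.
train_dataset = load_dataset("json", data_files="dpo_ko_dataset.jsonl", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # strength of the penalty keeping the policy close to the reference
    args=TrainingArguments(output_dir="dpo-output", per_device_train_batch_size=1),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()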
Training Dataset
- DPO training dataset: DopeorNope/DPO-Ko-Dataset (private)
This dataset was constructed by DopeorNope, who directly collected and reorganized the data into a paired format, drawing on insights from "lvwerra/stack-exchange-paired"; no data from stack-exchange-paired itself was used.
- SFT training dataset: DopeorNope/New_Data_Technology (private)
This dataset is based on HumanF-MarkrAI's private data and was deduplicated with a Near Dedup algorithm, removing items with a Jaccard similarity of 0.8 or higher. Inconsistent inputs were also cleaned and corrected. In addition, a new method was applied (it is a test version and will be shared soon). A minimal sketch of the deduplication step is shown after this list.
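The sketch below illustrates near-deduplication with a Jaccard-similarity threshold of 0.8. The character n-gram shingling and the pairwise comparison loop are illustrative assumptions, not the actual pipeline; production pipelines typically use MinHash/LSH instead of exact pairwise comparison.

def shingles(text, n=5):
    # Character n-grams used to compare two samples.
    text = " ".join(text.split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a, b):
    # Jaccard similarity: size of the intersection over size of the union.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_dedup(samples, threshold=0.8):
    # Keep a sample only if it is less than `threshold` similar to every kept sample.
    kept, kept_shingles = [], []
    for text in samples:
        s = shingles(text)
        if all(jaccard(s, k) < threshold for k in kept_shingles):
            kept.append(text)
            kept_shingles.append(s)
    return kept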
Training
I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04.
When the model is uploaded to a repository directly from a Linux server, the repository may report a larger parameter count than expected; regardless, this model is based on a 13B architecture.
Implementation Code
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/Dear_My_best_Friends-13B"

# Load the model in half precision and place it across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the matching tokenizer from the same repository.
model_tokenizer = AutoTokenizer.from_pretrained(repo)
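A minimal generation example, assuming the model and tokenizer load as above; the prompt and decoding parameters are illustrative choices, not recommended settings.

prompt = "Tell me about your best friend."
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )

print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))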
