---
tags:
  - text-generation
license: cc-by-nc-4.0
language:
  - ko
base_model: Edentns/DataVortexS-10.7B-dpo-v1.11
pipeline_tag: text-generation
---

Model Details

Base Model

Edentns/DataVortexS-10.7B-dpo-v1.11

Trained On

  • GPU: 8× NVIDIA A100 80GB

Instruction format

This model follows the Alpaca (Chat) instruction format.
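
As a rough illustration, an Alpaca-style prompt lays out the system message, instruction, and response cue as labeled sections. The exact markers below are an assumption based on the common Alpaca convention, not taken from this model's tokenizer; the tokenizer's built-in chat_template (used in the Implementation Code section) is the authoritative source.

```python
# Hypothetical sketch of an Alpaca-style prompt layout. The "### ..." markers
# follow the common Alpaca convention and may differ from this model's actual
# chat_template; prefer tokenizer.apply_chat_template in real code.
def build_alpaca_prompt(system: str, instruction: str) -> str:
    return (
        f"{system}\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "You are an AI assistant that helps people find information.",
    "Tell me about Admiral Yi Sun-sin",
)
print(prompt)
```

In practice you would not build this string by hand; it only shows the structure the chat template produces before tokenization.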

Implementation Code

The tokenizer ships with a chat_template that encodes this instruction format.
You can use the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Raphael21/Raphael21-SOLAR-10.7B")
tokenizer = AutoTokenizer.from_pretrained("Raphael21/Raphael21-SOLAR-10.7B")

messages = [
    # "You are an AI assistant that helps people find information."
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
    # "Tell me about Admiral Yi Sun-sin."
    {"role": "user", "content": "이순신 장군에 대해 설명해줘"},
]

# Render the chat template and tokenize in one step.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])

License

This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.