---
library_name: transformers
datasets:
- nvidia/Aegis-AI-Content-Safety-Dataset-2.0
language:
- en
base_model:
- openai-community/gpt2
---

### How to use

```py
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
import torch

tokenizer = AutoTokenizer.from_pretrained("entfane/gpt2_constitutional_classifier_with_value_head")
model = AutoModelForCausalLMWithValueHead.from_pretrained("entfane/gpt2_constitutional_classifier_with_value_head", device_map="cuda")

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "How are you doing?"},
    {"role": "assistant", "content": "I am good"},
]

# return_dict=True is required so the output can be unpacked into model(**inputs)
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_dict=True, return_tensors="pt").to("cuda")

# The value-head model returns (lm_logits, loss, values); values holds one raw score per token
_, _, values = model(**inputs)
print(torch.sigmoid(values))
```
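
The `values` tensor above has shape `(batch_size, sequence_length)`, one raw score per token. The model card does not say how to reduce this to a single classification, so the snippet below is only a sketch under the common convention of reading the last token's value as the sequence-level score; the dummy tensor, the 0.5 threshold, and the last-token convention are all assumptions, not part of the model's documented API:

```python
import torch

# Dummy stand-in for the per-token `values` tensor returned by the model,
# shape (batch_size, sequence_length). Replace with real model output.
values = torch.tensor([[-1.2, 0.3, 2.1]])

# Squash raw scores into probabilities, then read the last token's
# probability as the sequence-level score (an assumed convention).
probs = torch.sigmoid(values)
score = probs[:, -1].item()
is_flagged = score > 0.5  # assumed decision threshold
print(f"score={score:.3f}, flagged={is_flagged}")
```

With a real input, substitute the model's `values` output for the dummy tensor; the sigmoid and last-token indexing stay the same.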