Cglowacki committed
Commit fc1f5f5 · verified · 1 Parent(s): c020677

Update README.md with corrected model usage plus some code snippets

Files changed (1): README.md (+52, −4)
README.md CHANGED
@@ -97,12 +97,13 @@ Elenky is designed for philosophical dialogue and reflective questioning, especi
 To load the model using the Hugging Face Transformers library:
 
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
-model_name = "cg2wc/elenky"
+model_name = "Cglowacki/elenky"
 
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
+model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
 
 USER_PROMPT = "2 + 2 = 4"
 prompt = f"""<|system|>
@@ -121,6 +122,54 @@ print(response)
 
 Make sure to manage conversation history by prepending previous turns as the dialogue continues.
 
+### Helpful Code
+
+#### Extracting a single turn from output
+
+Below is example code for extracting a single assistant response from the model output:
+
+```python
+def extract_template_response(response: str, prompt: str):
+    single_turn_output = ""
+    new_content = response[len(prompt):]
+    user_token_idx = new_content.find("<|user|>")
+    if user_token_idx != -1:
+        single_turn_output = new_content[:user_token_idx]
+    else:
+        single_turn_output = new_content
+    return single_turn_output
+```
+
+#### Generating a model prompt from a list
+
+Below is example code for turning a list of turns into a formatted text prompt:
+
+```python
+from typing import List
+
+chat = [
+    {"role": "user", "content": "2 + 2 = 4"},
+    {"role": "assistant", "content": "If 2 + 2 equals 4, what does it mean for the concept of truth in mathematics? Is it absolute?"},
+    {"role": "user", "content": "Mathematical truths are absolute."},
+    {"role": "assistant", "content": "If that's the case, how do you explain the existence of mathematical contradictions in certain systems? Does that undermine the idea of absolute truth in mathematics?"}
+]
+
+SYS_PROMPT = """<|system|>
+You, assistant, are a philosophy expert engaging in a Socratic discussion about a particular philosophical concept with me, user.
+The first speaker, user, will seek to make claims about a stance.
+The second speaker, assistant, will play devil's advocate and respond with a question about what user has said that seeks to expand the conversation."""
+
+def generate_fulltext_prompt(chat: List[dict]):
+    lines = []
+    for turn in chat:
+        if turn['role'] == 'user':
+            lines.append(f"<|user|> {turn['content']}")
+        elif turn['role'] == 'assistant':
+            lines.append(f"<|assistant|> {turn['content']}")
+    return "\n".join([SYS_PROMPT, *lines])
+```
+
+
 ## 📝 Prompt Format
 
 Elenky uses a custom prompting strategy that leverages elements of chain-of-thought (CoT) prompting. Each interaction tells the model to act as a philosophy expert committed to open inquiry. Rather than aiming for quick answers, it’s guided to ask probing questions, reveal contradictions, and deepen the conversation.
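The two helpers added in this commit are pure string functions, so they can be exercised without downloading the checkpoint. Below is a minimal sketch of how a full turn round-trips through them; the "model output" is a simulated string, and the trailing `<|assistant|>` cue appended to the prompt is an assumption about how the README's template ends, not something this commit specifies.

```python
from typing import List

# System prompt copied from the README's generate_fulltext_prompt example.
SYS_PROMPT = """<|system|>
You, assistant, are a philosophy expert engaging in a Socratic discussion about a particular philosophical concept with me, user.
The first speaker, user, will seek to make claims about a stance.
The second speaker, assistant, will play devil's advocate and respond with a question about what user has said that seeks to expand the conversation."""

def generate_fulltext_prompt(chat: List[dict]) -> str:
    # Flatten role-tagged turns into the <|user|>/<|assistant|> text format.
    lines = []
    for turn in chat:
        if turn["role"] == "user":
            lines.append(f"<|user|> {turn['content']}")
        elif turn["role"] == "assistant":
            lines.append(f"<|assistant|> {turn['content']}")
    return "\n".join([SYS_PROMPT, *lines])

def extract_template_response(response: str, prompt: str) -> str:
    # Drop the echoed prompt, then truncate at the first invented <|user|> turn.
    new_content = response[len(prompt):]
    user_token_idx = new_content.find("<|user|>")
    return new_content[:user_token_idx] if user_token_idx != -1 else new_content

chat = [{"role": "user", "content": "2 + 2 = 4"}]
# Appending "<|assistant|>" cues the model to speak next (assumed convention).
prompt = generate_fulltext_prompt(chat) + "\n<|assistant|>"

# Simulated decode of model.generate() output: causal LMs echo the prompt and
# often run on into an invented user turn, which extraction must cut off.
raw_output = prompt + " Is that truth absolute?\n<|user|> Yes, it is."
single_turn = extract_template_response(raw_output, prompt)
print(single_turn.strip())  # -> Is that truth absolute?
```

The truncation at `<|user|>` matters because this template relies on plain text markers rather than special stop tokens, so generation does not reliably halt after one assistant turn.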