staedi committed (verified) · Commit a19b59a · Parent(s): fa57d6c

Update README.md

Files changed (1): README.md (+14 −8)

README.md CHANGED
@@ -209,6 +209,8 @@ base_model: meta-llama/llama-3.2-3B-Instruct
 
 # staedi/coref-llama-3.2
 
+Coreference Resolution fine-tuned model with `mlx-lm` (base model: `meta-llama/llama-3.2-3B-Instruct`).
+
 This model [staedi/coref-llama-3.2](https://huggingface.co/staedi/coref-llama-3.2) was
 converted to MLX format from [meta-llama/llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/llama-3.2-3B-Instruct)
 using mlx-lm version **0.26.2**.
@@ -221,16 +223,20 @@ pip install mlx-lm
 
 ```python
 from mlx_lm import load, generate
-
-model, tokenizer = load("staedi/coref-llama-3.2")
-
-prompt = "hello"
-
+# Load the fine-tuned model
+model, tokenizer = load("staedi/coref-llama-3.2")
+# Text to resolve coreferences in
+text = "Apple announced its earnings. The company performed well."
+# Create the instruction prompt
+prompt = (
+    "Resolve all coreferences in the following text by replacing pronouns and "
+    "descriptive references with their original entities. Maintain the same "
+    "meaning and structure while making all references explicit:\n" + text
+)
 if tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
+    messages = [{'role': 'user', 'content': prompt}]
     prompt = tokenizer.apply_chat_template(
         messages, add_generation_prompt=True
     )
-
 response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+```
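The instruction text added in this commit is a fixed prefix, so it can be reused for any input. A minimal sketch, where `build_coref_prompt` is a hypothetical helper name (not part of the repository or of mlx-lm):

```python
# Illustrative sketch: the instruction mirrors the prompt template from the
# README; build_coref_prompt is a hypothetical helper, not an mlx-lm API.
INSTRUCTION = (
    "Resolve all coreferences in the following text by replacing pronouns and "
    "descriptive references with their original entities. Maintain the same "
    "meaning and structure while making all references explicit:\n"
)

def build_coref_prompt(text: str) -> str:
    """Prepend the coreference-resolution instruction to the input text."""
    return INSTRUCTION + text

prompt = build_coref_prompt("Apple announced its earnings. The company performed well.")
```

The resulting string can be passed through `tokenizer.apply_chat_template` and `generate` exactly as in the snippet above.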