tmasis committed
Commit f9447c8 · verified · 1 Parent(s): 70476af

Update README.md

Files changed (1)
  1. README.md +34 -2
README.md CHANGED
@@ -23,10 +23,42 @@ The model is trained on 13k examples from the training subset of the [GeoCoDe da
 ### Limitations
 Due to data limitations, this model has been trained and evaluated for our task only in Mainstream American English.
 
-
-### Usage
+### Usage (unsloth)
 The following code snippet illustrates how to use the model. For the system prompt we used and for example prompts, please see the appendices in the accompanying paper.
 
+```python
+from unsloth import FastLanguageModel
+import torch
+
+model_name = "tmasis/geocoding-complex-location-references"
+
+# Load model and tokenizer from Huggingface Hub
+model, tokenizer = FastLanguageModel.from_pretrained(
+    model_name = model_name,
+    max_seq_length = 2048,
+    load_in_4bit = True,
+)
+FastLanguageModel.for_inference(model)
+
+# Prepare model input
+messages = [{"role": "system", "content": <system_prompt>},
+            {"role": "user", "content": <prompt>}]
+text = tokenizer.apply_chat_template(messages,
+                                     tokenize=False,
+                                     add_generation_prompt=True,
+                                     enable_thinking=False
+)
+
+# Conduct text generation
+outputs = model.generate(**tokenizer(text, return_tensors="pt").to(model.device),
+                         max_new_tokens=1024, temperature=0.7, top_p=0.8, top_k=20)
+response = tokenizer.batch_decode(outputs)[0]
+print(response)
+```
+
+### Usage (HuggingFace transformers)
+Alternatively, you can use the HuggingFace transformers library.
+
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
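The hunk ends partway through the transformers example. For orientation only, here is a minimal sketch of how that second snippet typically continues, assuming the standard `AutoTokenizer`/`AutoModelForCausalLM` chat-template API and the same generation settings as the unsloth path; the actual continuation is not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tmasis/geocoding-complex-location-references"

# Load model and tokenizer from the Huggingface Hub
# (device_map="auto" requires the accelerate package)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Prepare model input; the placeholder strings stand in for the
# system prompt and example prompts given in the paper appendices
messages = [{"role": "system", "content": "<system_prompt>"},
            {"role": "user", "content": "<prompt>"}]
text = tokenizer.apply_chat_template(messages,
                                     tokenize=False,
                                     add_generation_prompt=True)

# Conduct text generation with the same sampling settings as above
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True,
                         temperature=0.7, top_p=0.8, top_k=20)
response = tokenizer.batch_decode(outputs)[0]
print(response)
```

Note that plain `transformers` defaults to greedy decoding, so `do_sample=True` is needed for the `temperature`/`top_p`/`top_k` settings to take effect.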