Professor committed
Commit bd1a69a · verified · Parent(s): d5ef43e

add code for testing

Files changed (1): README.md +27 -0
README.md CHANGED
@@ -32,6 +32,33 @@ The fine-tuning was performed using the PEFT-LoRa technique, aiming to improve t
  - Generation of Yoruba text with correct diacritics
  - Natural language processing tasks for Yoruba language
 
+ ## Code for Testing:
+
+ ```python
+ import torch
+ from peft import PeftModel, PeftConfig
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ # Load the adapter config (records the base model ID) and the base checkpoint
+ config = PeftConfig.from_pretrained("Professor/yoruba-diacritics-quantized")
+ model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
+
+ # Attach the LoRA adapter weights to the base model
+ model = PeftModel.from_pretrained(model, "Professor/yoruba-diacritics-quantized")
+ tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")
+
+ # Undiacritized Yoruba input whose diacritics should be restored
+ inputs = tokenizer(
+     "Mo ti so fun bobo yen sha, aaro la wa bayi",
+     return_tensors="pt",
+ )
+
+ device = "cpu"  # set to "cuda" if a GPU is available
+ model.to(device)
+
+ with torch.no_grad():
+     inputs = {k: v.to(device) for k, v in inputs.items()}
+     # raise max_new_tokens for longer outputs
+     outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
+
+ print(tokenizer.batch_decode(outputs.cpu(), skip_special_tokens=True))
+ ```
+
  ## Intended uses & limitations
 
  More information coming
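
A note on the snippet above: the base checkpoint ID is hard-coded even though the loaded `PeftConfig` already records it, which leaves `config` unused. A minimal sketch of the alternative, assuming the adapter config's `base_model_name_or_path` field points at `Davlan/mT5_base_yoruba_adr`:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "Professor/yoruba-diacritics-quantized"

# Read the base model ID from the adapter config instead of hard-coding it
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```

This keeps the base model and adapter in sync if the adapter is later retrained against a different checkpoint.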