zay25 committed
Commit 4d62aa7 · verified · 1 Parent(s): d2b2a1e
Files changed (1):
  1. README.md (+9 -9)

README.md CHANGED
@@ -30,15 +30,6 @@ This model is intended for multiple-choice question answering (MCQA) tasks, part
  - Not intended for open-ended or dialog generation
  - Not suitable for high-stakes decision-making or critical applications without human oversight
 
- ---
-
- ## How to Use
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model = AutoModelForCausalLM.from_pretrained("zay25/MNLP_M3_quantized_model", trust_remote_code=True)
- tokenizer = AutoTokenizer.from_pretrained("zay25/MNLP_M3_quantized_model")
 
  ## Training Details
 
@@ -50,3 +41,12 @@ tokenizer = AutoTokenizer.from_pretrained("zay25/MNLP_M3_quantized_model")
  - Per-channel zero point: enabled
  - **Calibration dataset**: 512 samples from `hssawhney/Reasoning-Dataset`
+ ---
+
+ ## How to Use
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("zay25/MNLP_M3_quantized_model", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("zay25/MNLP_M3_quantized_model")
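The relocated "How to Use" snippet stops after loading the model and tokenizer. Since the card describes an MCQA model, a minimal continuation might look like the sketch below; the prompt template is an illustrative assumption, not a format documented by the model card, and the generation calls (which require downloading the weights) are shown as comments.

```python
def format_mcqa_prompt(question, choices):
    """Build a simple multiple-choice prompt.

    Hypothetical template for illustration; the model card does not
    specify an expected prompt format.
    """
    lines = [f"Question: {question}"]
    lines += [f"{letter}. {text}" for letter, text in zip("ABCD", choices)]
    lines.append("Answer:")
    return "\n".join(lines)

prompt = format_mcqa_prompt("What is 2 + 2?", ["3", "4", "5", "6"])
print(prompt)

# With the model and tokenizer loaded as in the snippet above:
# inputs = tokenizer(prompt, return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=5)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```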