Zenos5 committed
Commit 766ea9e · verified · 1 Parent(s): 90ccd26

Upload 24 files
Llemma_Finetuned.py ADDED
@@ -0,0 +1,36 @@
+ import transformers
+ import torch
+ # from transformers import BitsAndBytesConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ from deepeval.models import DeepEvalBaseLLM
+
+
+ class Llemma_Finetuned(DeepEvalBaseLLM):
+     def __init__(
+         self,
+         model,
+         tokenizer
+     ):
+         self.model = model
+         self.tokenizer = tokenizer
+
+     def load_model(self):
+         return self.model
+
+     def generate(self, prompt: str) -> str:
+         model = self.load_model()
+
+         device = "cuda"  # the device to load the model onto
+
+         model_inputs = self.tokenizer([prompt], return_tensors="pt").to(device)
+         model.to(device)
+
+         generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
+         return self.tokenizer.batch_decode(generated_ids)[0]
+
+     async def a_generate(self, prompt: str) -> str:
+         return self.generate(prompt)
+
+     def get_model_name(self):
+         return "Llemma Finetuned"
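For context, a minimal sketch of the wrapper pattern this class follows: tokenize the prompt, call the model's `generate`, decode the first result. The `MiniTokenizer` and `MiniModel` stubs below are hypothetical stand-ins for the real Hugging Face objects so the sketch runs on CPU without downloading weights (the GPU moves are omitted).

```python
class MiniTokenizer:
    """Hypothetical stub mimicking the tokenizer interface the wrapper uses."""
    def __call__(self, prompts, return_tensors=None):
        # A real tokenizer returns tensors; the stub keeps the raw strings.
        return {"input_ids": prompts}

    def batch_decode(self, ids):
        return [f"echo: {i}" for i in ids]


class MiniModel:
    """Hypothetical stub mimicking model.generate()."""
    def generate(self, input_ids, max_new_tokens=100, do_sample=True):
        # A real model samples new tokens; the stub returns the prompt ids.
        return input_ids


class Wrapper:
    """Same shape as Llemma_Finetuned above, minus the device moves."""
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def generate(self, prompt: str) -> str:
        inputs = self.tokenizer([prompt], return_tensors="pt")
        ids = self.model.generate(**inputs, max_new_tokens=100, do_sample=True)
        return self.tokenizer.batch_decode(ids)[0]


w = Wrapper(MiniModel(), MiniTokenizer())
print(w.generate("2 + 2 = ?"))  # prints "echo: 2 + 2 = ?"
```

With the real objects, `model` and `tokenizer` would come from `AutoModelForCausalLM.from_pretrained(...)` and `AutoTokenizer.from_pretrained(...)` and be passed to `Llemma_Finetuned(model, tokenizer)`.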
before_run.sh ADDED
@@ -0,0 +1,23 @@
+ #!/bin/bash
+
+ # module load python/3.11
+ # pip install virtualenv
+
+ # python -m venv mse_env
+ source ./mse_env/Scripts/activate
+ # pip3 install pnglatex
+ # pip install --upgrade pip
+ pip install -r requirements.txt
+ # pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
+
+ # pip3 install transformers datasets accelerate
+ # pip uninstall llm-toolkit
+ # pip install -q -U transformers accelerate bitsandbytes seqeval evaluate trl peft
+ # pip3 install -q -U bitsandbytes==0.42.0
+ # pip3 install -q -U peft==0.8.2
+ # pip3 install -q -U trl==0.7.10
+ # pip3 install -q -U accelerate==0.27.1
+ # pip3 install -q -U datasets==2.17.0
+ # pip3 install -q -U transformers==4.38.0
+ # pip3 install zope.interface
+ # pip3 install jsonl2json
dataset_row_stl.txt ADDED
@@ -0,0 +1,3243 @@
+ 3200
+ 246
+ 2680
+ 1090
+ 2739
+ 2385
+ 199
+ 3193
+ 2757
+ 64
+ (… 3,243 dataset row indices in total; listing truncated in the source)
2268
+ 181
2269
+ 419
2270
+ 1632
2271
+ 201
2272
+ 1994
2273
+ 2016
2274
+ 2137
2275
+ 268
2276
+ 1193
2277
+ 1932
2278
+ 1157
2279
+ 2614
2280
+ 820
2281
+ 783
2282
+ 1009
2283
+ 2803
2284
+ 170
2285
+ 1408
2286
+ 1993
2287
+ 2876
2288
+ 2280
2289
+ 254
2290
+ 976
2291
+ 745
2292
+ 994
2293
+ 2851
2294
+ 4
2295
+ 1738
2296
+ 1074
2297
+ 2477
2298
+ 1335
2299
+ 1030
2300
+ 1570
2301
+ 2874
2302
+ 1778
2303
+ 1247
2304
+ 1470
2305
+ 1243
2306
+ 913
2307
+ 101
2308
+ 61
2309
+ 355
2310
+ 1085
2311
+ 1736
2312
+ 368
2313
+ 1995
2314
+ 1662
2315
+ 952
2316
+ 3053
2317
+ 1410
2318
+ 448
2319
+ 1669
2320
+ 2058
2321
+ 1423
2322
+ 482
2323
+ 3026
2324
+ 1190
2325
+ 2609
2326
+ 1539
2327
+ 374
2328
+ 1111
2329
+ 2147
2330
+ 2415
2331
+ 49
2332
+ 628
2333
+ 1181
2334
+ 2460
2335
+ 2525
2336
+ 2881
2337
+ 681
2338
+ 2871
2339
+ 2445
2340
+ 1128
2341
+ 1077
2342
+ 1119
2343
+ 1963
2344
+ 2938
2345
+ 2326
2346
+ 650
2347
+ 2793
2348
+ 504
2349
+ 2618
2350
+ 160
2351
+ 444
2352
+ 708
2353
+ 992
2354
+ 2241
2355
+ 3086
2356
+ 901
2357
+ 2612
2358
+ 397
2359
+ 471
2360
+ 1824
2361
+ 330
2362
+ 1680
2363
+ 1261
2364
+ 2596
2365
+ 299
2366
+ 1513
2367
+ 1792
2368
+ 1001
2369
+ 1439
2370
+ 2917
2371
+ 1310
2372
+ 990
2373
+ 1428
2374
+ 379
2375
+ 1651
2376
+ 1842
2377
+ 2887
2378
+ 367
2379
+ 2974
2380
+ 3036
2381
+ 1185
2382
+ 461
2383
+ 2095
2384
+ 868
2385
+ 585
2386
+ 835
2387
+ 2611
2388
+ 1933
2389
+ 2061
2390
+ 352
2391
+ 1502
2392
+ 705
2393
+ 1350
2394
+ 1844
2395
+ 1246
2396
+ 1838
2397
+ 2032
2398
+ 1129
2399
+ 1254
2400
+ 3148
2401
+ 2913
2402
+ 432
2403
+ 910
2404
+ 1387
2405
+ 1035
2406
+ 2511
2407
+ 1465
2408
+ 1981
2409
+ 2047
2410
+ 2952
2411
+ 1782
2412
+ 1637
2413
+ 2315
2414
+ 3109
2415
+ 173
2416
+ 799
2417
+ 876
2418
+ 2180
2419
+ 394
2420
+ 1722
2421
+ 1817
2422
+ 1947
2423
+ 1071
2424
+ 2215
2425
+ 2235
2426
+ 1952
2427
+ 22
2428
+ 886
2429
+ 1837
2430
+ 2828
2431
+ 2872
2432
+ 2993
2433
+ 2010
2434
+ 1122
2435
+ 3027
2436
+ 634
2437
+ 822
2438
+ 777
2439
+ 2408
2440
+ 3017
2441
+ 2863
2442
+ 2996
2443
+ 1155
2444
+ 2388
2445
+ 467
2446
+ 104
2447
+ 1189
2448
+ 431
2449
+ 2875
2450
+ 3209
2451
+ 1355
2452
+ 3175
2453
+ 1639
2454
+ 354
2455
+ 67
2456
+ 686
2457
+ 350
2458
+ 2948
2459
+ 3013
2460
+ 2070
2461
+ 1054
2462
+ 1685
2463
+ 1382
2464
+ 2933
2465
+ 108
2466
+ 2827
2467
+ 2892
2468
+ 1351
2469
+ 1452
2470
+ 611
2471
+ 1744
2472
+ 1749
2473
+ 126
2474
+ 2903
2475
+ 2467
2476
+ 1811
2477
+ 454
2478
+ 2688
2479
+ 2272
2480
+ 2914
2481
+ 459
2482
+ 3121
2483
+ 2037
2484
+ 616
2485
+ 962
2486
+ 1318
2487
+ 946
2488
+ 2963
2489
+ 1135
2490
+ 1949
2491
+ 2583
2492
+ 3039
2493
+ 69
2494
+ 1202
2495
+ 425
2496
+ 1169
2497
+ 1894
2498
+ 2524
2499
+ 1298
2500
+ 127
2501
+ 2866
2502
+ 277
2503
+ 829
2504
+ 2984
2505
+ 663
2506
+ 1912
2507
+ 828
2508
+ 2495
2509
+ 2531
2510
+ 1143
2511
+ 1039
2512
+ 550
2513
+ 1139
2514
+ 2758
2515
+ 2311
2516
+ 2375
2517
+ 168
2518
+ 924
2519
+ 1321
2520
+ 1005
2521
+ 1802
2522
+ 2020
2523
+ 858
2524
+ 591
2525
+ 2932
2526
+ 175
2527
+ 378
2528
+ 1850
2529
+ 2604
2530
+ 1045
2531
+ 1053
2532
+ 1097
2533
+ 1180
2534
+ 1787
2535
+ 1059
2536
+ 1633
2537
+ 2473
2538
+ 2649
2539
+ 2972
2540
+ 2969
2541
+ 1751
2542
+ 2895
2543
+ 540
2544
+ 2076
2545
+ 1249
2546
+ 2200
2547
+ 1052
2548
+ 574
2549
+ 1433
2550
+ 2273
2551
+ 943
2552
+ 798
2553
+ 3090
2554
+ 733
2555
+ 1483
2556
+ 1290
2557
+ 1948
2558
+ 1942
2559
+ 2906
2560
+ 675
2561
+ 3120
2562
+ 242
2563
+ 1511
2564
+ 2355
2565
+ 1707
2566
+ 648
2567
+ 484
2568
+ 1319
2569
+ 3028
2570
+ 527
2571
+ 3030
2572
+ 2559
2573
+ 3145
2574
+ 2613
2575
+ 726
2576
+ 422
2577
+ 2910
2578
+ 66
2579
+ 169
2580
+ 1026
2581
+ 1868
2582
+ 2262
2583
+ 1484
2584
+ 1557
2585
+ 1482
2586
+ 2341
2587
+ 1917
2588
+ 95
2589
+ 1654
2590
+ 130
2591
+ 2542
2592
+ 1429
2593
+ 1270
2594
+ 428
2595
+ 2920
2596
+ 3151
2597
+ 3186
2598
+ 1061
2599
+ 107
2600
+ 825
2601
+ 1159
2602
+ 760
2603
+ 1042
2604
+ 1019
2605
+ 1234
2606
+ 1266
2607
+ 3073
2608
+ 2286
2609
+ 437
2610
+ 1907
2611
+ 2812
2612
+ 334
2613
+ 1103
2614
+ 847
2615
+ 2977
2616
+ 2113
2617
+ 2719
2618
+ 2174
2619
+ 2175
2620
+ 1574
2621
+ 1793
2622
+ 1839
2623
+ 2579
2624
+ 2210
2625
+ 590
2626
+ 1889
2627
+ 480
2628
+ 1022
2629
+ 3234
2630
+ 2400
2631
+ 1138
2632
+ 496
2633
+ 507
2634
+ 2839
2635
+ 2250
2636
+ 2587
2637
+ 389
2638
+ 695
2639
+ 481
2640
+ 2471
2641
+ 2568
2642
+ 1448
2643
+ 2547
2644
+ 1937
2645
+ 1550
2646
+ 373
2647
+ 2448
2648
+ 629
2649
+ 1493
2650
+ 2019
2651
+ 3191
2652
+ 1435
2653
+ 1713
2654
+ 2576
2655
+ 365
2656
+ 2044
2657
+ 1285
2658
+ 1327
2659
+ 993
2660
+ 687
2661
+ 2641
2662
+ 1188
2663
+ 596
2664
+ 1092
2665
+ 1235
2666
+ 24
2667
+ 2877
2668
+ 1740
2669
+ 598
2670
+ 1503
2671
+ 2043
2672
+ 832
2673
+ 1543
2674
+ 1772
2675
+ 1876
2676
+ 2616
2677
+ 2303
2678
+ 96
2679
+ 483
2680
+ 1225
2681
+ 2219
2682
+ 684
2683
+ 3123
2684
+ 989
2685
+ 490
2686
+ 518
2687
+ 1106
2688
+ 3139
2689
+ 941
2690
+ 2528
2691
+ 2985
2692
+ 1118
2693
+ 1825
2694
+ 1149
2695
+ 1891
2696
+ 1908
2697
+ 980
2698
+ 1488
2699
+ 87
2700
+ 511
2701
+ 1244
2702
+ 651
2703
+ 1882
2704
+ 2023
2705
+ 2455
2706
+ 1198
2707
+ 608
2708
+ 1037
2709
+ 1896
2710
+ 1145
2711
+ 245
2712
+ 984
2713
+ 1172
2714
+ 2116
2715
+ 420
2716
+ 593
2717
+ 843
2718
+ 2271
2719
+ 2848
2720
+ 2039
2721
+ 1152
2722
+ 2499
2723
+ 226
2724
+ 328
2725
+ 386
2726
+ 1397
2727
+ 746
2728
+ 958
2729
+ 1366
2730
+ 2306
2731
+ 110
2732
+ 1346
2733
+ 2901
2734
+ 1096
2735
+ 704
2736
+ 1858
2737
+ 2530
2738
+ 2929
2739
+ 2619
2740
+ 2132
2741
+ 2831
2742
+ 2995
2743
+ 1881
2744
+ 370
2745
+ 470
2746
+ 582
2747
+ 2368
2748
+ 1576
2749
+ 536
2750
+ 1163
2751
+ 1810
2752
+ 1859
2753
+ 2898
2754
+ 500
2755
+ 785
2756
+ 1134
2757
+ 2490
2758
+ 1458
2759
+ 2325
2760
+ 603
2761
+ 1313
2762
+ 2109
2763
+ 488
2764
+ 2374
2765
+ 2894
2766
+ 1984
2767
+ 2599
2768
+ 911
2769
+ 1464
2770
+ 2465
2771
+ 449
2772
+ 927
2773
+ 2502
2774
+ 963
2775
+ 3202
2776
+ 2860
2777
+ 1300
2778
+ 1221
2779
+ 2387
2780
+ 600
2781
+ 3110
2782
+ 1356
2783
+ 312
2784
+ 2060
2785
+ 381
2786
+ 2096
2787
+ 2295
2788
+ 114
2789
+ 2893
2790
+ 2861
2791
+ 697
2792
+ 2837
2793
+ 531
2794
+ 899
2795
+ 2514
2796
+ 1365
2797
+ 2146
2798
+ 1968
2799
+ 1115
2800
+ 458
2801
+ 2035
2802
+ 2668
2803
+ 557
2804
+ 1473
2805
+ 272
2806
+ 447
2807
+ 418
2808
+ 503
2809
+ 2713
2810
+ 1271
2811
+ 1343
2812
+ 2150
2813
+ 861
2814
+ 3203
2815
+ 871
2816
+ 1534
2817
+ 2304
2818
+ 2508
2819
+ 3118
2820
+ 401
2821
+ 3136
2822
+ 2703
2823
+ 383
2824
+ 1142
2825
+ 1563
2826
+ 1099
2827
+ 2409
2828
+ 865
2829
+ 1374
2830
+ 2233
2831
+ 2538
2832
+ 2889
2833
+ 2565
2834
+ 1184
2835
+ 2430
2836
+ 2389
2837
+ 1140
2838
+ 1215
2839
+ 1901
2840
+ 167
2841
+ 430
2842
+ 2790
2843
+ 1771
2844
+ 1828
2845
+ 1524
2846
+ 2830
2847
+ 2833
2848
+ 543
2849
+ 2161
2850
+ 1079
2851
+ 2040
2852
+ 2289
2853
+ 2099
2854
+ 115
2855
+ 140
2856
+ 761
2857
+ 413
2858
+ 2627
2859
+ 3049
2860
+ 2030
2861
+ 2050
2862
+ 3115
2863
+ 1861
2864
+ 2394
2865
+ 918
2866
+ 323
2867
+ 157
2868
+ 3236
2869
+ 345
2870
+ 1132
2871
+ 31
2872
+ 2890
2873
+ 2094
2874
+ 515
2875
+ 1264
2876
+ 1127
2877
+ 1412
2878
+ 816
2879
+ 1406
2880
+ 1835
2881
+ 1741
2882
+ 1883
2883
+ 2479
2884
+ 1806
2885
+ 2832
2886
+ 823
2887
+ 1635
2888
+ 2483
2889
+ 1258
2890
+ 191
2891
+ 1141
2892
+ 2474
2893
+ 833
2894
+ 1477
2895
+ 1176
2896
+ 3144
2897
+ 1913
2898
+ 1154
2899
+ 489
2900
+ 228
2901
+ 2362
2902
+ 1080
2903
+ 2517
2904
+ 2834
2905
+ 2957
2906
+ 682
2907
+ 2506
2908
+ 1840
2909
+ 414
2910
+ 2350
2911
+ 163
2912
+ 657
2913
+ 1114
2914
+ 1194
2915
+ 2080
2916
+ 2838
2917
+ 1269
2918
+ 1487
2919
+ 3084
2920
+ 877
2921
+ 1226
2922
+ 1195
2923
+ 1591
2924
+ 1975
2925
+ 2463
2926
+ 1413
2927
+ 2014
2928
+ 1655
2929
+ 2243
2930
+ 138
2931
+ 1719
2932
+ 2322
2933
+ 1003
2934
+ 2488
2935
+ 2964
2936
+ 2446
2937
+ 678
2938
+ 964
2939
+ 2997
2940
+ 109
2941
+ 3068
2942
+ 656
2943
+ 2323
2944
+ 1259
2945
+ 113
2946
+ 1809
2947
+ 864
2948
+ 526
2949
+ 2492
2950
+ 1171
2951
+ 1255
2952
+ 1955
2953
+ 2915
2954
+ 2862
2955
+ 819
2956
+ 1759
2957
+ 403
2958
+ 1276
2959
+ 1173
2960
+ 641
2961
+ 2192
2962
+ 575
2963
+ 810
2964
+ 807
2965
+ 133
2966
+ 1174
2967
+ 875
2968
+ 1568
2969
+ 2031
2970
+ 111
2971
+ 2475
2972
+ 1438
2973
+ 2106
2974
+ 472
2975
+ 1177
2976
+ 636
2977
+ 667
2978
+ 1537
2979
+ 1816
2980
+ 2858
2981
+ 2421
2982
+ 1790
2983
+ 1549
2984
+ 610
2985
+ 2835
2986
+ 2546
2987
+ 2049
2988
+ 2870
2989
+ 852
2990
+ 1597
2991
+ 2880
2992
+ 18
2993
+ 2911
2994
+ 2059
2995
+ 1168
2996
+ 602
2997
+ 1161
2998
+ 926
2999
+ 1160
3000
+ 2885
3001
+ 3171
3002
+ 846
3003
+ 2222
3004
+ 362
3005
+ 1262
3006
+ 2504
3007
+ 440
3008
+ 1786
3009
+ 2675
3010
+ 3218
3011
+ 2028
3012
+ 1186
3013
+ 1752
3014
+ 2610
3015
+ 2268
3016
+ 476
3017
+ 2886
3018
+ 2891
3019
+ 909
3020
+ 1826
3021
+ 2859
3022
+ 860
3023
+ 1317
3024
+ 97
3025
+ 2586
3026
+ 2021
3027
+ 1057
3028
+ 2124
3029
+ 2327
3030
+ 837
3031
+ 2774
3032
+ 2025
3033
+ 850
3034
+ 2847
3035
+ 1813
3036
+ 1721
3037
+ 1344
3038
+ 178
3039
+ 642
3040
+ 1178
3041
+ 884
3042
+ 1100
3043
+ 2855
3044
+ 1167
3045
+ 524
3046
+ 885
3047
+ 736
3048
+ 1391
3049
+ 153
3050
+ 1375
3051
+ 417
3052
+ 343
3053
+ 298
3054
+ 2330
3055
+ 3107
3056
+ 1104
3057
+ 1311
3058
+ 2907
3059
+ 106
3060
+ 1720
3061
+ 372
3062
+ 1191
3063
+ 2427
3064
+ 218
3065
+ 1571
3066
+ 2658
3067
+ 132
3068
+ 2449
3069
+ 1192
3070
+ 2383
3071
+ 1349
3072
+ 821
3073
+ 1105
3074
+ 898
3075
+ 3060
3076
+ 1546
3077
+ 2845
3078
+ 1970
3079
+ 666
3080
+ 2069
3081
+ 867
3082
+ 2747
3083
+ 1689
3084
+ 1049
3085
+ 2868
3086
+ 2821
3087
+ 960
3088
+ 1454
3089
+ 1780
3090
+ 1348
3091
+ 1158
3092
+ 2496
3093
+ 2595
3094
+ 803
3095
+ 1956
3096
+ 404
3097
+ 156
3098
+ 395
3099
+ 2228
3100
+ 100
3101
+ 977
3102
+ 640
3103
+ 1339
3104
+ 840
3105
+ 680
3106
+ 2079
3107
+ 1769
3108
+ 2027
3109
+ 1230
3110
+ 2158
3111
+ 836
3112
+ 2136
3113
+ 1467
3114
+ 1515
3115
+ 2850
3116
+ 2202
3117
+ 2159
3118
+ 3024
3119
+ 508
3120
+ 1492
3121
+ 654
3122
+ 897
3123
+ 1284
3124
+ 27
3125
+ 2046
3126
+ 3161
3127
+ 2423
3128
+ 2873
3129
+ 1072
3130
+ 2329
3131
+ 1196
3132
+ 3052
3133
+ 896
3134
+ 2905
3135
+ 879
3136
+ 2843
3137
+ 1150
3138
+ 1175
3139
+ 2091
3140
+ 497
3141
+ 2017
3142
+ 402
3143
+ 2780
3144
+ 2431
3145
+ 2822
3146
+ 1068
3147
+ 390
3148
+ 3106
3149
+ 2823
3150
+ 2846
3151
+ 357
3152
+ 3081
3153
+ 3216
3154
+ 433
3155
+ 1934
3156
+ 845
3157
+ 2857
3158
+ 915
3159
+ 776
3160
+ 988
3161
+ 811
3162
+ 2810
3163
+ 1451
3164
+ 547
3165
+ 2172
3166
+ 581
3167
+ 2071
3168
+ 2004
3169
+ 314
3170
+ 855
3171
+ 2869
3172
+ 2651
3173
+ 1496
3174
+ 863
3175
+ 2418
3176
+ 3176
3177
+ 2216
3178
+ 2491
3179
+ 2852
3180
+ 2888
3181
+ 893
3182
+ 2826
3183
+ 2841
3184
+ 1024
3185
+ 1466
3186
+ 315
3187
+ 1328
3188
+ 806
3189
+ 812
3190
+ 804
3191
+ 1055
3192
+ 891
3193
+ 1345
3194
+ 1411
3195
+ 1381
3196
+ 848
3197
+ 814
3198
+ 1216
3199
+ 2916
3200
+ 1765
3201
+ 1921
3202
+ 23
3203
+ 859
3204
+ 815
3205
+ 2045
3206
+ 3208
3207
+ 2631
3208
+ 895
3209
+ 2849
3210
+ 874
3211
+ 809
3212
+ 1474
3213
+ 824
3214
+ 2263
3215
+ 2435
3216
+ 2144
3217
+ 805
3218
+ 882
3219
+ 2921
3220
+ 873
3221
+ 2086
3222
+ 152
3223
+ 2118
3224
+ 1460
3225
+ 892
3226
+ 1960
3227
+ 2825
3228
+ 1803
3229
+ 2291
3230
+ 1601
3231
+ 1675
3232
+ 889
3233
+ 1747
3234
+ 1593
3235
+ 1779
3236
+ 2257
3237
+ 3165
3238
+ 673
3239
+ 802
3240
+ 1525
3241
+ 870
3242
+ 2904
3243
+ 190
metric_test_0_shot_100_ar.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_0_shot_100_cp.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_0_shot_100_crec.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_10_shot_100_ar.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_10_shot_100_cp.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_10_shot_100_crec.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_1_shot_100_ar.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_1_shot_100_cp.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_1_shot_100_crec.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_5_shot_100_ar.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_5_shot_100_cp.txt ADDED
The diff for this file is too large to render. See raw diff
 
metric_test_5_shot_100_crec.txt ADDED
The diff for this file is too large to render. See raw diff
 
mse_deepeval_dataset.py ADDED
@@ -0,0 +1,197 @@
+ import argparse
+ import jsonlines
+ import json
+ # from deepeval.scorer import Scorer
+ from deepeval.models import OllamaModel
+ from deepeval.metrics import (
+     ContextualRelevancyMetric,
+     ContextualRecallMetric,
+     ContextualPrecisionMetric,
+     AnswerRelevancyMetric,
+     FaithfulnessMetric
+ )
+
+ # import docx
+
+
+ from deepeval.test_case import LLMTestCase
+ from deepeval.dataset import EvaluationDataset, Golden
+
+ from deepeval import evaluate
+ from deepeval.models import OllamaModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from Llemma_Finetuned import Llemma_Finetuned
+ import ollama
+
+ # ollama run Hudson/llemma:7b
+ # deepeval set-ollama Hudson/llemma:7b
+
+ if __name__ == "__main__":
+     # Initialize parser
+     parser = argparse.ArgumentParser()
+
+     # Adding optional arguments
+     parser.add_argument("-n", "--num", help="Number of test cases to use")
+     parser.add_argument("-s", "--shot", help="n-shot inference examples")
+     parser.add_argument("-d", "--dataset", help="Path to test case dataset")
+
+     # Read arguments from command line
+     args = parser.parse_args()
+     test_case_num = int(args.num)
+     num_shot = int(args.shot)
+     dataset_name = str(args.dataset)
+
+     # orig
+     # model = ollama.pull(model="Hudson/llemma:7b")
+     # OllamaModel(model="Hudson/llemma:7b")
+
+     # finetuned
+     # llemma_model = AutoModelForCausalLM.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
+     # tokenizer = AutoTokenizer.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
+     # model = Llemma_Finetuned(model=llemma_model, tokenizer=tokenizer)
+
+     sorted_rows = []
+     with open('dataset_row_stl.txt', 'r') as file:
+         sorted_rows = file.readlines()
+     # print(sorted_rows)
+     sorted_rows = sorted_rows[0:num_shot]
+     sorted_rows = [int(x) for x in sorted_rows]
+
+     print("Read in sorted rows.")
+
+     examples = "Here are " + str(num_shot) + " examples of math questions (Q) with given answers (A).\n"
+     with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
+         # with open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
+         n = 0
+         for j, data in enumerate(fp):
+             if j + 1 in sorted_rows:
+                 print("Num shot row " + str(j + 1))
+                 # data = json.loads(line)
+                 examples += "Q: " + data["body"] + "\n\n"
+                 is_accepted = False
+                 best_score = float('-inf')
+                 output_text = ""
+                 for i in range(len(data["answers"])):
+                     if bool(data["answers"][i]["accepted"]) == True:
+                         if is_accepted == False:
+                             is_accepted = True
+                             best_score = int(data["answers"][i]["score"])
+                             output_text = data["answers"][i]["body"]
+                         elif int(data["answers"][i]["score"]) > best_score:
+                             best_score = int(data["answers"][i]["score"])
+                             output_text = data["answers"][i]["body"]
+                     elif int(data["answers"][i]["score"]) > best_score:
+                         best_score = int(data["answers"][i]["score"])
+                         output_text = data["answers"][i]["body"]
+                 examples += "A: " + output_text + "\n\n"
+                 if n == (num_shot - 1):
+                     examples += "Provide an answer (A) to the following math question (Q) in a similar manner to the previous example(s) given.\n\nQ: "
+                     # 26th line
+                 n += 1
+             elif n >= num_shot:
+                 break
+             else:
+                 continue
+
+     print("Generated examples for", str(num_shot), "shot.")
+
+     mse_dataset = []
+     with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as reader:
+
+         count = 0
+
+         curr_row = 0
+         for row in reader.iter(type=dict, skip_invalid=True):
+             curr_row += 1
+             if curr_row == 33 or curr_row == 36 or curr_row == 69 \
+                     or curr_row == 24 or curr_row == 76 \
+                     or curr_row == 66 or curr_row == 9 \
+                     or curr_row == 26 or curr_row == 27 \
+                     or curr_row == 37 or curr_row == 55 \
+                     or curr_row == 54 or curr_row == 138 \
+                     or curr_row == 77 or curr_row == 84 or curr_row == 87 \
+                     or curr_row == 80 or curr_row == 81 or curr_row == 97 \
+                     or curr_row == 115 or curr_row == 106:
+                 print("Skipped row " + str(curr_row))
+                 continue
+             elif curr_row in sorted_rows:
+                 print("Skipped row " + str(curr_row) + " because it is a shorter example")
+                 continue
+             # question_path = "output/" + row["id"]
+             # if count <= 0:
+             #     print(obj)
+             if count >= test_case_num:
+                 break
+             else:
+                 input_text = row["body"]
+                 # response = ollama.generate(model='Hudson/llemma:7b', prompt=input_text)
+                 # actual_response = response['response']
+                 is_accepted = False
+                 best_score = float('-inf')
+                 output_text = ""
+                 # context = []
+                 next_best_answer = ""
+                 for i in range(len(row["answers"])):
+                     if bool(row["answers"][i]["accepted"]) == True:
+                         if is_accepted == False:
+                             is_accepted = True
+                             next_best_answer = output_text
+                             best_score = int(row["answers"][i]["score"])
+                             output_text = row["answers"][i]["body"]
+                         elif int(row["answers"][i]["score"]) > best_score:
+                             next_best_answer = output_text
+                             best_score = int(row["answers"][i]["score"])
+                             output_text = row["answers"][i]["body"]
+                         # else:
+                         #     context.append(row["answers"][i]["body"])
+                     elif int(row["answers"][i]["score"]) > best_score:
+                         next_best_answer = output_text
+                         best_score = int(row["answers"][i]["score"])
+                         output_text = row["answers"][i]["body"]
+                     # else:
+                     #     context.append(row["answers"][i]["body"])
+                 if next_best_answer == "" or next_best_answer is None:
+                     next_best_answer = row["title"]
+                 # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=actual_response, expected_output=output_text, retrieval_context=None))
+                 # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=model.generate(input_text), expected_output=output_text, retrieval_context=context))
+                 if num_shot == 0:
+                     i_text = json.dumps(input_text)
+                     e_output = json.dumps(output_text)
+                     r_context = json.dumps(next_best_answer)
+                     gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
+                     a_output = json.dumps(gen_answer.response)
+                     # print("i_text = ", i_text)
+                     # print("a_output = ", a_output)
+                     # print("e_output = ", e_output)
+                     # print("r_context = ", r_context)
+                     # r_context = gen_answer.context
+                     # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
+                     #     continue
+                     mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
+                 else:
+                     i_text = json.dumps(examples + input_text)
+                     e_output = json.dumps(output_text)
+                     r_context = json.dumps(next_best_answer)
+                     gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
+                     a_output = json.dumps(gen_answer.response)
+                     # r_context = gen_answer.context
+                     # print("i_text = ", i_text)
+                     # print("a_output = ", a_output)
+                     # print("e_output = ", e_output)
+                     # print("r_context = ", r_context)
+                     # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
+                     #     continue
+                     mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
+                 count = count + 1
+             # if curr_row % 1 == 0:
+             print("At", str(count), "out of", str(test_case_num), " current row =", str(curr_row))
+
+     # first_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+     # second_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+
+
+     dataset = EvaluationDataset(test_cases=mse_dataset)
+     dataset.save_as(file_type="json", directory="./deepeval-test-dataset", file_name=dataset_name, include_test_cases=True)
+
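The accepted/score bookkeeping in the selection loop above can be sketched as a standalone helper. This is a minimal sketch, not part of the commit: the function name `pick_best_answer` and the sample data are hypothetical, but the precedence rules mirror the loop in mse_deepeval_dataset.py (the first accepted answer is taken, higher scores upgrade it, and the previous best is kept as the runner-up used for retrieval context).

```python
def pick_best_answer(answers):
    """Mirror of the selection loop above: take the first accepted answer,
    upgrade to higher-scoring ones, and track the displaced best as runner-up."""
    is_accepted = False
    best_score = float('-inf')
    output_text = ""
    next_best = ""
    for ans in answers:
        score = int(ans["score"])
        if bool(ans["accepted"]):
            if not is_accepted:
                # first accepted answer always replaces the current best
                is_accepted = True
                next_best = output_text
                best_score = score
                output_text = ans["body"]
            elif score > best_score:
                next_best = output_text
                best_score = score
                output_text = ans["body"]
        elif score > best_score:
            # as in the loop above, a higher-scored unaccepted answer
            # can still displace the current best
            next_best = output_text
            best_score = score
            output_text = ans["body"]
    return output_text, next_best

# hypothetical sample rows in the shape of the "answers" entries above
answers = [
    {"accepted": False, "score": 3, "body": "B3"},
    {"accepted": True, "score": 1, "body": "A1"},
    {"accepted": False, "score": 5, "body": "B5"},
]
best, runner_up = pick_best_answer(answers)  # → ("B5", "A1")
```

Note that, exactly as in the original loop, the accepted answer "A1" is displaced by the later, higher-scored unaccepted "B5"; whether that is intended or an artifact of the two `elif` branches is a design choice the commit leaves open.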
mse_ollama_run.py ADDED
@@ -0,0 +1,268 @@
+ import argparse
+ import jsonlines
+ import json
+ # from deepeval.scorer import Scorer
+ from deepeval.models import OllamaModel
+ from deepeval.metrics import (
+     ContextualRelevancyMetric,
+     ContextualRecallMetric,
+     ContextualPrecisionMetric,
+     AnswerRelevancyMetric,
+     FaithfulnessMetric
+ )
+
+ # import docx
+
+
+ from deepeval.test_case import LLMTestCase
+ from deepeval.dataset import EvaluationDataset, Golden
+
+ from deepeval import evaluate
+ from deepeval.models import OllamaModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from Llemma_Finetuned import Llemma_Finetuned
+ import ollama
+
+ # ollama run Hudson/llemma:7b
+ # deepeval set-ollama Hudson/llemma:7b
+
+ def is_invalid_length(text, length=4096):
+     if len(text) <= length:
+         return False
+     else:
+         return True
+
+ if __name__ == "__main__":
+     # Initialize parser
+     parser = argparse.ArgumentParser()
+
+     # Adding optional arguments
+     parser.add_argument("-t", "--test", help="Test to run (ar, cp, crec, crel, f)")
+     parser.add_argument("-d", "--dataset", help="Path to test case dataset")
+
+     # Read arguments from command line
+     args = parser.parse_args()
+     test_type = str(args.test)
+     test_data = str(args.dataset)
+     dataset = EvaluationDataset()
+
+     # Add as test cases
+     dataset.add_test_cases_from_json_file(
+         # file_path is the absolute path to your .json file
+         file_path=test_data,
+         input_key_name="input",
+         actual_output_key_name="actual_output",
+         expected_output_key_name="expected_output",
+         context_key_name="context",
+         retrieval_context_key_name="retrieval_context",
+     )
+     dataset.test_cases = dataset.test_cases[0:100]
+
+     # orig
+     # model = ollama.pull(model="Hudson/llemma:7b")
+     # OllamaModel(model="Hudson/llemma:7b")
+
+     ollama_model = OllamaModel(model="Hudson/llemma:7b")
+
+     # finetuned
+     # llemma_model = AutoModelForCausalLM.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
+     # tokenizer = AutoTokenizer.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
+     # model = Llemma_Finetuned(model=llemma_model, tokenizer=tokenizer)
+
+     # sorted_rows = []
+     # with open('dataset_row_stl.txt', 'r') as file:
+     #     sorted_rows = file.readlines()
+     # # print(sorted_rows)
+     # sorted_rows = sorted_rows[0:num_shot]
+     # sorted_rows = [int(x) for x in sorted_rows]
+
+     # print("Read in sorted rows.")
+
+     # examples = "Here are " + str(num_shot) + " examples of math questions (Q) with given answers (A).\n"
+     # with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
+     # # with open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
+     #     n = 0
+     #     for j, data in enumerate(fp):
+     #         if j + 1 in sorted_rows:
+     #             print("Num shot row " + str(j + 1))
+     #             # data = json.loads(line)
+     #             examples += "Q: " + data["body"] + "\n\n"
+     #             is_accepted = False
+     #             best_score = float('-inf')
+     #             output_text = ""
+     #             for i in range(len(data["answers"])):
+     #                 if bool(data["answers"][i]["accepted"]) == True:
+     #                     if is_accepted == False:
+     #                         is_accepted = True
+     #                         best_score = int(data["answers"][i]["score"])
+     #                         output_text = data["answers"][i]["body"]
+     #                     elif int(data["answers"][i]["score"]) > best_score:
+     #                         best_score = int(data["answers"][i]["score"])
+     #                         output_text = data["answers"][i]["body"]
+     #                 elif int(data["answers"][i]["score"]) > best_score:
+     #                     best_score = int(data["answers"][i]["score"])
+     #                     output_text = data["answers"][i]["body"]
+     #             examples += "A: " + output_text + "\n\n"
+     #             if n == (num_shot - 1):
+     #                 examples += "Provide an answer (A) to the following math question (Q) in a similar manner to the previous example(s) given.\n\nQ: "
+     #                 # 26th line
+     #             n += 1
+     #         elif n >= num_shot:
+     #             break
+     #         else:
+     #             continue
+
+     # print("Generated examples for", str(num_shot), "shot.")
+
+     # mse_dataset = []
+     # with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as reader:
+
+     #     count = 0
+
+     #     curr_row = 0
+     #     for row in reader.iter(type=dict, skip_invalid=True):
+     #         curr_row += 1
+     #         if curr_row <= skip_to:
+     #             continue
+     #         elif curr_row == 33 or curr_row == 36 or curr_row == 69 \
+     #                 or curr_row == 24 or curr_row == 76 \
+     #                 or curr_row == 66 or curr_row == 9 \
+     #                 or curr_row == 26 or curr_row == 27 \
+     #                 or curr_row == 37 or curr_row == 55 \
+     #                 or curr_row == 54 or curr_row == 138 \
+     #                 or curr_row == 77 or curr_row == 84 or curr_row == 87 \
+     #                 or curr_row == 80 or curr_row == 81 or curr_row == 97 \
+     #                 or curr_row == 115 or curr_row == 106:
+     #             print("Skipped row " + str(curr_row))
+     #             continue
+     #         elif curr_row in sorted_rows:
+     #             print("Skipped row " + str(curr_row) + " because it is a shorter example")
+     #             continue
+     #         # question_path = "output/" + row["id"]
+     #         # if count <= 0:
+     #         #     print(obj)
+     #         if count >= test_case_num:
+     #             break
+     #         else:
+     #             input_text = row["body"]
+     #             # response = ollama.generate(model='Hudson/llemma:7b', prompt=input_text)
+     #             # actual_response = response['response']
+     #             is_accepted = False
+     #             best_score = float('-inf')
+     #             output_text = ""
+     #             # context = []
+     #             next_best_answer = ""
+     #             for i in range(len(row["answers"])):
+     #                 if bool(row["answers"][i]["accepted"]) == True:
+     #                     if is_accepted == False:
+     #                         is_accepted = True
+     #                         next_best_answer = output_text
+     #                         best_score = int(row["answers"][i]["score"])
+     #                         output_text = row["answers"][i]["body"]
+     #                     elif int(row["answers"][i]["score"]) > best_score:
+     #                         next_best_answer = output_text
+     #                         best_score = int(row["answers"][i]["score"])
+     #                         output_text = row["answers"][i]["body"]
+     #                     # else:
+     #                     #     context.append(row["answers"][i]["body"])
+     #                 elif int(row["answers"][i]["score"]) > best_score:
+     #                     next_best_answer = output_text
+     #                     best_score = int(row["answers"][i]["score"])
+     #                     output_text = row["answers"][i]["body"]
+     #                 # else:
+     #                 #     context.append(row["answers"][i]["body"])
+     #             if next_best_answer == "" or next_best_answer is None:
+     #                 next_best_answer = row["title"]
+     #             # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=actual_response, expected_output=output_text, retrieval_context=None))
+     #             # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=model.generate(input_text), expected_output=output_text, retrieval_context=context))
+     #             if num_shot == 0:
+     #                 i_text = json.dumps(input_text)
+     #                 e_output = json.dumps(output_text)
+     #                 r_context = json.dumps(next_best_answer)
+     #                 gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
+     #                 a_output = json.dumps(gen_answer.response)
+     #                 # print("i_text = ", i_text)
+     #                 # print("a_output = ", a_output)
+     #                 # print("e_output = ", e_output)
+     #                 # print("r_context = ", r_context)
+     #                 # r_context = gen_answer.context
+     #                 # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
+     #                 #     continue
+     #                 mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
+     #             else:
+     #                 i_text = json.dumps(examples + input_text)
+     #                 e_output = json.dumps(output_text)
+     #                 r_context = json.dumps(next_best_answer)
+     #                 gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
+     #                 a_output = json.dumps(gen_answer.response)
+     #                 # r_context = gen_answer.context
+     #                 # print("i_text = ", i_text)
+     #                 # print("a_output = ", a_output)
+     #                 # print("e_output = ", e_output)
+     #                 # print("r_context = ", r_context)
+     #                 # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
+     #                 #     continue
+     #                 mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
+     #             count = count + 1
+     #         # if curr_row % 1 == 0:
+     #         print("At", str(count), "out of", str(test_case_num), " current row =", str(curr_row))
+
+     # first_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+     # second_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+
+
+     # dataset = EvaluationDataset(test_cases=mse_dataset)
+     pass_threshold = 0.7
+
+     # eval_output = ""
+
+     if test_type == "ar":
+         # answer_relevancy = AnswerRelevancyMetric(model=model, threshold=pass_threshold, async_mode=False)
+         answer_relevancy = AnswerRelevancyMetric(model=ollama_model, threshold=pass_threshold, async_mode=False)
+         # evaluate(dataset, metrics=[answer_relevancy], out_file=out_path, run_async=True)
+         # with open(out_path, "a") as f:
+         #     # f.write(dataset.evaluate([answer_relevancy]))
+         #     eval_output = dataset.evaluate([answer_relevancy])
+         # evaluate(goldens=dataset.goldens, metrics=[answer_relevancy])
+         evaluate(dataset, metrics=[answer_relevancy])
+     elif test_type == "cp":
+         contextual_precision = ContextualPrecisionMetric(model=ollama_model, threshold=pass_threshold, async_mode=False)
+         # evaluate(dataset, metrics=[contextual_precision], out_file=out_path, run_async=True)
+         # evaluate(dataset, metrics=[contextual_precision])
+         # eval_output = dataset.evaluate([contextual_precision])
+         # evaluate(goldens=dataset.goldens, metrics=[contextual_precision])
+         evaluate(dataset, metrics=[contextual_precision])
+     elif test_type == "crec":
+         contextual_recall = ContextualRecallMetric(model=ollama_model, threshold=pass_threshold, async_mode=False)
+         # evaluate(dataset, metrics=[contextual_recall], out_file=out_path, run_async=True)
+         # evaluate(dataset, metrics=[contextual_recall])
+         # eval_output = dataset.evaluate([contextual_recall])
+         # evaluate(goldens=dataset.goldens, metrics=[contextual_recall])
+         evaluate(dataset, metrics=[contextual_recall])
+     else:
+         print("Test case (" + test_type + ") not covered")
+
+     # with open(out_path, "a") as f:
246
+ # f.write(str(eval_output))
247
+
248
+ # Create a document
249
+ # doc = docx.Document()
250
+
251
+ # # Add a paragraph to the document
252
+ # p = doc.add_paragraph()
253
+
254
+ # # Add some formatting to the paragraph
255
+ # p.paragraph_format.line_spacing = 1
256
+ # p.paragraph_format.space_after = 0
257
+
258
+ # # Add a run to the paragraph
259
+ # run = p.add_run(eval_output)
260
+
261
+ # # Add some formatting to the run
262
+ # run.bold = False
263
+ # run.italic = False
264
+ # run.font.name = 'Arial'
265
+ # run.font.size = docx.shared.Pt(12)
266
+
267
+ # # Save the document
268
+ # doc.save(out_path)
mse_ollama_run_ft.py ADDED
@@ -0,0 +1,315 @@
1
+ import argparse
2
+ import jsonlines
3
+ import json
4
+ # from deepeval.scorer import Scorer
5
+ from deepeval.models import OllamaModel
6
+ from deepeval.metrics import (
7
+ ContextualRelevancyMetric,
8
+ ContextualRecallMetric,
9
+ ContextualPrecisionMetric,
10
+ AnswerRelevancyMetric,
11
+ FaithfulnessMetric
12
+ )
13
+
14
+ # import docx
15
+
16
+
17
+ from deepeval.test_case import LLMTestCase
18
+ from deepeval.dataset import EvaluationDataset, Golden
19
+
20
+ from deepeval import evaluate
22
+ import transformers
23
+ from transformers import AutoModelForCausalLM, AutoTokenizer
24
+ from Llemma_Finetuned import Llemma_Finetuned
25
+ import ollama
26
+ import torch
27
+
29
+
30
+ from deepeval.models import DeepEvalBaseLLM
31
+
32
+
33
+ class CustomLlemma(DeepEvalBaseLLM):
34
+ def __init__(self):
35
+ self.torch_device = "cuda" if torch.cuda.is_available() else "cpu"
36
+
37
+ # finetuned
38
+ model = AutoModelForCausalLM.from_pretrained("./merged_models/llemma_lora_merged").to(self.torch_device)
39
+ tokenizer = AutoTokenizer.from_pretrained("./merged_models/llemma_lora_merged")
40
+
41
+ self.model = model
42
+ self.tokenizer = tokenizer
43
+
44
+ def load_model(self):
45
+ return self.model
46
+
47
+ def generate(self, prompt: str) -> str:
48
+ model = self.load_model()
49
+ pipeline = transformers.pipeline(
50
+ "text-generation",
51
+ model=model,
52
+ tokenizer=self.tokenizer,
53
+ framework="pt",
54
+ device=0,
55
+ max_length=4096,
56
+ eos_token_id=self.tokenizer.eos_token_id,
57
+ pad_token_id=self.tokenizer.eos_token_id,
58
+ )
59
+
60
+ return pipeline(prompt)[0]["generated_text"]
61
+
62
+
63
+ # inputs = self.tokenizer(prompt, return_tensors='pt').to(self.torch_device)
64
+ # output = self.model.generate(**inputs)
65
+ # a_output = self.tokenizer.decode(output[0])
66
+
67
+ # return json.dumps(a_output)
68
+
69
+ async def a_generate(self, prompt: str) -> str:
70
+ return self.generate(prompt)
71
+
72
+ def get_model_name(self):
73
+ return "Llemma Fine-tuned"
74
+
75
+ #ollama run Hudson/llemma:7b
76
+ #deepeval set-ollama Hudson/llemma:7b
77
+
78
+ def is_invalid_length(text, length=4096):
+ return len(text) > length
83
+
84
+ if __name__=="__main__":
85
+ # Initialize parser
86
+ parser = argparse.ArgumentParser()
87
+
88
+ # Adding optional argument
89
+ parser.add_argument("-t", "--test", help = "Test to run (ar, cp, crec, crel, f)")
90
+ parser.add_argument("-d", "--dataset", help = "Path to test case dataset")
91
+
92
+ # Read arguments from command line
93
+ args = parser.parse_args()
94
+ test_type = str(args.test)
95
+ test_data = str(args.dataset)
96
+ dataset = EvaluationDataset()
97
+
98
+ # Add as test cases
99
+ dataset.add_test_cases_from_json_file(
100
+ # file_path is the absolute path to your .json file
101
+ file_path=test_data,
102
+ input_key_name="input",
103
+ actual_output_key_name="actual_output",
104
+ expected_output_key_name="expected_output",
105
+ context_key_name="context",
106
+ retrieval_context_key_name="retrieval_context",
107
+ )
108
+
109
+ # orig
110
+ # model = ollama.pull(model="Hudson/llemma:7b")
111
+ #OllamaModel(model="Hudson/llemma:7b")
112
+ custom_llm = CustomLlemma()
113
+
114
+ # finetuned
115
+ # llemma_model = AutoModelForCausalLM.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
116
+ # tokenizer = AutoTokenizer.from_pretrained("./train_llemma/merged_models/llemma_lora_merged")
117
+ # model = Llemma_Finetuned(model=llemma_model, tokenizer=tokenizer)
118
+
119
+ # sorted_rows = []
120
+ # with open('dataset_row_stl.txt', 'r') as file:
121
+ # sorted_rows = file.readlines()
122
+ # # print(sorted_rows)
123
+ # sorted_rows = sorted_rows[0:num_shot]
124
+ # sorted_rows = [int(x) for x in sorted_rows]
125
+
126
+ # print("Read in sorted rows.")
127
+
128
+ # examples = "Here are " + str(num_shot) + " examples of math questions (Q) with given answers (A).\n"
129
+ # with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
130
+ # #with open("mse_text_img_QA_ds_test.jsonl", mode='r') as fp:
131
+ # n = 0
132
+ # for j, data in enumerate(fp):
133
+ # if j + 1 in sorted_rows:
134
+ # print("Num shot row " + str(j + 1))
135
+ # # data = json.loads(line)
136
+ # examples += "Q: " + data["body"] + "\n\n"
137
+ # is_accepted = False
138
+ # best_score = float('-inf')
139
+ # output_text = ""
140
+ # for i in range(len(data["answers"])):
141
+ # if bool(data["answers"][i]["accepted"]) == True:
142
+ # if is_accepted == False:
143
+ # is_accepted = True
144
+ # best_score = int(data["answers"][i]["score"])
145
+ # output_text = data["answers"][i]["body"]
146
+ # elif int(data["answers"][i]["score"]) > best_score:
147
+ # best_score = int(data["answers"][i]["score"])
148
+ # output_text = data["answers"][i]["body"]
149
+ # elif int(data["answers"][i]["score"]) > best_score:
150
+ # best_score = int(data["answers"][i]["score"])
151
+ # output_text = data["answers"][i]["body"]
152
+ # examples += "A: " + output_text + "\n\n"
153
+ # if n == (num_shot - 1):
154
+ # examples += "Provide an answer (A) to the following math question (Q) in a similar manner to the previous example(s) given.\n\nQ: "
155
+ # # 26th line
156
+ # n += 1
157
+ # elif n >= num_shot:
158
+ # break
159
+ # else:
160
+ # continue
161
+
162
+ # print("Generated examples for", str(num_shot), "shot.")
163
+
164
+ # mse_dataset = []
165
+ # with jsonlines.open("mse_text_img_QA_ds_test.jsonl", mode='r') as reader:
166
+
167
+ # count = 0
168
+
169
+ # curr_row = 0
170
+ # for row in reader.iter(type=dict, skip_invalid=True):
171
+ # curr_row += 1
172
+ # if curr_row <= skip_to:
173
+ # continue
174
+ # elif curr_row == 33 or curr_row == 36 or curr_row == 69 \
175
+ # or curr_row == 24 or curr_row == 76 \
176
+ # or curr_row == 66 or curr_row == 9 \
177
+ # or curr_row == 26 or curr_row == 27 \
178
+ # or curr_row == 37 or curr_row == 55 \
179
+ # or curr_row == 54 or curr_row == 138 \
180
+ # or curr_row == 77 or curr_row == 84 or curr_row == 87 \
181
+ # or curr_row == 80 or curr_row == 81 or curr_row == 97 \
182
+ # or curr_row == 115 or curr_row == 106:
183
+ # print("Skipped row " + str(curr_row))
184
+ # continue
185
+ # elif curr_row in sorted_rows:
186
+ # print("Skipped row " + str(curr_row) + " because it is a shorter example")
187
+ # continue
188
+ # # question_path = "output/" + row["id"]
189
+ # # if count <= 0:
190
+ # # print(obj)
191
+ # if count >= test_case_num:
192
+ # break
193
+ # else:
194
+ # input_text = row["body"]
195
+ # # response = ollama.generate(model='Hudson/llemma:7b', prompt=input_text)
196
+ # # actual_response = response['response']
197
+ # is_accepted = False
198
+ # best_score = float('-inf')
199
+ # output_text = ""
200
+ # # context = []
201
+ # next_best_answer = ""
202
+ # for i in range(len(row["answers"])):
203
+ # if bool(row["answers"][i]["accepted"]) == True:
204
+ # if is_accepted == False:
205
+ # is_accepted = True
206
+ # next_best_answer = output_text
207
+ # best_score = int(row["answers"][i]["score"])
208
+ # output_text = row["answers"][i]["body"]
209
+ # elif int(row["answers"][i]["score"]) > best_score:
210
+ # next_best_answer = output_text
211
+ # best_score = int(row["answers"][i]["score"])
212
+ # output_text = row["answers"][i]["body"]
213
+ # # else:
214
+ # # context.append(row["answers"][i]["body"])
215
+ # elif int(row["answers"][i]["score"]) > best_score:
216
+ # next_best_answer = output_text
217
+ # best_score = int(row["answers"][i]["score"])
218
+ # output_text = row["answers"][i]["body"]
219
+ # # else:
220
+ # # context.append(row["answers"][i]["body"])
221
+ # if next_best_answer == "" or next_best_answer is None:
222
+ # next_best_answer = row["title"]
223
+ # # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=actual_response, expected_output=output_text, retrieval_context=None))
224
+ # # test_case_dataset.append(LLMTestCase(input=input_text, actual_output=model.generate(input_text), expected_output=output_text, retrieval_context=context))
225
+ # if num_shot == 0:
226
+ # i_text = json.dumps(input_text)
227
+ # e_output = json.dumps(output_text)
228
+ # r_context = json.dumps(next_best_answer)
229
+ # gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
230
+ # a_output = json.dumps(gen_answer.response)
231
+ # # print("i_text = ", i_text)
232
+ # # print("a_output = ", a_output)
233
+ # # print("e_output = ", e_output)
234
+ # # print("r_context = ", r_context)
235
+ # # r_context = gen_answer.context
236
+ # # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
237
+ # # continue
238
+ # mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
239
+ # else:
240
+ # i_text = json.dumps(examples + input_text)
241
+ # e_output = json.dumps(output_text)
242
+ # r_context = json.dumps(next_best_answer)
243
+ # gen_answer = ollama.generate(model="Hudson/llemma:7b", prompt=i_text)
244
+ # a_output = json.dumps(gen_answer.response)
245
+ # # r_context = gen_answer.context
246
+ # # print("i_text = ", i_text)
247
+ # # print("a_output = ", a_output)
248
+ # # print("e_output = ", e_output)
249
+ # # print("r_context = ", r_context)
250
+ # # if is_invalid_length(i_text) or is_invalid_length(e_output) or is_invalid_length(r_context):
251
+ # # continue
252
+ # mse_dataset.append(LLMTestCase(input=i_text, actual_output=a_output, expected_output=e_output, retrieval_context=[r_context]))
253
+ # count = count + 1
254
+ # # if curr_row % 1 == 0:
255
+ # print("At", str(count), "out of", str(test_case_num), " current row =", str(curr_row))
256
+
257
+ # first_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
258
+ # second_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
259
+
260
+
261
+ # dataset = EvaluationDataset(test_cases=mse_dataset)
262
+ pass_threshold = 0.7
263
+
264
+ # eval_output = ""
265
+
266
+ if test_type == "ar":
267
+ # answer_relevancy = AnswerRelevancyMetric(model=model, threshold=pass_threshold, async_mode=False)
268
+ answer_relevancy = AnswerRelevancyMetric(model=custom_llm, threshold=pass_threshold)
269
+ # evaluate(dataset, metrics=[answer_relevancy], out_file=out_path, run_async=True)
270
+ # with open(out_path, "a") as f:
271
+ # # f.write(dataset.evaluate([answer_relevancy]))
272
+ # eval_output = dataset.evaluate([answer_relevancy])
273
+ # evaluate(goldens=dataset.goldens, metrics=[answer_relevancy])
274
+ evaluate(dataset, metrics=[answer_relevancy])
275
+ elif test_type == "cp":
276
+ contextual_precision = ContextualPrecisionMetric(model=custom_llm, threshold=pass_threshold)
277
+ # evaluate(dataset, metrics=[contextual_precision], out_file=out_path, run_async=True)
278
+ # evaluate(dataset, metrics=[contextual_precision])
279
+ # eval_output = dataset.evaluate([contextual_precision])
280
+ # evaluate(goldens=dataset.goldens, metrics=[contextual_precision])
281
+ evaluate(dataset, metrics=[contextual_precision])
282
+ elif test_type == "crec":
283
+ contextual_recall = ContextualRecallMetric(model=custom_llm, threshold=pass_threshold)
284
+ # evaluate(dataset, metrics=[contextual_recall], out_file=out_path, run_async=True)
285
+ # evaluate(dataset, metrics=[contextual_recall])
286
+ # eval_output = dataset.evaluate([contextual_recall])
287
+ # evaluate(goldens=dataset.goldens, metrics=[contextual_recall])
288
+ evaluate(dataset, metrics=[contextual_recall])
289
+ else:
290
+ print("Test case (" + test_type + ") not covered")
291
+
292
+ # with open(out_path, "a") as f:
293
+ # f.write(str(eval_output))
294
+
295
+ # Create a document
296
+ # doc = docx.Document()
297
+
298
+ # # Add a paragraph to the document
299
+ # p = doc.add_paragraph()
300
+
301
+ # # Add some formatting to the paragraph
302
+ # p.paragraph_format.line_spacing = 1
303
+ # p.paragraph_format.space_after = 0
304
+
305
+ # # Add a run to the paragraph
306
+ # run = p.add_run(eval_output)
307
+
308
+ # # Add some formatting to the run
309
+ # run.bold = False
310
+ # run.italic = False
311
+ # run.font.name = 'Arial'
312
+ # run.font.size = docx.shared.Pt(12)
313
+
314
+ # # Save the document
315
+ # doc.save(out_path)
mse_ollama_timer.py ADDED
@@ -0,0 +1,17 @@
1
+ import jsonlines
2
+ import json
3
+ # import docx
4
+ # from deepeval.models import OllamaModel
5
+ import ollama
6
+ import time
7
+ #ollama run Hudson/llemma:7b
8
+ #deepeval set-ollama Hudson/llemma:7b
9
+
10
+ if __name__=="__main__":
11
+ # Initialize parser
12
+
13
+ tic = time.perf_counter()
14
+ ollama.generate(model="Hudson/llemma:7b", prompt="I need to determine the radius of convergence of the series $\\sum_{n=1}^\\infty a_nx^n$, where $a_n=a^n+b^n$ and $a,b$ are real numbers.")
15
+ # print(response)
16
+ toc = time.perf_counter()
17
+ print(f"Ran Llemma in {toc - tic:0.4f} seconds")
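The timer script above measures a single generation with `time.perf_counter`. A small helper (a sketch, independent of Ollama; `fn` stands in for any generation call) could repeat the measurement and report the mean:

```python
import time

def time_runs(fn, n=5):
    """Call fn() n times and return the mean wall-clock duration in seconds,
    measured with time.perf_counter as in mse_ollama_timer.py."""
    durations = []
    for _ in range(n):
        tic = time.perf_counter()
        fn()
        toc = time.perf_counter()
        durations.append(toc - tic)
    return sum(durations) / len(durations)
```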
process_mse_data.sh ADDED
@@ -0,0 +1,137 @@
1
+ #!/bin/bash
2
+
3
+ #SBATCH --time=1:00:00 # walltime. hours:minutes:seconds
4
+ #SBATCH --ntasks=8 # number of processor cores (i.e. tasks)
5
+ #SBATCH --nodes=1 # number of nodes
6
+ #SBATCH --gpus=1
7
+ #SBATCH --mem=80G # memory per node
8
+ #SBATCH --mail-user=aw742@byu.edu # email address
9
+ #SBATCH --mail-type=BEGIN
10
+ #SBATCH --mail-type=END
11
+ #SBATCH --mail-type=FAIL
12
+ #SBATCH --qos=cs
13
+ #SBATCH --partition=cs
14
+
15
+ # some helpful debugging options
16
+ set -e
17
+ set -u
18
+
19
+ # LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
20
+ # module load python/3.11
21
+
22
+ source ./mse_env/Scripts/activate
23
+
24
+ # json config = "max_samples": 500,
25
+
26
+ # python mse_text_img_process.py
27
+ # python convert_mse.py
28
+
29
+ # pip install jsonlines
30
+ # pip install deepeval
31
+
32
+ NUM_TEST_CASES=100
33
+
34
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test f --shot 0 --out_file metric_test_orig_100_f.txt
35
+ # echo "Test case faithfulness finished"
36
+
37
+ NUM_SHOT=0
38
+
39
+ # set DEEPEVAL_RESULTS_FOLDER=.\data
40
+
41
+ python mse_ollama_timer.py
42
+ echo "Test time calculated"
43
+
44
+ # deepeval set-local-model --model-name Hudson/llemma:7b
45
+ # ollama pull Hudson/llemma:7b
46
+ # deepeval set-ollama Hudson/llemma:7b
47
+
48
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_0_shot_100_ar"
49
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT #--out_file metric_test_0_shot_100_ar.txt
50
+ # echo "Test case answer relevancy finished"
51
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_0_shot_100_crec"
52
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT #--out_file metric_test_0_shot_100_crec.txt
53
+ # echo "Test case contextual recall finished"
54
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_0_shot_100_cp"
55
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT #--out_file metric_test_0_shot_100_cp.txt
56
+ # echo "Test case contextual precision finished"
57
+
58
+ NUM_SHOT=1
59
+
60
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_1_shot_100_ar"
61
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT #--out_file metric_test_1_shot_100_ar.txt
62
+ # echo "Test case answer relevancy finished"
63
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_1_shot_100_crec"
64
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT #--out_file metric_test_1_shot_100_crec.txt
65
+ # echo "Test case contexual recall finished"
66
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_1_shot_100_cp"
67
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT #--out_file metric_test_1_shot_100_cp.txt
68
+ # echo "Test case contextual precision finished"
69
+
70
+
71
+
72
+ NUM_SHOT=5
73
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_5_shot_100_ar"
74
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT #--out_file metric_test_5_shot_100_ar.txt
75
+ # echo "Test case answer relevancy finished"
76
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_5_shot_100_crec"
77
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT #--out_file metric_test_5_shot_100_crec.txt
78
+ # echo "Test case contexual recall finished"
79
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_5_shot_100_cp"
80
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT #--out_file metric_test_5_shot_100_cp.txt
81
+ # echo "Test case contextual precision finished"
82
+
83
+ # python mse_ollama_run.py --num 25 --begin 0 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_cp.txt
84
+ # echo "Test case contextual precision finished"
85
+
86
+ # python mse_ollama_run.py --num 25 --begin 25 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b25_cp.txt
87
+ # echo "Test case contextual precision finished (start 25)"
88
+ # python mse_ollama_run.py --num 25 --begin 50 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b50_cp.txt
89
+ # echo "Test case contextual precision finished (start 50)"
90
+ # python mse_ollama_run.py --num 25 --begin 75 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b75_cp.txt
91
+ # echo "Test case contextual precision finished (start 75)"
92
+
93
+
94
+ NUM_SHOT=10
95
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_10_shot_100_ar"
96
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT --out_file metric_test_10_shot_100_ar.txt
97
+ # echo "Test case answer relevancy finished"
98
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_10_shot_100_crec"
99
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT --out_file metric_test_10_shot_100_crec.txt
100
+ # echo "Test case contextual recall finished"
101
+ # export DEEPEVAL_RESULTS_FOLDER="./metric_test_10_shot_100_cp"
102
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT --out_file metric_test_10_shot_100_cp.txt
103
+ # echo "Test case contextual precision finished"
104
+
105
+ # finetuned
106
+ NUM_SHOT=0
107
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_ar"
108
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT #> metric_test_ft_100_ar.txt
109
+ # echo "Test case answer relevancy finished"
110
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_crec"
111
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT #> metric_test_ft_100_crec.txt
112
+ # echo "Test case contexual recall finished"
113
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_cp"
114
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT > metric_test_ft_100_cp.txt
115
+ # echo "Test case contextual precision finished"
116
+
117
+
118
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test crel --out_file metric_test_orig_100_crel.txt
119
+ # echo "Test case contextual relevancy finished"
120
+
121
+
122
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test f --out_file metric_test_orig_100_f.txt
123
+ # echo "Test case faithfulness finished"
124
+
125
+ # python mse_jsonl_resize.py
126
+
127
+ # python finetune.py
128
+
129
+ # echo "Original Llemma Model"
130
+ # echo "Processing 0 shot 100 test cases"
131
+ # CUDA_VISIBLE_DEVICES=0 python mse_deepeval_dataset.py --num 100 --shot 0 --dataset mse_llemma_orig_100_case_0_shot
132
+ # echo "Processing 1 shot 100 test cases"
133
+ # CUDA_VISIBLE_DEVICES=0 python mse_deepeval_dataset.py --num 100 --shot 1 --dataset mse_llemma_orig_100_case_1_shot
134
+ # echo "Processing 5 shot 100 test cases"
135
+ # CUDA_VISIBLE_DEVICES=0 python mse_deepeval_dataset.py --num 100 --shot 5 --dataset mse_llemma_orig_100_case_5_shot
136
+ # echo "Processing 10 shot 100 test cases"
137
+ # CUDA_VISIBLE_DEVICES=0 python mse_deepeval_dataset.py --num 100 --shot 10 --dataset mse_llemma_orig_100_case_10_shot
process_mse_data_2.sh ADDED
@@ -0,0 +1,118 @@
1
+ #!/bin/bash
2
+
3
+ #SBATCH --time=1:00:00 # walltime. hours:minutes:seconds
4
+ #SBATCH --ntasks=8 # number of processor cores (i.e. tasks)
5
+ #SBATCH --nodes=1 # number of nodes
6
+ #SBATCH --gpus=1
7
+ #SBATCH --mem=80G # memory per node
8
+ #SBATCH --mail-user=aw742@byu.edu # email address
9
+ #SBATCH --mail-type=BEGIN
10
+ #SBATCH --mail-type=END
11
+ #SBATCH --mail-type=FAIL
12
+ #SBATCH --qos=cs
13
+ #SBATCH --partition=cs
14
+
15
+ # some helpful debugging options
16
+ set -e
17
+ set -u
18
+
19
+ # LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
20
+ # module load python/3.11
21
+
22
+ source ./mse_env/Scripts/activate
23
+
24
+ # json config = "max_samples": 500,
25
+
26
+ # python mse_text_img_process.py
27
+ # python convert_mse.py
28
+
29
+ # pip install jsonlines
30
+ # pip install deepeval
31
+
32
+ NUM_TEST_CASES=100
33
+
34
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test f --shot 0 --out_file metric_test_orig_100_f.txt
35
+ # echo "Test case faithfulness finished"
36
+
37
+ NUM_SHOT=0
38
+
39
+ # set DEEPEVAL_RESULTS_FOLDER=.\data
40
+
41
+ # deepeval set-local-model --model-name Hudson/llemma:7b
42
+ # ollama pull Hudson/llemma:7b
43
+ # deepeval set-ollama Hudson/llemma:7b
44
+
45
+ # python mse_ollama_run.py --test ar --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_0_shot.json > metric_test_0_shot_100_ar.txt
46
+ # echo "Test case answer relevancy finished"
47
+ # python mse_ollama_run.py --test crec --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_0_shot.json #> metric_test_0_shot_100_crec.txt
48
+ # echo "Test case contexual recall finished"
49
+ # python mse_ollama_run.py --test cp --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_0_shot.json > metric_test_0_shot_100_cp.txt
50
+ # echo "Test case contextual precision finished"
51
+
52
+ NUM_SHOT=1
53
+
54
+ # python mse_ollama_run.py --test ar --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_1_shot.json > metric_test_1_shot_100_ar.txt
55
+ # echo "Test case answer relevancy finished"
56
+ # python mse_ollama_run.py --test crec --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_1_shot.json #> metric_test_1_shot_100_crec.txt
57
+ # echo "Test case contextual recall finished"
58
+ # python mse_ollama_run.py --test cp --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_1_shot.json #> metric_test_1_shot_100_cp.txt
59
+ # echo "Test case contextual precision finished"
60
+
61
+
62
+
63
+ NUM_SHOT=5
64
+ # python mse_ollama_run.py --test ar --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_5_shot.json > metric_test_5_shot_100_ar.txt
65
+ # echo "Test case answer relevancy finished"
66
+ # python mse_ollama_run.py --test crec --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_5_shot.json #> metric_test_5_shot_100_crec.txt
67
+ # echo "Test case contextual recall finished"
68
+ # python mse_ollama_run.py --test cp --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_5_shot.json #> metric_test_5_shot_100_cp.txt
69
+ # echo "Test case contextual precision finished"
70
+
71
+ # # python mse_ollama_run.py --num 25 --begin 0 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_cp.txt
72
+ # # echo "Test case contextual precision finished"
73
+
74
+ # # python mse_ollama_run.py --num 25 --begin 25 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b25_cp.txt
75
+ # # echo "Test case contextual precision finished (start 25)"
76
+ # # python mse_ollama_run.py --num 25 --begin 50 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b50_cp.txt
77
+ # # echo "Test case contextual precision finished (start 50)"
78
+ # # python mse_ollama_run.py --num 25 --begin 75 --test cp --shot $NUM_SHOT --out_file metric_test_5_shot_25_b75_cp.txt
79
+ # # echo "Test case contextual precision finished (start 75)"
80
+
81
+
82
+ # NUM_SHOT=10
83
+ # python mse_ollama_run.py --test ar --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_10_shot.json > metric_test_10_shot_100_ar.txt
84
+ # echo "Test case answer relevancy finished"
85
+ # python mse_ollama_run.py --test crec --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_10_shot.json #> metric_test_10_shot_100_crec.txt
86
+ # echo "Test case contextual recall finished"
87
+ # python mse_ollama_run.py --test cp --dataset ./deepeval-test-dataset/mse_llemma_orig_100_case_10_shot.json #> metric_test_10_shot_100_cp.txt
88
+ # echo "Test case contextual precision finished"
89
+
90
+ # finetuned
91
+ NUM_SHOT=0
92
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_ar"
93
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test ar --shot $NUM_SHOT --out_file metric_test_ft_100_ar.docx
94
+ # echo "Test case answer relevancy finished"
95
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_crec"
96
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test crec --shot $NUM_SHOT --out_file metric_test_ft_100_crec.docx
97
+ # echo "Test case contextual recall finished"
98
+ # export DEEPEVAL_RESULTS_FOLDER="metric_test_ft_100_cp"
99
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --begin 0 --test cp --shot $NUM_SHOT --out_file metric_test_ft_100_cp.docx
100
+ # echo "Test case contextual precision finished"
101
+
102
+ python mse_ollama_run_ft.py --test ar --dataset ./deepeval-test-dataset/mse_llemma_ft_100_case_0_shot.json #> metric_test_ft_100_ar.txt
103
+ echo "Test case answer relevancy finished"
104
+ python mse_ollama_run_ft.py --test crec --dataset ./deepeval-test-dataset/mse_llemma_ft_100_case_0_shot.json > metric_test_ft_100_crec.txt
105
+ echo "Test case contextual recall finished"
106
+ python mse_ollama_run_ft.py --test cp --dataset ./deepeval-test-dataset/mse_llemma_ft_100_case_0_shot.json > metric_test_ft_100_cp.txt
107
+ echo "Test case contextual precision finished"
108
+
109
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test crel --out_file metric_test_orig_100_crel.txt
110
+ # echo "Test case contextual relevancy finished"
111
+
112
+
113
+ # python mse_ollama_run.py --num $NUM_TEST_CASES --test f --out_file metric_test_orig_100_f.txt
114
+ # echo "Test case faithfulness finished"
115
+
116
+ # python mse_jsonl_resize.py
117
+
118
+ # python finetune.py
requirements.txt ADDED
@@ -0,0 +1,20 @@
1
+ numpy
2
+ matplotlib
3
+ scipy
4
+ sentencepiece
5
+ protobuf
6
+ torch
7
+ gradio
8
+ torchvision
9
+ opencv-python-headless
10
+ tensorboardX
11
+ transformers
12
+ datasets
13
+ pylatex
14
+ pnglatex
15
+ zstandard
16
+ jsonlines
17
+ pyramid==1.5
18
+ deepeval
19
+ ollama
run.sh ADDED
@@ -0,0 +1,37 @@
1
+ #!/bin/bash
2
+
3
+ #SBATCH --time=1:00:00 # walltime. hours:minutes:seconds
4
+ #SBATCH --ntasks=8 # number of processor cores (i.e. tasks)
5
+ #SBATCH --nodes=1 # number of nodes
6
+ #SBATCH --gpus=1
7
+ #SBATCH --mem=80G # memory per node
8
+ #SBATCH --mail-user=aw742@byu.edu # email address
9
+ #SBATCH --mail-type=BEGIN
10
+ #SBATCH --mail-type=END
11
+ #SBATCH --mail-type=FAIL
12
+ #SBATCH --qos=cs
13
+ #SBATCH --partition=cs
14
+
15
+ # some helpful debugging options
16
+ set -e
17
+ set -u
18
+
19
+ # LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
20
+ # module load python/3.11
21
+
22
+ # pip install virtualenv
23
+
24
+ # python -m venv mse_env
25
+
26
+ source ./mse_env/Scripts/activate
27
+
28
+ pip uninstall transformers
29
+ pip uninstall torch torchvision torchaudio
30
+
31
+ cd math-evaluation-harness
32
+ pip install -r requirements.txt
33
+ cd ..
34
+
35
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
36
+ # pip install tqdm
37
+ # pip install accelerate
timer_test_llemma.txt ADDED
@@ -0,0 +1,17 @@
1
+ Ran Llemma in 3.5200 seconds
2
+ Test time calculated
3
+
4
+ Ran Llemma in 6.0093 seconds
5
+ Test time calculated
6
+
7
+ Ran Llemma in 2.3846 seconds
8
+ Test time calculated
9
+
10
+ Ran Llemma in 2.6123 seconds
11
+ Test time calculated
12
+
13
+ Ran Llemma in 2.0650 seconds
14
+ Test time calculated
15
+
16
+ Avg:
17
+ 3.31824 seconds
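As a sanity check, the mean of the five timings recorded above can be recomputed directly:

```python
# The five wall-clock timings recorded above, in seconds.
runs = [3.5200, 6.0093, 2.3846, 2.6123, 2.0650]
avg = sum(runs) / len(runs)  # 3.31824
```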