ZTWHHH committed on
Commit c021a3a · verified · parent: 7e27a1f

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes; the raw diff has the full list.
Files changed (50)
  1. .gitattributes +1 -0
  2. evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/RECORD +808 -0
  3. evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/REQUESTED +0 -0
  4. evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/entry_points.txt +2 -0
  5. evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/licenses/LICENSE +201 -0
  6. evalkit_eagle/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so +3 -0
  7. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/__init__.py +66 -0
  8. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/__main__.py +27 -0
  9. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/compat.py +205 -0
  10. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/sacrebleu.py +576 -0
  11. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/significance.py +435 -0
  12. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/utils.py +639 -0
  13. evalkit_eagle/lib/python3.10/site-packages/sacrebleu/version.py +16 -0
  14. janus/share/terminfo/q/qansi-t +0 -0
  15. janus/share/terminfo/q/qnx +0 -0
  16. janus/share/terminfo/q/qnxt +0 -0
  17. janus/share/terminfo/q/qnxt4 +0 -0
  18. janus/share/terminfo/q/qvt108 +0 -0
  19. janus/share/terminfo/q/qvt119+-w +0 -0
  20. janus/share/terminfo/q/qvt119-w +0 -0
  21. janus/share/terminfo/q/qvt119p +0 -0
  22. janus/share/terminfo/q/qvt203+ +0 -0
  23. janus/share/terminfo/x/x1700 +0 -0
  24. janus/share/terminfo/x/xerox820 +0 -0
  25. janus/share/terminfo/x/xnuppc+112x37 +0 -0
  26. janus/share/terminfo/x/xnuppc+144x48 +0 -0
  27. janus/share/terminfo/x/xnuppc+f2 +0 -0
  28. janus/share/terminfo/x/xnuppc-100x37-m +0 -0
  29. janus/share/terminfo/x/xnuppc-112x37 +0 -0
  30. janus/share/terminfo/x/xnuppc-128x40 +0 -0
  31. janus/share/terminfo/x/xnuppc-128x48-m +0 -0
  32. janus/share/terminfo/x/xnuppc-144x48-m +0 -0
  33. janus/share/terminfo/x/xnuppc-160x64 +0 -0
  34. janus/share/terminfo/x/xnuppc-160x64-m +0 -0
  35. janus/share/terminfo/x/xnuppc-80x30 +0 -0
  36. janus/share/terminfo/x/xnuppc-90x30-m +0 -0
  37. janus/share/terminfo/x/xnuppc-b +0 -0
  38. janus/share/terminfo/x/xnuppc-f +0 -0
  39. janus/share/terminfo/x/xnuppc-m +0 -0
  40. janus/share/terminfo/x/xnuppc-m-b +0 -0
  41. janus/share/terminfo/x/xterm+88color2 +0 -0
  42. janus/share/terminfo/x/xterm+alt1049 +0 -0
  43. janus/share/terminfo/x/xterm+alt47 +0 -0
  44. janus/share/terminfo/x/xterm+app +0 -0
  45. janus/share/terminfo/x/xterm+direct +0 -0
  46. janus/share/terminfo/x/xterm+direct16 +0 -0
  47. janus/share/terminfo/x/xterm+direct256 +0 -0
  48. janus/share/terminfo/x/xterm+keypad +0 -0
  49. janus/share/terminfo/x/xterm+meta +0 -0
  50. janus/share/terminfo/x/xterm+nofkeys +0 -0
.gitattributes CHANGED
@@ -803,3 +803,4 @@ evalkit_eagle/lib/python3.10/site-packages/pandas/core/indexes/__pycache__/base.
  evalkit_eagle/lib/python3.10/site-packages/pandas/io/__pycache__/stata.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
  evalkit_eagle/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so filter=lfs diff=lfs merge=lfs -text
  evalkit_eagle/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda124.so filter=lfs diff=lfs merge=lfs -text
+ evalkit_eagle/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
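The added `.gitattributes` line marks the regex shared object as Git LFS-tracked. As a minimal sketch (using stdlib `fnmatch`, an approximation of Git's own pattern matcher), this is how such an attribute line maps a path to its attributes:

```python
from fnmatch import fnmatch

# The attribute line appended in this commit
attr_line = ("evalkit_eagle/lib/python3.10/site-packages/regex/"
             "_regex.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text")
pattern, *attrs = attr_line.split()

# A repo path to test against the pattern
path = ("evalkit_eagle/lib/python3.10/site-packages/regex/"
        "_regex.cpython-310-x86_64-linux-gnu.so")

print(fnmatch(path, pattern))   # the .so matches its own literal pattern
print(attrs)                    # attributes Git applies: LFS filter/diff/merge, binary
```

Git's matcher handles `**` and leading-`/` anchoring differently from `fnmatch`, so this is only illustrative for literal paths like the one above.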
evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/RECORD ADDED
@@ -0,0 +1,808 @@
1
+ ../../../bin/openai,sha256=BPJMcG5Ul8vWwBvgskLtldG0MRWeiGZZSOaSfIRGf5o,228
2
+ openai-1.59.7.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
3
+ openai-1.59.7.dist-info/METADATA,sha256=6HlD_z7wNgkm6HpVwwCLjKH6ZN0pGliuzqGYPbkMm8E,27223
4
+ openai-1.59.7.dist-info/RECORD,,
5
+ openai-1.59.7.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
6
+ openai-1.59.7.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
7
+ openai-1.59.7.dist-info/entry_points.txt,sha256=kAYhQEmziJwsKs5raYAIOvJ2LWmbz5dulEXOzsY71ro,43
8
+ openai-1.59.7.dist-info/licenses/LICENSE,sha256=1xHtN7sZrnJJr40JO4_G6nWP01VLkqxhUAwa08wOP7k,11336
9
+ openai/__init__.py,sha256=UZfk6nnPAGguY3XX7QQfqa4kZjzvFEGp-TUyxrBcTlI,10296
10
+ openai/__main__.py,sha256=bYt9eEaoRQWdejEHFD8REx9jxVEdZptECFsV7F49Ink,30
11
+ openai/__pycache__/__init__.cpython-310.pyc,,
12
+ openai/__pycache__/__main__.cpython-310.pyc,,
13
+ openai/__pycache__/_base_client.cpython-310.pyc,,
14
+ openai/__pycache__/_client.cpython-310.pyc,,
15
+ openai/__pycache__/_compat.cpython-310.pyc,,
16
+ openai/__pycache__/_constants.cpython-310.pyc,,
17
+ openai/__pycache__/_exceptions.cpython-310.pyc,,
18
+ openai/__pycache__/_files.cpython-310.pyc,,
19
+ openai/__pycache__/_legacy_response.cpython-310.pyc,,
20
+ openai/__pycache__/_models.cpython-310.pyc,,
21
+ openai/__pycache__/_module_client.cpython-310.pyc,,
22
+ openai/__pycache__/_qs.cpython-310.pyc,,
23
+ openai/__pycache__/_resource.cpython-310.pyc,,
24
+ openai/__pycache__/_response.cpython-310.pyc,,
25
+ openai/__pycache__/_streaming.cpython-310.pyc,,
26
+ openai/__pycache__/_types.cpython-310.pyc,,
27
+ openai/__pycache__/_version.cpython-310.pyc,,
28
+ openai/__pycache__/pagination.cpython-310.pyc,,
29
+ openai/__pycache__/version.cpython-310.pyc,,
30
+ openai/_base_client.py,sha256=dp8TJR8ZBuS0RbjnNKVkZC--tbstwz33Q_P_UB7dKCE,69238
31
+ openai/_client.py,sha256=FJRGkrdpHAFV2TOs04tO5uyKCA-cudlk4BlvCX3KI3Q,23355
32
+ openai/_compat.py,sha256=Mtzi28qOK99ZBPcGcQqdjoUFk2MzzpqjaafjuwQ4NO0,6982
33
+ openai/_constants.py,sha256=L1pfEhuz_wM2w2_U9P_9JZzTbrN4pbLo207l96rtKcQ,469
34
+ openai/_exceptions.py,sha256=2BEuXwqce9z7X6lWLLXRqg1vOay_q-OdLz9lcj6Pluw,4798
35
+ openai/_extras/__init__.py,sha256=LZbJLZ7aFHRcI7uiY4-wFQTdMp-BF6FER1QMhKVFkWk,107
36
+ openai/_extras/__pycache__/__init__.cpython-310.pyc,,
37
+ openai/_extras/__pycache__/_common.cpython-310.pyc,,
38
+ openai/_extras/__pycache__/numpy_proxy.cpython-310.pyc,,
39
+ openai/_extras/__pycache__/pandas_proxy.cpython-310.pyc,,
40
+ openai/_extras/_common.py,sha256=NWWtgbdJsO3hQGQxaXGfVk0LjeIE5AFZ8VS_795hhMc,364
41
+ openai/_extras/numpy_proxy.py,sha256=hwZXa_JBAPD5taRhor1tGxK26g5IaK52JclQDl-dky0,799
42
+ openai/_extras/pandas_proxy.py,sha256=NCEt1Dqwc_0H85YdsWPDE3lPDJtYnBT8G-gJE_BCeEc,637
43
+ openai/_files.py,sha256=WEf6hxJN1u3pVkdnPCpinhxCUnOV2olt4J6vLoJ_k48,3616
44
+ openai/_legacy_response.py,sha256=YBL2OTX7W139lVpcVHnNTsHRPNJxWHBAw6ZZHqnL2fs,16046
45
+ openai/_models.py,sha256=9AQDXMPMGn0BM-MjcKL6AZYXItM2OJPgdhgZPJiHpUA,30413
46
+ openai/_module_client.py,sha256=gF_2bbdosIwUt29sQgrQRJOgNREvXF-IDxe4XKGhHjY,2523
47
+ openai/_qs.py,sha256=AOkSz4rHtK4YI3ZU_kzea-zpwBUgEY8WniGmTPyEimc,4846
48
+ openai/_resource.py,sha256=IQihFzFLhGOiGSlT2dO1ESWSTg2XypgbtAldtGdTOqU,1100
49
+ openai/_response.py,sha256=Juwnj0AMWnHc8HDjtdcQQpMIDyX170hzZPXaAK1e9Qw,29387
50
+ openai/_streaming.py,sha256=t1UZrg53fVJB5Rs6k2sT9PBbvjp-IGrQzUq_5nlxKG4,13102
51
+ openai/_types.py,sha256=GxKqy9_2_AUqbaRROzqhCJ47a7c-q_T6Bu8kV9a2qhA,6242
52
+ openai/_utils/__init__.py,sha256=WnJrKMH-HJifY1H9sSTocSjuVSm4s2W_2QnIm3-wxZI,2222
53
+ openai/_utils/__pycache__/__init__.cpython-310.pyc,,
54
+ openai/_utils/__pycache__/_logs.cpython-310.pyc,,
55
+ openai/_utils/__pycache__/_proxy.cpython-310.pyc,,
56
+ openai/_utils/__pycache__/_reflection.cpython-310.pyc,,
57
+ openai/_utils/__pycache__/_streams.cpython-310.pyc,,
58
+ openai/_utils/__pycache__/_sync.cpython-310.pyc,,
59
+ openai/_utils/__pycache__/_transform.cpython-310.pyc,,
60
+ openai/_utils/__pycache__/_typing.cpython-310.pyc,,
61
+ openai/_utils/__pycache__/_utils.cpython-310.pyc,,
62
+ openai/_utils/_logs.py,sha256=IC5iwPflwelNpJEpWsvK3up-pol5hR8k_VL9fSukk_Y,1351
63
+ openai/_utils/_proxy.py,sha256=z3zsateHtb0EARTWKk8QZNHfPkqJbqwd1lM993LBwGE,1902
64
+ openai/_utils/_reflection.py,sha256=aTXm-W0Kww4PJo5LPkUnQ92N-2UvrK1-D67cJVBlIgw,1426
65
+ openai/_utils/_streams.py,sha256=SMC90diFFecpEg_zgDRVbdR3hSEIgVVij4taD-noMLM,289
66
+ openai/_utils/_sync.py,sha256=03JeD-UR_e2O8dJEtD-v4zcyhlEpFkrcH8bgrSJMrxI,2437
67
+ openai/_utils/_transform.py,sha256=Dkkyr7OveGmOolepcvXmVJWE3kqim4b0nM0h7yWbgeY,13468
68
+ openai/_utils/_typing.py,sha256=nTJz0jcrQbEgxwy4TtAkNxuU0QHHlmc6mQtA6vIR8tg,4501
69
+ openai/_utils/_utils.py,sha256=MiRKO6s2cFkNzeBUwBc7x1MQiH_3s2-uG1WYySqwveg,12419
70
+ openai/_version.py,sha256=7awlvvOt0N1yQ0bQTOoi1bid6H4qFs_SXBZJdn7IFYA,159
71
+ openai/cli/__init__.py,sha256=soGgtqyomgddl92H0KJRqHqGuaXIaghq86qkzLuVp7U,31
72
+ openai/cli/__pycache__/__init__.cpython-310.pyc,,
73
+ openai/cli/__pycache__/_cli.cpython-310.pyc,,
74
+ openai/cli/__pycache__/_errors.cpython-310.pyc,,
75
+ openai/cli/__pycache__/_models.cpython-310.pyc,,
76
+ openai/cli/__pycache__/_progress.cpython-310.pyc,,
77
+ openai/cli/__pycache__/_utils.cpython-310.pyc,,
78
+ openai/cli/_api/__init__.py,sha256=cj92MZq-9_1PQM8A4TQVsqKn5mcTDAGxHllJ0UvJOPE,58
79
+ openai/cli/_api/__pycache__/__init__.cpython-310.pyc,,
80
+ openai/cli/_api/__pycache__/_main.cpython-310.pyc,,
81
+ openai/cli/_api/__pycache__/audio.cpython-310.pyc,,
82
+ openai/cli/_api/__pycache__/completions.cpython-310.pyc,,
83
+ openai/cli/_api/__pycache__/files.cpython-310.pyc,,
84
+ openai/cli/_api/__pycache__/image.cpython-310.pyc,,
85
+ openai/cli/_api/__pycache__/models.cpython-310.pyc,,
86
+ openai/cli/_api/_main.py,sha256=5yyfLURqCEaAN8B61gHaqVAaYgtyb9Xq0ncQ3P2BAh0,451
87
+ openai/cli/_api/audio.py,sha256=IPbABMwryQ0CQTF4gi6VS3hJi6qFjoyj6IDV2ZoPT6A,3787
88
+ openai/cli/_api/chat/__init__.py,sha256=MhFUQH9F6QCtbPMlbsU_DWTd7wc5DSCZ7Wy3FBGVij0,300
89
+ openai/cli/_api/chat/__pycache__/__init__.cpython-310.pyc,,
90
+ openai/cli/_api/chat/__pycache__/completions.cpython-310.pyc,,
91
+ openai/cli/_api/chat/completions.py,sha256=9Ztetyz7rm0gP5SOPWEcpzFJnJKuIEQit626vOq42bE,5363
92
+ openai/cli/_api/completions.py,sha256=ysOmnbXpFz3VB5N_5USPdObiYew62vEn6rMtNFwTJGQ,6412
93
+ openai/cli/_api/files.py,sha256=6nKXFnsC2QE0bGnVUAG7BTLSu6K1_MhPE0ZJACmzgRY,2345
94
+ openai/cli/_api/image.py,sha256=ovBExdn8oUK9ImOpsPafesfAlmcftLP2p7d37hcUtKU,5062
95
+ openai/cli/_api/models.py,sha256=pGmIGZToj3raGGpKvPSq_EVUR-dqg4Vi0PNfZH98D2E,1295
96
+ openai/cli/_cli.py,sha256=o6zWCnq84u-DIGZuR9YoOUxTGTpx-oCU5mgAKDi555c,6779
97
+ openai/cli/_errors.py,sha256=nejlu1HnOyAIr2n7uqpFtWn8XclWj_9N8FwgfT3BPK8,471
98
+ openai/cli/_models.py,sha256=tgsldjG216KpwgAZ5pS0sV02FQvONDJU2ElA4kCCiIU,491
99
+ openai/cli/_progress.py,sha256=aMLssU9jh-LoqRYH3608jNos7r6vZKnHTRlHxFznzv4,1406
100
+ openai/cli/_tools/__init__.py,sha256=cj92MZq-9_1PQM8A4TQVsqKn5mcTDAGxHllJ0UvJOPE,58
101
+ openai/cli/_tools/__pycache__/__init__.cpython-310.pyc,,
102
+ openai/cli/_tools/__pycache__/_main.cpython-310.pyc,,
103
+ openai/cli/_tools/__pycache__/fine_tunes.cpython-310.pyc,,
104
+ openai/cli/_tools/__pycache__/migrate.cpython-310.pyc,,
105
+ openai/cli/_tools/_main.py,sha256=pakjEXHRHqYlTml-RxV7fNrRtRXzmZBinoPi1AJipFY,467
106
+ openai/cli/_tools/fine_tunes.py,sha256=RQgYMzifk6S7Y1I1K6huqco2QxmXa7gVUlHl6SrKTSU,1543
107
+ openai/cli/_tools/migrate.py,sha256=o-iomzhtC6N6X5H5GDlgQ_QOaIovE2YA9oHc_tIAUj8,4497
108
+ openai/cli/_utils.py,sha256=oiTc9MnxQh_zxAZ1OIHPkoDpCll0NF9ZgkdFHz4T-Bs,848
109
+ openai/lib/.keep,sha256=wuNrz-5SXo3jJaJOJgz4vFHM41YH_g20F5cRQo0vLes,224
110
+ openai/lib/__init__.py,sha256=BMTfMnlbugMgDA1STDIAlx4bI4t4l_8bQmJxd0th0n8,126
111
+ openai/lib/__pycache__/__init__.cpython-310.pyc,,
112
+ openai/lib/__pycache__/_old_api.cpython-310.pyc,,
113
+ openai/lib/__pycache__/_pydantic.cpython-310.pyc,,
114
+ openai/lib/__pycache__/_tools.cpython-310.pyc,,
115
+ openai/lib/__pycache__/_validators.cpython-310.pyc,,
116
+ openai/lib/__pycache__/azure.cpython-310.pyc,,
117
+ openai/lib/_old_api.py,sha256=XZnXBrEKuTd70iJirj5mGW35fZoqruJobbBTq6bvg10,1947
118
+ openai/lib/_parsing/__init__.py,sha256=wS3BYvMGj9TqiPqOe3rO1sleaAJqHVuCaQuCE5rZIUw,539
119
+ openai/lib/_parsing/__pycache__/__init__.cpython-310.pyc,,
120
+ openai/lib/_parsing/__pycache__/_completions.cpython-310.pyc,,
121
+ openai/lib/_parsing/_completions.py,sha256=I1KpjdI9p8Me-nsLF2szjEYF_7x4k28WGH5GdZeKpzI,9138
122
+ openai/lib/_pydantic.py,sha256=Lvd-6S5WiEPvwewOqNarDiGJ_ZPtkez9W28ZLcB-K_c,5336
123
+ openai/lib/_tools.py,sha256=xrzM7jNgehZGsRQ9kSgn1q33z9cHrgf0b8UMo5wrTFw,1501
124
+ openai/lib/_validators.py,sha256=cXJXFuaAl7jeJcYHXXnFa4NHGtHs-_zt3Zs1VVCmQo4,35288
125
+ openai/lib/azure.py,sha256=8rGDip2BVCTvZnvaq_fT8pGQZ3479-JP6oL9WtI5NpM,23563
126
+ openai/lib/streaming/__init__.py,sha256=kD3LpjsqU7caDQDhB-YjTUl9qqbb5sPnGGSI2yQYC70,379
127
+ openai/lib/streaming/__pycache__/__init__.cpython-310.pyc,,
128
+ openai/lib/streaming/__pycache__/_assistants.cpython-310.pyc,,
129
+ openai/lib/streaming/__pycache__/_deltas.cpython-310.pyc,,
130
+ openai/lib/streaming/_assistants.py,sha256=LUWSinmYopQIkQ5xSg73b6BWbkRkQS5JvX62w_V9xSw,40692
131
+ openai/lib/streaming/_deltas.py,sha256=I7B_AznXZwlBmE8Puau7ayTQUx6hMIEVE8FYTQm2fjs,2502
132
+ openai/lib/streaming/chat/__init__.py,sha256=7krL_atOvvpQkY_byWSglSfDsMs5hdoxHmz4Ulq7lcc,1305
133
+ openai/lib/streaming/chat/__pycache__/__init__.cpython-310.pyc,,
134
+ openai/lib/streaming/chat/__pycache__/_completions.cpython-310.pyc,,
135
+ openai/lib/streaming/chat/__pycache__/_events.cpython-310.pyc,,
136
+ openai/lib/streaming/chat/__pycache__/_types.cpython-310.pyc,,
137
+ openai/lib/streaming/chat/_completions.py,sha256=icXzr6TwaQvOOEZHRLIfw106YVUT9mLGjQt6QJ1ObKI,29944
138
+ openai/lib/streaming/chat/_events.py,sha256=lstVmM6YR2Cs9drikzrY9JCZn9Nbfym0aKIPtNpxL6w,2618
139
+ openai/lib/streaming/chat/_types.py,sha256=-SYVBNhGkOUoJ-8dotxpCRqPJpfyOQ8hwR2_HrsQCRI,739
140
+ openai/pagination.py,sha256=B9ejXEAR_hYGLHfqb9xEEsE0u5dCUMjvplOce5dpY7M,2760
141
+ openai/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
142
+ openai/resources/__init__.py,sha256=eYonVyf6AAmk-b8JYSYmo5EEMv89ovxiAY5A83ti8J8,4533
143
+ openai/resources/__pycache__/__init__.cpython-310.pyc,,
144
+ openai/resources/__pycache__/batches.cpython-310.pyc,,
145
+ openai/resources/__pycache__/completions.cpython-310.pyc,,
146
+ openai/resources/__pycache__/embeddings.cpython-310.pyc,,
147
+ openai/resources/__pycache__/files.cpython-310.pyc,,
148
+ openai/resources/__pycache__/images.cpython-310.pyc,,
149
+ openai/resources/__pycache__/models.cpython-310.pyc,,
150
+ openai/resources/__pycache__/moderations.cpython-310.pyc,,
151
+ openai/resources/audio/__init__.py,sha256=YM7FHvPKVlj_v6EIgfpUQsb6q4hS2hVQ3gfkgic0sP0,1687
152
+ openai/resources/audio/__pycache__/__init__.cpython-310.pyc,,
153
+ openai/resources/audio/__pycache__/audio.cpython-310.pyc,,
154
+ openai/resources/audio/__pycache__/speech.cpython-310.pyc,,
155
+ openai/resources/audio/__pycache__/transcriptions.cpython-310.pyc,,
156
+ openai/resources/audio/__pycache__/translations.cpython-310.pyc,,
157
+ openai/resources/audio/audio.py,sha256=MMJHbfXmyYmQU7dF8XsD0YOIqdlG3gtxUqTihOuVx8o,5499
158
+ openai/resources/audio/speech.py,sha256=yPoi_Xozv0Yuikbf2dxhAyRdN2q_sWDQoHNCxUayC-E,8903
159
+ openai/resources/audio/transcriptions.py,sha256=4X71pe1lvelNRPSlHy2jAtIMyETYwWieLShBdr12MN0,18507
160
+ openai/resources/audio/translations.py,sha256=4Y-ognKnSi72qhwX8FCKB-5JhvaAS2Wnq2ivTFmpUoU,15711
161
+ openai/resources/batches.py,sha256=8wb-oy81IkxpABjT_11JKP7nzTmGmP35lD6WGecWmn8,19578
162
+ openai/resources/beta/__init__.py,sha256=nXoV4P8WCrbEZuNMtptbIuy_LqlVafY9lJ2qfW35GFc,1636
163
+ openai/resources/beta/__pycache__/__init__.cpython-310.pyc,,
164
+ openai/resources/beta/__pycache__/assistants.cpython-310.pyc,,
165
+ openai/resources/beta/__pycache__/beta.cpython-310.pyc,,
166
+ openai/resources/beta/assistants.py,sha256=j1BE3q4aCGzridJ8wyhzn0FeI3Gvy56jRK57EA-SuXk,40533
167
+ openai/resources/beta/beta.py,sha256=D9mhIg_Qc0tUq23AVRUI6Z1WRF_ekeNG5sHeRYyhFXk,6602
168
+ openai/resources/beta/chat/__init__.py,sha256=d_fpyFMAG3iRAPIXANPfRG4HtEm6U_uMUYep7Skj2uY,263
169
+ openai/resources/beta/chat/__pycache__/__init__.cpython-310.pyc,,
170
+ openai/resources/beta/chat/__pycache__/chat.cpython-310.pyc,,
171
+ openai/resources/beta/chat/__pycache__/completions.cpython-310.pyc,,
172
+ openai/resources/beta/chat/chat.py,sha256=sNvU8Fi_o3dWkD_X4Mobafv9XWBP6Y2dJxng-NdFXUs,597
173
+ openai/resources/beta/chat/completions.py,sha256=Z_x_hxpemrmROMrfyx6dUALppPuqNgswgW9YQ3ngHYI,28553
174
+ openai/resources/beta/realtime/__init__.py,sha256=0TBjHlLRsG-hudbiE8f-EXETNkDRAxqkCVAgODiUnYo,862
175
+ openai/resources/beta/realtime/__pycache__/__init__.cpython-310.pyc,,
176
+ openai/resources/beta/realtime/__pycache__/realtime.cpython-310.pyc,,
177
+ openai/resources/beta/realtime/__pycache__/sessions.cpython-310.pyc,,
178
+ openai/resources/beta/realtime/realtime.py,sha256=iRKjT29BT2LEbc909_wtQ8mmr9lKstibKd4DZm0BWEM,37482
179
+ openai/resources/beta/realtime/sessions.py,sha256=i53-QVMaqK3sGP22gh250kANFlRbP4V-g8uffWzKHS8,16093
180
+ openai/resources/beta/threads/__init__.py,sha256=fQ_qdUVSfouVS5h47DlTb5mamChT4K-v-siPuuAB6do,1177
181
+ openai/resources/beta/threads/__pycache__/__init__.cpython-310.pyc,,
182
+ openai/resources/beta/threads/__pycache__/messages.cpython-310.pyc,,
183
+ openai/resources/beta/threads/__pycache__/threads.cpython-310.pyc,,
184
+ openai/resources/beta/threads/messages.py,sha256=LBjgJAK-0g_lkhIX2WG6qNT0RzSTknO0nRlqkVQw-B8,27372
185
+ openai/resources/beta/threads/runs/__init__.py,sha256=2FfDaqwmJJCd-IVpY_CrzWcFvw0KFyQ3cm5jnTfI-DQ,771
186
+ openai/resources/beta/threads/runs/__pycache__/__init__.cpython-310.pyc,,
187
+ openai/resources/beta/threads/runs/__pycache__/runs.cpython-310.pyc,,
188
+ openai/resources/beta/threads/runs/__pycache__/steps.cpython-310.pyc,,
189
+ openai/resources/beta/threads/runs/runs.py,sha256=7sPjaxa8Th6aXDeils1G8VKA9_2wsyjGUs5kJh3M50I,142593
190
+ openai/resources/beta/threads/runs/steps.py,sha256=VlGD9NXtNqOt3uwlnepCavW7v3uVlvvyi0X1h9WZ_-E,15817
191
+ openai/resources/beta/threads/threads.py,sha256=qGh4H0-42NhJHwPpyAYZlGx1ZgssFARJ45fhEDCyDQU,94238
192
+ openai/resources/beta/vector_stores/__init__.py,sha256=11Xn1vhgndWiI0defJHv31vmbtbDgh2GwZT3gX8GgHk,1296
193
+ openai/resources/beta/vector_stores/__pycache__/__init__.cpython-310.pyc,,
194
+ openai/resources/beta/vector_stores/__pycache__/file_batches.cpython-310.pyc,,
195
+ openai/resources/beta/vector_stores/__pycache__/files.cpython-310.pyc,,
196
+ openai/resources/beta/vector_stores/__pycache__/vector_stores.cpython-310.pyc,,
197
+ openai/resources/beta/vector_stores/file_batches.py,sha256=EomxymvX4oCIRXUAfKGShAYWqnv1vlAahcp_Wa7Kt7Y,31985
198
+ openai/resources/beta/vector_stores/files.py,sha256=LjN6Zazb4dGV-xeQ-XRKAVciXsFj7LXh90AKJgVQ-Cw,29724
199
+ openai/resources/beta/vector_stores/vector_stores.py,sha256=OnzaEjKov8npQQf9YSYljPOTNBzjfwmxfW_D7f7fLkQ,28916
200
+ openai/resources/chat/__init__.py,sha256=8Q9ODRo1wIpFa34VaNwuaWFmxqFxagDtUhIAkQNvxEU,849
201
+ openai/resources/chat/__pycache__/__init__.cpython-310.pyc,,
202
+ openai/resources/chat/__pycache__/chat.cpython-310.pyc,,
203
+ openai/resources/chat/__pycache__/completions.cpython-310.pyc,,
204
+ openai/resources/chat/chat.py,sha256=hvYn24it5ARq8BYloSWn5kqqSlBEcYvVdQTf3ujxuV0,3360
205
+ openai/resources/chat/completions.py,sha256=VL61UVRPoI7JuNj6b4k4G2g8Ew0mu2WfLJbtUbW_XuM,99603
206
+ openai/resources/completions.py,sha256=5W3UuTH0V-vpTIkb8-r7gyS0Qp7tx3JZMWZkHBGIjPY,59460
207
+ openai/resources/embeddings.py,sha256=PfwI3PKKPkmLs7wHijO-1pOwW6Fjs5Rqzpy0ALLYgAs,11655
208
+ openai/resources/files.py,sha256=PL7b1lM7s3uJD7CvZcM_9f54kAlhBo913o31z1uXt-0,30093
209
+ openai/resources/fine_tuning/__init__.py,sha256=s6uoq7gM4gwoywdOOZQkPeYiSbUl-OwpeuMhwJJk0lc,837
210
+ openai/resources/fine_tuning/__pycache__/__init__.cpython-310.pyc,,
211
+ openai/resources/fine_tuning/__pycache__/fine_tuning.cpython-310.pyc,,
212
+ openai/resources/fine_tuning/fine_tuning.py,sha256=yfXXcR8IMRHkS-xnoT_nF7WEa2fjprDO-0ND-juPqhk,3394
213
+ openai/resources/fine_tuning/jobs/__init__.py,sha256=_smlrwijZOCcsDWqKnofLxQM2QLucZzXgboL9zJBPHw,849
214
+ openai/resources/fine_tuning/jobs/__pycache__/__init__.cpython-310.pyc,,
215
+ openai/resources/fine_tuning/jobs/__pycache__/checkpoints.cpython-310.pyc,,
216
+ openai/resources/fine_tuning/jobs/__pycache__/jobs.cpython-310.pyc,,
217
+ openai/resources/fine_tuning/jobs/checkpoints.py,sha256=LIJUhxb8hgxEgHdTFKdyb0Q-hnV4ccIprvFpQJI97ho,7474
218
+ openai/resources/fine_tuning/jobs/jobs.py,sha256=kZLZaWRW6ynhLknoOaK64LW9XifzsSOpFHWX8VPjJcs,29392
219
+ openai/resources/images.py,sha256=PS7PIe1X8tccsqLtd-4kx1OTzCow0S-C-L29bmVyV4c,25634
220
+ openai/resources/models.py,sha256=qJj0Cpy_Ok9ELag8VxqTefX8tw7RPgIZ8-a6qllxl8w,11240
221
+ openai/resources/moderations.py,sha256=H9tygVKuT1c25LW_XyrhpK9nlT72SsEYDiPolQBP7hs,7805
222
+ openai/resources/uploads/__init__.py,sha256=HmY3WQgvUI2bN3CjfWHWQOk7UUC6Ozna97_lHhrrRSA,810
223
+ openai/resources/uploads/__pycache__/__init__.cpython-310.pyc,,
224
+ openai/resources/uploads/__pycache__/parts.cpython-310.pyc,,
225
+ openai/resources/uploads/__pycache__/uploads.cpython-310.pyc,,
226
+ openai/resources/uploads/parts.py,sha256=NEMRVCqOOYJV2zTmBau9UtY2qXuB_yDJzzXTJ1XubUY,8150
227
+ openai/resources/uploads/uploads.py,sha256=ft7cVZuDxphjdCV6BcS6Zs2qE3zD1RB57udvaGUR9HY,24918
228
+ openai/types/__init__.py,sha256=GxEEa9qy8CKZVCU1wY4PokDUCq-fD_GwZxRsBxzC_-s,3177
229
+ openai/types/__pycache__/__init__.cpython-310.pyc,,
230
+ openai/types/__pycache__/audio_model.cpython-310.pyc,,
231
+ openai/types/__pycache__/audio_response_format.cpython-310.pyc,,
232
+ openai/types/__pycache__/batch.cpython-310.pyc,,
233
+ openai/types/__pycache__/batch_create_params.cpython-310.pyc,,
234
+ openai/types/__pycache__/batch_error.cpython-310.pyc,,
235
+ openai/types/__pycache__/batch_list_params.cpython-310.pyc,,
236
+ openai/types/__pycache__/batch_request_counts.cpython-310.pyc,,
237
+ openai/types/__pycache__/chat_model.cpython-310.pyc,,
238
+ openai/types/__pycache__/completion.cpython-310.pyc,,
239
+ openai/types/__pycache__/completion_choice.cpython-310.pyc,,
240
+ openai/types/__pycache__/completion_create_params.cpython-310.pyc,,
241
+ openai/types/__pycache__/completion_usage.cpython-310.pyc,,
242
+ openai/types/__pycache__/create_embedding_response.cpython-310.pyc,,
243
+ openai/types/__pycache__/embedding.cpython-310.pyc,,
244
+ openai/types/__pycache__/embedding_create_params.cpython-310.pyc,,
245
+ openai/types/__pycache__/embedding_model.cpython-310.pyc,,
246
+ openai/types/__pycache__/file_content.cpython-310.pyc,,
247
+ openai/types/__pycache__/file_create_params.cpython-310.pyc,,
248
+ openai/types/__pycache__/file_deleted.cpython-310.pyc,,
249
+ openai/types/__pycache__/file_list_params.cpython-310.pyc,,
250
+ openai/types/__pycache__/file_object.cpython-310.pyc,,
251
+ openai/types/__pycache__/file_purpose.cpython-310.pyc,,
252
+ openai/types/__pycache__/image.cpython-310.pyc,,
253
+ openai/types/__pycache__/image_create_variation_params.cpython-310.pyc,,
254
+ openai/types/__pycache__/image_edit_params.cpython-310.pyc,,
255
+ openai/types/__pycache__/image_generate_params.cpython-310.pyc,,
256
+ openai/types/__pycache__/image_model.cpython-310.pyc,,
257
+ openai/types/__pycache__/images_response.cpython-310.pyc,,
258
+ openai/types/__pycache__/model.cpython-310.pyc,,
259
+ openai/types/__pycache__/model_deleted.cpython-310.pyc,,
260
+ openai/types/__pycache__/moderation.cpython-310.pyc,,
261
+ openai/types/__pycache__/moderation_create_params.cpython-310.pyc,,
262
+ openai/types/__pycache__/moderation_create_response.cpython-310.pyc,,
263
+ openai/types/__pycache__/moderation_image_url_input_param.cpython-310.pyc,,
264
+ openai/types/__pycache__/moderation_model.cpython-310.pyc,,
265
+ openai/types/__pycache__/moderation_multi_modal_input_param.cpython-310.pyc,,
266
+ openai/types/__pycache__/moderation_text_input_param.cpython-310.pyc,,
267
+ openai/types/__pycache__/upload.cpython-310.pyc,,
268
+ openai/types/__pycache__/upload_complete_params.cpython-310.pyc,,
269
+ openai/types/__pycache__/upload_create_params.cpython-310.pyc,,
270
+ openai/types/__pycache__/websocket_connection_options.cpython-310.pyc,,
271
+ openai/types/audio/__init__.py,sha256=sR9_rMb-gO0stG4ozTq6XJs714C_BfjB3KCgFvyhXVA,1050
272
+ openai/types/audio/__pycache__/__init__.cpython-310.pyc,,
273
+ openai/types/audio/__pycache__/speech_create_params.cpython-310.pyc,,
274
+ openai/types/audio/__pycache__/speech_model.cpython-310.pyc,,
275
+ openai/types/audio/__pycache__/transcription.cpython-310.pyc,,
276
+ openai/types/audio/__pycache__/transcription_create_params.cpython-310.pyc,,
277
+ openai/types/audio/__pycache__/transcription_create_response.cpython-310.pyc,,
278
+ openai/types/audio/__pycache__/transcription_segment.cpython-310.pyc,,
279
+ openai/types/audio/__pycache__/transcription_verbose.cpython-310.pyc,,
280
+ openai/types/audio/__pycache__/transcription_word.cpython-310.pyc,,
281
+ openai/types/audio/__pycache__/translation.cpython-310.pyc,,
282
+ openai/types/audio/__pycache__/translation_create_params.cpython-310.pyc,,
283
+ openai/types/audio/__pycache__/translation_create_response.cpython-310.pyc,,
284
+ openai/types/audio/__pycache__/translation_verbose.cpython-310.pyc,,
285
+ openai/types/audio/speech_create_params.py,sha256=-iUZ3a-BGlg46IFsP_vcJBTRuK_pXruF0KJsbNn0mgU,1300
286
+ openai/types/audio/speech_model.py,sha256=RUimvc__LYAxwEEmfrf-lj18O3EWrU1OlWZXEXN2AKY,218
287
+ openai/types/audio/transcription.py,sha256=FP9QMwwwdqgvP3xY9P-40gBiFmMwFKxXM5yv5x8xPVk,230
288
+ openai/types/audio/transcription_create_params.py,sha256=OP8fXaYYsi5HWi0E7HR5HIRihglsuBqeJWglxkNxLts,2264
289
+ openai/types/audio/transcription_create_response.py,sha256=-PLGH8he9EdJtvBXV-ZrE31CLVnk4bc0VQ1ixRoN8Ck,378
290
+ openai/types/audio/transcription_segment.py,sha256=-pPAGolwIIXUBMic-H5U7aR0u_Aq-pipSA4xTtn_viA,1153
291
+ openai/types/audio/transcription_verbose.py,sha256=tlVK8JzyvkslQOvpAb19PmsfiRBqmbne0l-GqFmVIMU,758
292
+ openai/types/audio/transcription_word.py,sha256=sNDdtjoqIiba6qKsD_lI2Ffs1Lr7qP9HyS59AFh5cTc,368
293
+ openai/types/audio/translation.py,sha256=5l-Zk9Cg7AZti-TTn2-4ydsoZj2zdvDwyzzVjVp9W0g,194
294
+ openai/types/audio/translation_create_params.py,sha256=lFQEh5IRG5XT-Z3TV7FDSNbIRqAt6yA3EsSvSsb0wsU,1585
295
+ openai/types/audio/translation_create_response.py,sha256=x6H0yjTbZR3vd3d7LdABcn9nrMDNdeMjepcjW1oUfVc,362
296
+ openai/types/audio/translation_verbose.py,sha256=ic6h7_fAKlyrJuCgbd4Vtr0pk9OnynQK_uobD9lAGZo,613
297
+ openai/types/audio_model.py,sha256=pxBVwf1HGd6mW-_jd-TDVMRZtTvvCUn_rL8Pt1BXzuo,208
298
+ openai/types/audio_response_format.py,sha256=EEItnQdwXinG8bOe1We2039Z7lp2Z8wSXXvTlFlkXzM,259
299
+ openai/types/batch.py,sha256=Dq7btfgIT4b2yfh0knZTzAL4yFx_l95H5KLfDPO8iig,2788
300
+ openai/types/batch_create_params.py,sha256=VXpg3mK2xwsUAIbYcFHFgRgLMrN3iBgW8l5rslk0gvQ,1441
301
+ openai/types/batch_error.py,sha256=Xxl-gYm0jerpYyI-mKSSVxRMQRubkoLUiOP9U3v72EM,622
302
+ openai/types/batch_list_params.py,sha256=X1_sfRspuIMSDyXWVh0YnJ9vJLeOOH66TrvgEHueC84,705
303
+ openai/types/batch_request_counts.py,sha256=GHHrJKdJwJ3foBa1j9v5Vece_zzkdXXXgOcne8W1E30,409
304
+ openai/types/beta/__init__.py,sha256=CbOOxDPXvdK5RInCcEiBihJ2XgaUhdm3NMBBwx90OHc,3462
305
+ openai/types/beta/__pycache__/__init__.cpython-310.pyc,,
306
+ openai/types/beta/__pycache__/assistant.cpython-310.pyc,,
307
+ openai/types/beta/__pycache__/assistant_create_params.cpython-310.pyc,,
308
+ openai/types/beta/__pycache__/assistant_deleted.cpython-310.pyc,,
309
+ openai/types/beta/__pycache__/assistant_list_params.cpython-310.pyc,,
310
+ openai/types/beta/__pycache__/assistant_response_format_option.cpython-310.pyc,,
311
+ openai/types/beta/__pycache__/assistant_response_format_option_param.cpython-310.pyc,,
312
+ openai/types/beta/__pycache__/assistant_stream_event.cpython-310.pyc,,
313
+ openai/types/beta/__pycache__/assistant_tool.cpython-310.pyc,,
314
+ openai/types/beta/__pycache__/assistant_tool_choice.cpython-310.pyc,,
315
+ openai/types/beta/__pycache__/assistant_tool_choice_function.cpython-310.pyc,,
316
+ openai/types/beta/__pycache__/assistant_tool_choice_function_param.cpython-310.pyc,,
317
+ openai/types/beta/__pycache__/assistant_tool_choice_option.cpython-310.pyc,,
318
+ openai/types/beta/__pycache__/assistant_tool_choice_option_param.cpython-310.pyc,,
319
+ openai/types/beta/__pycache__/assistant_tool_choice_param.cpython-310.pyc,,
320
+ openai/types/beta/__pycache__/assistant_tool_param.cpython-310.pyc,,
321
+ openai/types/beta/__pycache__/assistant_update_params.cpython-310.pyc,,
322
+ openai/types/beta/__pycache__/auto_file_chunking_strategy_param.cpython-310.pyc,,
323
+ openai/types/beta/__pycache__/code_interpreter_tool.cpython-310.pyc,,
324
+ openai/types/beta/__pycache__/code_interpreter_tool_param.cpython-310.pyc,,
325
+ openai/types/beta/__pycache__/file_chunking_strategy.cpython-310.pyc,,
326
+ openai/types/beta/__pycache__/file_chunking_strategy_param.cpython-310.pyc,,
327
+ openai/types/beta/__pycache__/file_search_tool.cpython-310.pyc,,
328
+ openai/types/beta/__pycache__/file_search_tool_param.cpython-310.pyc,,
329
+ openai/types/beta/__pycache__/function_tool.cpython-310.pyc,,
330
+ openai/types/beta/__pycache__/function_tool_param.cpython-310.pyc,,
331
+ openai/types/beta/__pycache__/other_file_chunking_strategy_object.cpython-310.pyc,,
332
+ openai/types/beta/__pycache__/static_file_chunking_strategy.cpython-310.pyc,,
333
+ openai/types/beta/__pycache__/static_file_chunking_strategy_object.cpython-310.pyc,,
334
+ openai/types/beta/__pycache__/static_file_chunking_strategy_param.cpython-310.pyc,,
335
+ openai/types/beta/__pycache__/thread.cpython-310.pyc,,
336
+ openai/types/beta/__pycache__/thread_create_and_run_params.cpython-310.pyc,,
337
+ openai/types/beta/__pycache__/thread_create_params.cpython-310.pyc,,
338
+ openai/types/beta/__pycache__/thread_deleted.cpython-310.pyc,,
+ openai/types/beta/__pycache__/thread_update_params.cpython-310.pyc,,
+ openai/types/beta/__pycache__/vector_store.cpython-310.pyc,,
+ openai/types/beta/__pycache__/vector_store_create_params.cpython-310.pyc,,
+ openai/types/beta/__pycache__/vector_store_deleted.cpython-310.pyc,,
+ openai/types/beta/__pycache__/vector_store_list_params.cpython-310.pyc,,
+ openai/types/beta/__pycache__/vector_store_update_params.cpython-310.pyc,,
+ openai/types/beta/assistant.py,sha256=3w8FpWceagZoKuEQrGeitoosTrz-Z24IPiL-viWC4I4,4936
+ openai/types/beta/assistant_create_params.py,sha256=Y5LoiGU9ZTWQ87KaYyrqN1TsMFT4iYsBvMNeDgciRd4,5986
+ openai/types/beta/assistant_deleted.py,sha256=bTTUl5FPHTBI5nRm7d0sGuR9VCSBDZ-IbOn9G_IpmJQ,301
+ openai/types/beta/assistant_list_params.py,sha256=yW-lj6AUkG0IRZQKre0veEr9p4VMN-9YdELFMYs74Cw,1222
+ openai/types/beta/assistant_response_format_option.py,sha256=yNeoAWxM-_8Sjmwqu8exqyKRFhVZIKeTypetPY55VFA,561
+ openai/types/beta/assistant_response_format_option_param.py,sha256=dyPMhwRSLBZ0ltpxiD7KM-9X6BzWnbGeG-nT_3SenuQ,628
+ openai/types/beta/assistant_stream_event.py,sha256=vP4LDqYWzSKGcZ1JAfyNw7YqC__XsVPe0nqZ2qdn93E,6930
+ openai/types/beta/assistant_tool.py,sha256=_0FC7Db4Ctq_0yLaKJ93zNTB5HthuJWEAHx3fadDRlw,506
+ openai/types/beta/assistant_tool_choice.py,sha256=Hy4HIfPQCkWD8VruHHicuTkomNwljGHviQHk36prKhg,544
+ openai/types/beta/assistant_tool_choice_function.py,sha256=aYMlVrZdX2JxmehDlyGALRK2PIEkO7VFEfsvY3VH6T4,270
+ openai/types/beta/assistant_tool_choice_function_param.py,sha256=-O38277LhSaqOVhTp0haHP0ZnVTLpEBvcLJa5MRo7wE,355
+ openai/types/beta/assistant_tool_choice_option.py,sha256=jrXMd_IYIQ1pt8Lkc-KrPd4CR3lR8sFV4m7_lpG8A4Y,362
+ openai/types/beta/assistant_tool_choice_option_param.py,sha256=VcatO5Nej9e5eqfrwetG4uM1vFoewnBEcFz47IxAK2E,424
+ openai/types/beta/assistant_tool_choice_param.py,sha256=NOWx9SzZEwYaHeAyFZTQlG3pmogMNXzjPJDGQUlbv7Q,572
+ openai/types/beta/assistant_tool_param.py,sha256=6DcaU3nMjurur2VkVIYcCaRAY1QLQscXXjCd0ZHHGho,501
+ openai/types/beta/assistant_update_params.py,sha256=XsLdjYNx7dbPr1aqDu0_ZGuXjgU0JVuM0waJo1NskyI,4684
+ openai/types/beta/auto_file_chunking_strategy_param.py,sha256=hbBtARkJXSJE7_4RqC-ZR3NiztUp9S4WuG3s3W0GpqY,351
+ openai/types/beta/chat/__init__.py,sha256=OKfJYcKb4NObdiRObqJV_dOyDQ8feXekDUge2o_4pXQ,122
+ openai/types/beta/chat/__pycache__/__init__.cpython-310.pyc,,
+ openai/types/beta/code_interpreter_tool.py,sha256=7mgQc9OtD_ZUnZeNhoobMFcmmvtZPFCNYGB-PEnNnfs,333
+ openai/types/beta/code_interpreter_tool_param.py,sha256=X6mwzFyZx1RCKEYbBCPs4kh_tZkxFxydPMK4yFNJkLs,389
+ openai/types/beta/file_chunking_strategy.py,sha256=6nRvYetBl_BHgN8biTyTut-tw8G13YttgxSKtJsJLeM,560
+ openai/types/beta/file_chunking_strategy_param.py,sha256=P0x4I2hB_ylbSxFFEmRqgwto3HQQsHIokX3U0is_a9s,498
+ openai/types/beta/file_search_tool.py,sha256=5aNU8RZj-UNdmuqqpjCXNaa1pI9GzSP5qCPtvVSJ1oQ,1769
+ openai/types/beta/file_search_tool_param.py,sha256=o6sWPrzRYY8wtNaVuF8h3D1sAQV3N0L3dbdiiaMisW0,1765
+ openai/types/beta/function_tool.py,sha256=oYGJfcfPpUohKw2ikgshDjOI1HXCK-5pAWyegYNezeU,397
+ openai/types/beta/function_tool_param.py,sha256=hCclpGO4Re-TxiGy_QxX75g1kcN6_ElubicO6SdJ_YI,471
+ openai/types/beta/other_file_chunking_strategy_object.py,sha256=hJz1OeSkvvcWJVftPfvz2pB5ujdawWEEa3v38E6tt7g,311
+ openai/types/beta/realtime/__init__.py,sha256=OJOsvJMLlDqJEJClien1XwN8K6vhnyVtNgN1qolZeW0,6167
+ openai/types/beta/realtime/__pycache__/__init__.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_created_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_content.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_content_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_create_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_create_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_created_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_delete_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_delete_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_deleted_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_input_audio_transcription_completed_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_input_audio_transcription_failed_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_truncate_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_truncate_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/conversation_item_truncated_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/error_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_append_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_append_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_clear_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_clear_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_cleared_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_commit_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_commit_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_committed_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_speech_started_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/input_audio_buffer_speech_stopped_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/rate_limits_updated_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_client_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_client_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_connect_params.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_response.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_response_status.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_response_usage.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/realtime_server_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_audio_delta_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_audio_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_audio_transcript_delta_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_audio_transcript_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_cancel_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_cancel_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_content_part_added_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_content_part_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_create_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_create_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_created_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_function_call_arguments_delta_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_function_call_arguments_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_output_item_added_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_output_item_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_text_delta_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/response_text_done_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_create_params.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_create_response.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_created_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_update_event.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_update_event_param.cpython-310.pyc,,
+ openai/types/beta/realtime/__pycache__/session_updated_event.cpython-310.pyc,,
+ openai/types/beta/realtime/conversation_created_event.py,sha256=U4-nesN8rAep2_25E2DrkXUMafQejj3NE_0llXKj5Y8,752
+ openai/types/beta/realtime/conversation_item.py,sha256=av6WCjWVuRxBjccmxv4j26cd3TCKURj2a7cf8uS3P3s,2297
+ openai/types/beta/realtime/conversation_item_content.py,sha256=dj0XAEPqj4UPVb3E2nIgb8bZBA-PRNK-E7o3des6wmw,1005
+ openai/types/beta/realtime/conversation_item_content_param.py,sha256=CKEwY9j6ApnvfsLKrdkEFfOW1CtxUWyY9OL-rIMUNaw,927
+ openai/types/beta/realtime/conversation_item_create_event.py,sha256=PNdOLjWMB2uc0tCm7QdWANXt7FWqKpgocnej2OiEjxw,976
+ openai/types/beta/realtime/conversation_item_create_event_param.py,sha256=L9e8U-3LITXlBuJ_FQfGhSDX3Jj7R3uWN1UiG7qDTec,996
+ openai/types/beta/realtime/conversation_item_created_event.py,sha256=DIeG7YQ5HdKrnbnorklB1Zfsz42yRdPKDOx5TPzfvw0,722
+ openai/types/beta/realtime/conversation_item_delete_event.py,sha256=p-O6R1Ku5pxZvaxhSi4YTPqLXS1SHhdLGgJuPQyPcHY,549
+ openai/types/beta/realtime/conversation_item_delete_event_param.py,sha256=a17h8Hd8MxUbXT6NQg8YpTr1ICt1ztRecpfukHw4g34,569
+ openai/types/beta/realtime/conversation_item_deleted_event.py,sha256=uWHSqX5ig550romSdhtROwrdQmdeN31Oz1Vpr9IuQFI,492
+ openai/types/beta/realtime/conversation_item_input_audio_transcription_completed_event.py,sha256=7tX1hI3g0SbrXGHcaC_Y1xAzhsoziReYwlqyA8ycB3E,764
+ openai/types/beta/realtime/conversation_item_input_audio_transcription_failed_event.py,sha256=xYNSBIyERQJ4P-5YoFF1VptfPa8JnJ0sWaH6LGsPow0,1077
+ openai/types/beta/realtime/conversation_item_param.py,sha256=x12A5-yjNWodFNJEnbHKY1WJzSzX9s7EQr2c5FuYKBQ,2177
+ openai/types/beta/realtime/conversation_item_truncate_event.py,sha256=1c2_BamaTkgD26eyGZJU5xwbz7lRHupqU2HqcK0VniI,943
+ openai/types/beta/realtime/conversation_item_truncate_event_param.py,sha256=hSnVOSMMtLf16nn4ISHkevYCfEsiN9kNcgxXRtHa8Kc,983
+ openai/types/beta/realtime/conversation_item_truncated_event.py,sha256=K4S35U85J-UNRba9nkm-7G1ReZu8gA8Sa1z0-Vlozc0,704
+ openai/types/beta/realtime/error_event.py,sha256=goNkorKXUHKiYVsVunEsnaRa6_3dsDKVtrxXQtzZCmk,877
+ openai/types/beta/realtime/input_audio_buffer_append_event.py,sha256=lTKWd_WFbtDAy6AdaCjeQYBV0dgHuVNNt_PbrtPB8tg,662
+ openai/types/beta/realtime/input_audio_buffer_append_event_param.py,sha256=XmN2bE6jBRrkKGVPJdnPjJql5dqMPqwbmFnxo-z22JE,682
+ openai/types/beta/realtime/input_audio_buffer_clear_event.py,sha256=7AfCQfMxZQ-UoQXF9edYKw5GcTELPcfvvJWWpuLS41c,489
+ openai/types/beta/realtime/input_audio_buffer_clear_event_param.py,sha256=y-zfWqJsh1n6r2i0MgLDpnNC4g1dq3GCS66Twfkng38,499
+ openai/types/beta/realtime/input_audio_buffer_cleared_event.py,sha256=j9gpm7aGVmrUt48wqtvBMN8NOgtvqHciegjXjOnWm7A,429
+ openai/types/beta/realtime/input_audio_buffer_commit_event.py,sha256=SLZR2xxRd6uO3IQL6-LuozkjROXiGyblKoHYQjwXk4I,493
+ openai/types/beta/realtime/input_audio_buffer_commit_event_param.py,sha256=B8agXC-rUl-D-RijJ5MeTLgw43qVYzmf2_2oAVokhLY,503
+ openai/types/beta/realtime/input_audio_buffer_committed_event.py,sha256=wXMxuXLw1jmT4e-FmTp6rSxcSc_4l55zO3gT7jI1Mp4,628
+ openai/types/beta/realtime/input_audio_buffer_speech_started_event.py,sha256=NVp60RUsLFtte9Ilknmu_5lRk2dZp_1fXCgGHd4EvSM,861
+ openai/types/beta/realtime/input_audio_buffer_speech_stopped_event.py,sha256=gszRuYQtAW8upIhd7CJZ7pxboDk-K7sqidjqxgf47q4,779
+ openai/types/beta/realtime/rate_limits_updated_event.py,sha256=kBnf_p-49Q_LNdJsj0R1Szi8R4TGYAAJ_KifLuuyFZw,949
+ openai/types/beta/realtime/realtime_client_event.py,sha256=TD_qJi1hNgvurWTUzG-xb27thuvUT2-2AK_pouAY3vc,1249
+ openai/types/beta/realtime/realtime_client_event_param.py,sha256=qNStVbW_imzF0F8qfEHHE07AZoPIQLvjcTw9mXu4mFY,1294
+ openai/types/beta/realtime/realtime_connect_params.py,sha256=AvTypkFCYmDn9qMeektVqij6cqzgovr3PpgpMalJoJ4,290
+ openai/types/beta/realtime/realtime_response.py,sha256=C-3ZTF_gy40eT1eaeWIfpBS3pQC5lv3XNM_mqiLtTWg,1505
+ openai/types/beta/realtime/realtime_response_status.py,sha256=gU-59Pr_58TRfMZqFzdCloc53e1qOnU4aaHY3yURUK8,1326
+ openai/types/beta/realtime/realtime_response_usage.py,sha256=6XOFjCjPWioHoICZ0Q8KXuUzktQugx6WuTz0O5UvzZg,1541
+ openai/types/beta/realtime/realtime_server_event.py,sha256=j8s9jdl5cARv3fVM5jEjo04f83FmNELPRS_lq5Ao_Q0,3512
+ openai/types/beta/realtime/response_audio_delta_event.py,sha256=UjbnK4u_WSNTOColZj8SmJgHnAc2H8iRXD76ZnPbz7E,742
+ openai/types/beta/realtime/response_audio_done_event.py,sha256=1XEWBPh1JiOgyr6V03mRt_3sLm0YFUq5ft1AhfFlNEg,679
+ openai/types/beta/realtime/response_audio_transcript_delta_event.py,sha256=HEVNQ_R2_Nyo6BvNvsliMnN__b17eVd2Jx5udRHg0Hg,773
+ openai/types/beta/realtime/response_audio_transcript_done_event.py,sha256=Cn5l4mJnKK3LeSN9qFL4LLqs1WOWg4kt1SaYThB-5c0,787
+ openai/types/beta/realtime/response_cancel_event.py,sha256=EKx8IZUISJHdl-_3tCdHtz2BINQ85Tq_ocadnsEGPSk,637
+ openai/types/beta/realtime/response_cancel_event_param.py,sha256=nidzBL83liHwyImiNGiz9Ad0V34EtFAQDw1utqcF6ns,630
+ openai/types/beta/realtime/response_content_part_added_event.py,sha256=a8-rm1NAwX685fk7GdT6Xi0Yr-JfeAkyUr94-RoFe34,1232
+ openai/types/beta/realtime/response_content_part_done_event.py,sha256=jO2TZygxPabbnEG9E1AfNP-JYJv1QtCMnCzgcZ_3n18,1190
+ openai/types/beta/realtime/response_create_event.py,sha256=rMqjpCY5C6-new7fmlnciUriLZv3GzgJdgRPWlaX58k,4493
+ openai/types/beta/realtime/response_create_event_param.py,sha256=zgKTR7n_0nPkxWPG0Og5DzlUZegYbIne1SvIsdW9WMU,4338
+ openai/types/beta/realtime/response_created_event.py,sha256=zZtHx-1YjehXxX6aNE88SFINDaKOBzpzejo6sTNjq9g,506
+ openai/types/beta/realtime/response_done_event.py,sha256=_yUPoECCli89iHLtV3NQkXQOW6Lc1JlxVPFw04ziBGY,494
+ openai/types/beta/realtime/response_function_call_arguments_delta_event.py,sha256=Yh2mQZDucfnTLiO8LRyG9r7zeS1sjwLcMF1JPMdTFJc,793
+ openai/types/beta/realtime/response_function_call_arguments_done_event.py,sha256=kxSPK6nbNWL6pxveY7zaNGgCkCXqyBFJPVYJrw9cbOw,793
+ openai/types/beta/realtime/response_output_item_added_event.py,sha256=-_BZjvAqcgv3NIz-EMhvYMxIwvcXTt68FVNp0pw09dI,713
+ openai/types/beta/realtime/response_output_item_done_event.py,sha256=0ClNVMZmeIxKghlEid9VGoWiZ97wp00hIdNnev4qBD8,709
+ openai/types/beta/realtime/response_text_delta_event.py,sha256=B1yyuc6iMOMoG5Wh6W5KoQNYtVD1vEm2cKqHnl2CuFQ,721
+ openai/types/beta/realtime/response_text_done_event.py,sha256=mPgVG6nWxwkZ3aZOX-JkVF7CpaWP5-bvtbxFrr4fK7g,724
+ openai/types/beta/realtime/session.py,sha256=OqdK0L7ugOYV0PT2XlixRERimneHIF7-oHUh1JWtK70,5388
+ openai/types/beta/realtime/session_create_params.py,sha256=ALVf2hYtcdZjl2A5LJyjiyPfqSFEAeTIVkELbeSaH-g,5161
+ openai/types/beta/realtime/session_create_response.py,sha256=iyovJfORab-aDJJKE8PN--VQeCBR3VlnyV1tf4qE-K0,5411
+ openai/types/beta/realtime/session_created_event.py,sha256=rTElnBlE7z1htmkdmpdPN4q_dUYS6Su4BkmsqO65hUc,489
+ openai/types/beta/realtime/session_update_event.py,sha256=VwRvNgu-otI5_0xnXso1gqlCEWFnqzrGq9-Kar_o71Q,5751
+ openai/types/beta/realtime/session_update_event_param.py,sha256=NmDaFhVTohrHi-yRd1x883NUjGH3N6ZWyfpfJ0tEpTQ,5573
+ openai/types/beta/realtime/session_updated_event.py,sha256=HyR-Pz3U9finVO-bUCvnmeqsANw-fceNvVqEIF6ey10,489
+ openai/types/beta/static_file_chunking_strategy.py,sha256=nHaLv70q1rencY2u8mqS7mW7X7enzHrc-zM9mg22dHw,597
+ openai/types/beta/static_file_chunking_strategy_object.py,sha256=aOPxudte299F0j3bzniXcKJ7j-w4ZfQpgFHTa3CFyZ8,425
+ openai/types/beta/static_file_chunking_strategy_param.py,sha256=kCMmgyOxO0XIF2wjCWjUXtyn9S6q_7mNmyUCauqrjsg,692
+ openai/types/beta/thread.py,sha256=9wxx6M26S7cilx5SKWjZnkHc7g222AIOhikd0WTJfwI,2014
+ openai/types/beta/thread_create_and_run_params.py,sha256=NHkj-IMm2WEqH82i9zxqgJqYkOVCBVXSpZcpl-SVznY,13175
+ openai/types/beta/thread_create_params.py,sha256=U0gNXfSltPqYF3GIGQ7dloolkz6nzuDimXF-V9wjzvo,4970
+ openai/types/beta/thread_deleted.py,sha256=MaYG_jZIjSiB9h_ZBiTtpMsRSwFKkCY83ziM5GO_oUk,292
+ openai/types/beta/thread_update_params.py,sha256=olIjwn1eD0H2AkjdDZC38lPdT5dp2ORSjavPA7pB_08,1751
+ openai/types/beta/threads/__init__.py,sha256=0WsJo0tXp08CgayozR7Tqc3b8sqzotWzvBun19CEIWc,3066
+ openai/types/beta/threads/__pycache__/__init__.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/annotation.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/annotation_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/file_citation_annotation.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/file_citation_delta_annotation.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/file_path_annotation.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/file_path_delta_annotation.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file_content_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file_content_block_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file_delta_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_file_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url_content_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url_content_block_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url_delta_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/image_url_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_content.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_content_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_content_part_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_create_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_deleted.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_delta_event.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_list_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/message_update_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/refusal_content_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/refusal_delta_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/required_action_function_tool_call.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run_create_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run_list_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run_status.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run_submit_tool_outputs_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/run_update_params.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/text.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/text_content_block.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/text_content_block_param.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/text_delta.cpython-310.pyc,,
+ openai/types/beta/threads/__pycache__/text_delta_block.cpython-310.pyc,,
+ openai/types/beta/threads/annotation.py,sha256=Ce3Y0mSodmYRkoqyhtyIdep6WfWew6KJJgtrENOnfek,462
+ openai/types/beta/threads/annotation_delta.py,sha256=iNsE-1Gn1yU0TlTHoxqKbOvPRUxWuXsF72qY_mMnWGY,510
+ openai/types/beta/threads/file_citation_annotation.py,sha256=0Rs1Sr-eCLQpLsu8-WwHG7kv5Ihud4kiHO1NL7xHO0s,595
+ openai/types/beta/threads/file_citation_delta_annotation.py,sha256=R87tcXkJ0RiH5UJo0Qknwk7X_c4qF1qvGsu2spOPx-I,873
+ openai/types/beta/threads/file_path_annotation.py,sha256=hNc4ebprJynqMG1yk0gLvgzTpjtVzgEbXriMZftkgew,552
+ openai/types/beta/threads/file_path_delta_annotation.py,sha256=RW9dgDF9Ggf357fPZ-vUu2ge3U-Hf11DVTr-ecklsBY,755
+ openai/types/beta/threads/image_file.py,sha256=QVXLiplb-CigZqdMZtXlmebXKt6tF74kI-3vHxe_qUE,707
+ openai/types/beta/threads/image_file_content_block.py,sha256=31I5trSERP2qLZpJ4ugZtIyta4DDoBhBvxkM4LovL3w,363
+ openai/types/beta/threads/image_file_content_block_param.py,sha256=3ryZ6AV-DLwWYVP2XSK11UHkvutTUollxn6z8BZ4rSA,445
+ openai/types/beta/threads/image_file_delta.py,sha256=nUJoSuP-3YyqqwBsmPJ0AqiQydz2FymVDCXQVkNYwOk,734
+ openai/types/beta/threads/image_file_delta_block.py,sha256=XJ2YVX_cq0OiNcGbNmXO0_dca1IvPockOvvoM7pDvbI,492
+ openai/types/beta/threads/image_file_param.py,sha256=BaKD31JPxQ5CjRfZ_0RcOG3lDTZeW_k85XCvwyctD54,717
+ openai/types/beta/threads/image_url.py,sha256=EzEK-CYoO0YyqFmejIPu7pMfTEgMmp5NFscsRd2pCos,592
+ openai/types/beta/threads/image_url_content_block.py,sha256=_sg3BWrtVGw-8XtAh15Rs4co6NCBB9Y3zCp_XOAz4U8,365
+ openai/types/beta/threads/image_url_content_block_param.py,sha256=RWzo5KkBiwvgJSviZl6JUlsfv3VQKIFr6cp9lhkLu8E,447
+ openai/types/beta/threads/image_url_delta.py,sha256=MXCp-OmuNT4njbWA9DWAbocP7pD3VpdcUy2wgeOjwm4,582
+ openai/types/beta/threads/image_url_delta_block.py,sha256=Jjdfub4g9ceNKF8GuuTIghOmYba2vEeX3320mg5PWIA,484
+ openai/types/beta/threads/image_url_param.py,sha256=VRLaxZf-wxnvAOcKGwyF_o6KEvwktBfE3B6KmYE5LZo,602
+ openai/types/beta/threads/message.py,sha256=aGWe0kiNv5sXUYheJ0o1KpTds4oTaeDmqot1PMStJCE,3295
+ openai/types/beta/threads/message_content.py,sha256=b8IC_EG28hcXk28z09EABfJwPkYZ7U-lTp_9ykdoxvU,630
+ openai/types/beta/threads/message_content_delta.py,sha256=o4Edlx9BtdH2Z4OMwGWWXex8wiijknNRihJ-wu8PDUQ,615
+ openai/types/beta/threads/message_content_part_param.py,sha256=RXrnoDP2-UMQHoR2jJvaT3JHrCeffLi6WzXzH05cDGI,550
+ openai/types/beta/threads/message_create_params.py,sha256=WYfc_-kc7lxcxdpwKCVT2Ei-5Jl_132uqOHMtXL92OE,1957
+ openai/types/beta/threads/message_deleted.py,sha256=DNnrSfGZ3kWEazmo4mVTdLhiKlIHxs-D8Ef5sNdHY1o,303
+ openai/types/beta/threads/message_delta.py,sha256=-kaRyvnIA8Yr2QV5jKRn15BU2Ni068a_WtWJ4PqlLfE,570
+ openai/types/beta/threads/message_delta_event.py,sha256=7SpE4Dd3Lrc_cm97SzBwZzGGhfLqiFViDeTRQz-5YmQ,579
+ openai/types/beta/threads/message_list_params.py,sha256=iuwzDccnViooUxHlq-WoE1FEJArNy5-zrYCoaNgVS8k,1296
+ openai/types/beta/threads/message_update_params.py,sha256=jTM_WDKDuPVJKlNKlT6J_UqQjgM2vrrD03ZhvHI5bSY,630
+ openai/types/beta/threads/refusal_content_block.py,sha256=qB9jrS2Wv9UQ7XXaIVKe62dTAU1WOnN3qenR_E43mhg,310
+ openai/types/beta/threads/refusal_delta_block.py,sha256=ZhgFC8KqA9LIwo_CQIX-w3VVg3Vj0h71xC1Hh1bwmnU,423
+ openai/types/beta/threads/required_action_function_tool_call.py,sha256=XsR4OBbxI-RWteLvhcLEDBan6eUUGvhLORFRKjPbsLg,888
+ openai/types/beta/threads/run.py,sha256=GR469hvbAlWTHL17MieCYxQfASyxaY1ZOe6Qbf0ORMI,8218
+ openai/types/beta/threads/run_create_params.py,sha256=KgltVibs_KnKsL3UaZyVJgb-6aUxct7CXUtqMdkTXTM,9670
+ openai/types/beta/threads/run_list_params.py,sha256=TgepSLrupUUtuQV2kbVcoGH1YA0FVUX9ESkszKuwyHY,1210
+ openai/types/beta/threads/run_status.py,sha256=OU1hzoyYXaRJ3lupX4YcZ-HZkTpctNE4tzAcp6X8Q9U,351
+ openai/types/beta/threads/run_submit_tool_outputs_params.py,sha256=cKiyD374BsZN_Oih5o5n5gOf_DYsxErVrbgxveNhmPI,1643
+ openai/types/beta/threads/run_update_params.py,sha256=EDYJO3YuH1IKjfR1xAaBtWFonNnyXJDYAnlaMnwyXo8,622
+ openai/types/beta/threads/runs/__init__.py,sha256=mg_roY9yL1bClJ8isizkQgHOAkN17iSdVr2m65iyBrs,1653
+ openai/types/beta/threads/runs/__pycache__/__init__.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/code_interpreter_logs.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/code_interpreter_output_image.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/code_interpreter_tool_call.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/code_interpreter_tool_call_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/file_search_tool_call.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/file_search_tool_call_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/function_tool_call.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/function_tool_call_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/message_creation_step_details.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/run_step.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/run_step_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/run_step_delta_event.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/run_step_delta_message_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/run_step_include.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/step_list_params.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/step_retrieve_params.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/tool_call.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/tool_call_delta.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/tool_call_delta_object.cpython-310.pyc,,
+ openai/types/beta/threads/runs/__pycache__/tool_calls_step_details.cpython-310.pyc,,
+ openai/types/beta/threads/runs/code_interpreter_logs.py,sha256=7wXZpUE9I-oZJ0K3mFG0Nwmfm2bKGiSpWJyBeo7txwo,482
+ openai/types/beta/threads/runs/code_interpreter_output_image.py,sha256=8o99k0ZHMHpqH0taXkOkYR9WaDUpCN-G0Ifd5XsJpb8,613
+ openai/types/beta/threads/runs/code_interpreter_tool_call.py,sha256=ekiIuH1kVCN51hCzY3AYr5i3_a4vlgUiZHJ59pl17oY,1810
+ openai/types/beta/threads/runs/code_interpreter_tool_call_delta.py,sha256=Qr2cen-bKyXTW2NDEUHnmJRE0jY-nkLcnO4NzCbBPDo,1479
+ openai/types/beta/threads/runs/file_search_tool_call.py,sha256=XBgsM_USVr3ZrwTZx4L1-YG94Qv8c8GXI19ZHtDrZq8,1897
+ openai/types/beta/threads/runs/file_search_tool_call_delta.py,sha256=Gx8c7GSgGYuOvGadcAr3ZIspEFMZS3e2OY7vBo_MYnM,655
+ openai/types/beta/threads/runs/function_tool_call.py,sha256=aOq5yOtKOi6C5Q1FIQRxqtJJR1AcSW_K5PvRiKISNCI,920
+ openai/types/beta/threads/runs/function_tool_call_delta.py,sha256=VFRtCJkj4PHX97upM1cXpJAk9-JvJSgyngie06fBIjQ,1076
+ openai/types/beta/threads/runs/message_creation_step_details.py,sha256=tRFMNF2Rf4DekVliUKkoujItiOjjAE9EG9bbxJvpVPA,506
+ openai/types/beta/threads/runs/run_step.py,sha256=L_CiwlW9y7NEOTumv1RyoQrQ_oCaNowRmraUHiAgJEc,3469
+ openai/types/beta/threads/runs/run_step_delta.py,sha256=FNYDTddRrTO3PT_fgi7AsJ1PeMtyWsVzcxoihjbBzAw,663
+ openai/types/beta/threads/runs/run_step_delta_event.py,sha256=rkDyvHSXt-hc1LngB41f9vglkn6t03kS62bsn0iGaxU,585
+ openai/types/beta/threads/runs/run_step_delta_message_delta.py,sha256=UIo6oPH8STLjPHiWL-A4CtKfYe49uptvIAHWNnZ3Ums,564
+ openai/types/beta/threads/runs/run_step_include.py,sha256=u-9Cw1hruRiWr70f_hw4XG0w1cwOAYfRJYKva2dEacs,264
+ openai/types/beta/threads/runs/step_list_params.py,sha256=zorF5juogCzLMsZLjzMZTs_iIBcPj9WUur5HcrXuH8M,1752
+ openai/types/beta/threads/runs/step_retrieve_params.py,sha256=aJ7l8RDJLPyEmqjfO4XsTV54VZOOqyb_gKSUvqp33ZI,815
+ openai/types/beta/threads/runs/tool_call.py,sha256=1rwq4IbLgjQAQ-ORXYkNpmJyi9SREDnqA57nJbj_NiU,537
+ openai/types/beta/threads/runs/tool_call_delta.py,sha256=t5wF8ndW3z99lHF981FL-IN5xXBS9p7eonH9bxvKu_c,600
+ openai/types/beta/threads/runs/tool_call_delta_object.py,sha256=eK20VsIswEyT48XbkGu60HUrE7OD3fhpn1fbXrVauM4,615
+ openai/types/beta/threads/runs/tool_calls_step_details.py,sha256=bDa-yybVF3a8H6VqhDGmFZMkpn-0gtPQM2jWWsmUvYo,574
+ openai/types/beta/threads/text.py,sha256=9gjmDCqoptnxQ8Jhym87pECyd6m1lB3daCxKNzSFp4Y,319
+ openai/types/beta/threads/text_content_block.py,sha256=pdGlKYM1IF9PjTvxjxo1oDg1XeGCFdJdl0kJVpZ7jIs,319
+ openai/types/beta/threads/text_content_block_param.py,sha256=feQr0muF845tc1q3FJrzgYOhXeuKLU3x1x5DGFTN2Q0,407
+ openai/types/beta/threads/text_delta.py,sha256=2EFeQCkg_cc8nYEJ6BtYAA3_TqgMTbmEXoMvLjzaB34,389
+ openai/types/beta/threads/text_delta_block.py,sha256=pkHkVBgNsmHi9JURzs5ayPqxQXSkex3F0jH0MqJXik0,448
+ openai/types/beta/vector_store.py,sha256=R8M70uuGWVKt4t0ef__Py-MPw33Ljx4sh5ddihJMbIU,2354
+ openai/types/beta/vector_store_create_params.py,sha256=rvvYUSDBbc5L6PAiMGSFQD85ugyR9mLdvZMxjap0fnk,1600
+ openai/types/beta/vector_store_deleted.py,sha256=Yq0E1orRLShseLwZ1deiBdDEUgEw_tcYVxGYa5gbIrM,308
+ openai/types/beta/vector_store_list_params.py,sha256=KeSeQaEdqO2EiPEVtq1Nun-uRRdkfwW0P8aHeCmL5zA,1226
+ openai/types/beta/vector_store_update_params.py,sha256=6OEP1IvilrGoPhHQPXOMQA0TwmCubeo7rB_ik5GQSrY,1115
+ openai/types/beta/vector_stores/__init__.py,sha256=gXfm8V5Ad0iueaC_VoHDUQvSdwSfBzk2cQNwZldvY0s,671
+ openai/types/beta/vector_stores/__pycache__/__init__.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/file_batch_create_params.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/file_batch_list_files_params.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/file_create_params.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/file_list_params.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/vector_store_file.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/vector_store_file_batch.cpython-310.pyc,,
+ openai/types/beta/vector_stores/__pycache__/vector_store_file_deleted.cpython-310.pyc,,
+ openai/types/beta/vector_stores/file_batch_create_params.py,sha256=lV4t5kikvEhl431RZgGDyQdFKTl-zXI-Q7YnbM0Qmv8,798
+ openai/types/beta/vector_stores/file_batch_list_files_params.py,sha256=FPpQvCQI2skyLB8YCuwdCj7RbO9ba1UjaHAtvrWxAbs,1451
+ openai/types/beta/vector_stores/file_create_params.py,sha256=kwSqe-le2UaYrcXGPxlP41QhH2OGvLXBbntAGlmK288,748
+ openai/types/beta/vector_stores/file_list_params.py,sha256=AIzmNH1oFuy-qlpRhj9eXu9yyTA-2z_IppLYFclMtZw,1385
+ openai/types/beta/vector_stores/vector_store_file.py,sha256=X8aQg4jYlK7iQumxn7B-eammIKVjUbu4lapPeq9jDWo,1788
+ openai/types/beta/vector_stores/vector_store_file_batch.py,sha256=ubvj8z95EOdRGAp0rgI94g5uFQx0ob8hLgwOWHKda4E,1457
+ openai/types/beta/vector_stores/vector_store_file_deleted.py,sha256=37J7oL2WYCgOd7Rhg2jX6IavaZT63vgUf3u6LC6C3Hs,322
+ openai/types/chat/__init__.py,sha256=coi_C98uX9XhThMVJ0GgjPVpzOYOMgj-ZmCWulEE3EA,3849
653
+ openai/types/chat/__pycache__/__init__.cpython-310.pyc,,
654
+ openai/types/chat/__pycache__/chat_completion.cpython-310.pyc,,
655
+ openai/types/chat/__pycache__/chat_completion_assistant_message_param.cpython-310.pyc,,
656
+ openai/types/chat/__pycache__/chat_completion_audio.cpython-310.pyc,,
657
+ openai/types/chat/__pycache__/chat_completion_audio_param.cpython-310.pyc,,
658
+ openai/types/chat/__pycache__/chat_completion_chunk.cpython-310.pyc,,
659
+ openai/types/chat/__pycache__/chat_completion_content_part_image_param.cpython-310.pyc,,
660
+ openai/types/chat/__pycache__/chat_completion_content_part_input_audio_param.cpython-310.pyc,,
661
+ openai/types/chat/__pycache__/chat_completion_content_part_param.cpython-310.pyc,,
662
+ openai/types/chat/__pycache__/chat_completion_content_part_refusal_param.cpython-310.pyc,,
663
+ openai/types/chat/__pycache__/chat_completion_content_part_text_param.cpython-310.pyc,,
664
+ openai/types/chat/__pycache__/chat_completion_developer_message_param.cpython-310.pyc,,
665
+ openai/types/chat/__pycache__/chat_completion_function_call_option_param.cpython-310.pyc,,
666
+ openai/types/chat/__pycache__/chat_completion_function_message_param.cpython-310.pyc,,
667
+ openai/types/chat/__pycache__/chat_completion_message.cpython-310.pyc,,
668
+ openai/types/chat/__pycache__/chat_completion_message_param.cpython-310.pyc,,
669
+ openai/types/chat/__pycache__/chat_completion_message_tool_call.cpython-310.pyc,,
670
+ openai/types/chat/__pycache__/chat_completion_message_tool_call_param.cpython-310.pyc,,
671
+ openai/types/chat/__pycache__/chat_completion_modality.cpython-310.pyc,,
672
+ openai/types/chat/__pycache__/chat_completion_named_tool_choice_param.cpython-310.pyc,,
673
+ openai/types/chat/__pycache__/chat_completion_prediction_content_param.cpython-310.pyc,,
674
+ openai/types/chat/__pycache__/chat_completion_reasoning_effort.cpython-310.pyc,,
675
+ openai/types/chat/__pycache__/chat_completion_role.cpython-310.pyc,,
676
+ openai/types/chat/__pycache__/chat_completion_stream_options_param.cpython-310.pyc,,
677
+ openai/types/chat/__pycache__/chat_completion_system_message_param.cpython-310.pyc,,
678
+ openai/types/chat/__pycache__/chat_completion_token_logprob.cpython-310.pyc,,
679
+ openai/types/chat/__pycache__/chat_completion_tool_choice_option_param.cpython-310.pyc,,
680
+ openai/types/chat/__pycache__/chat_completion_tool_message_param.cpython-310.pyc,,
681
+ openai/types/chat/__pycache__/chat_completion_tool_param.cpython-310.pyc,,
682
+ openai/types/chat/__pycache__/chat_completion_user_message_param.cpython-310.pyc,,
683
+ openai/types/chat/__pycache__/completion_create_params.cpython-310.pyc,,
684
+ openai/types/chat/__pycache__/parsed_chat_completion.cpython-310.pyc,,
685
+ openai/types/chat/__pycache__/parsed_function_tool_call.cpython-310.pyc,,
686
+ openai/types/chat/chat_completion.py,sha256=MaTVOMwtbzqGyHgyP4DP41ESEDKhv_XOM8L_fx3uoQE,2689
687
+ openai/types/chat/chat_completion_assistant_message_param.py,sha256=E6ZrsjEN_JHOHO-wC7Uk90Fa7Qz7bfgx8jea0z6g30s,2421
688
+ openai/types/chat/chat_completion_audio.py,sha256=vzWeaAAAbomkvbFksXQu6qpw1RVJiuFytJZswO6h6vI,656
689
+ openai/types/chat/chat_completion_audio_param.py,sha256=MnY4PNK8-OOaODkHNhBbSbzH4HmqykKvwftsOjVpOAE,801
690
+ openai/types/chat/chat_completion_chunk.py,sha256=aQXFY4gq9YEIrr7YBM68D5XyWGT9kKo0JO8n-55IjEA,5032
691
+ openai/types/chat/chat_completion_content_part_image_param.py,sha256=Gqv98qyD8jB81THZp49c8v2tHrId_iQp4NzciT9SKI0,797
692
+ openai/types/chat/chat_completion_content_part_input_audio_param.py,sha256=r1EXNEtjJo5oJ9AnP3omaJzACE1gSfdmob5Q0HKsOm4,704
693
+ openai/types/chat/chat_completion_content_part_param.py,sha256=7lCk-fZB5iT5keHLWw9eM-Hd5jsnPh2IIHICIUpoEXk,686
694
+ openai/types/chat/chat_completion_content_part_refusal_param.py,sha256=TV1vu-IgrvKa5IBlPSIdBxUaW8g1zDhMOOBOEmhU2w0,467
695
+ openai/types/chat/chat_completion_content_part_text_param.py,sha256=4IpiXMKM9AuTyop5PRptPBbBhh9s93xy2vjg4Yw6NIw,429
696
+ openai/types/chat/chat_completion_developer_message_param.py,sha256=OCFKdTWkff94VtgY7AaDUUFiZLT8LBn7WWxjbcIq2OM,830
697
+ openai/types/chat/chat_completion_function_call_option_param.py,sha256=M-IqWHyBLkvYBcwFxxp4ydCIxbPDaMlNl4bik9UoFd4,365
698
+ openai/types/chat/chat_completion_function_message_param.py,sha256=jIaZbBHHbt4v4xHCIyvYtYLst_X4jOznRjYNcTf0MF0,591
699
+ openai/types/chat/chat_completion_message.py,sha256=AH7JpjgKfphxBRJyI4PhwHCMREy_-D-a4_4u4NHjSfc,1674
700
+ openai/types/chat/chat_completion_message_param.py,sha256=aLrz_cX_CYymFdW9cMIPZpv0Z4zM50RECV3SH6QNZsc,1019
701
+ openai/types/chat/chat_completion_message_tool_call.py,sha256=XlIe2vhSYvrt8o8Yol5AQqnacI1xHqpEIV26G4oNrZY,900
702
+ openai/types/chat/chat_completion_message_tool_call_param.py,sha256=XNhuUpGr5qwVTo0K8YavJwleHYSdwN_urK51eKlqC24,1009
703
+ openai/types/chat/chat_completion_modality.py,sha256=8Ga0kruwJc43WD2OIqNudn7KrVRTPDQaalVkh_8bp9I,236
704
+ openai/types/chat/chat_completion_named_tool_choice_param.py,sha256=JsxfSJYpOmF7zIreQ0JrXRSLp07OGCBSycRRcF6OZmg,569
705
+ openai/types/chat/chat_completion_prediction_content_param.py,sha256=Xw4K_4F379LsXENOpZvREDn55cCnbmZ69xa4fw9w3bg,868
706
+ openai/types/chat/chat_completion_reasoning_effort.py,sha256=Bs4xRaukXpM-_NW-QSKKnUyIPDw1ffSqnWaHU-rMdIE,258
707
+ openai/types/chat/chat_completion_role.py,sha256=Rdzg4deI1uZmqgkwnMrLHvbV2fPRqKcHLQrVmKVk9Dw,262
708
+ openai/types/chat/chat_completion_stream_options_param.py,sha256=7-R2mYh7dbtX9qDOL3UkeyVH6FNWC_4aTCLtHYObMbs,628
709
+ openai/types/chat/chat_completion_system_message_param.py,sha256=WYtzmsNP8ZI3Ie8cd-oU7RuNoaBF6-bBR3mOzST9hMw,815
710
+ openai/types/chat/chat_completion_token_logprob.py,sha256=6-ipUFfsXMf5L7FDFi127NaVkDtmEooVgGBF6Ts965A,1769
711
+ openai/types/chat/chat_completion_tool_choice_option_param.py,sha256=ef71WSM9HMQhIQUocRgVJUVW-bSRwK2_1NjFSB5TPiI,472
712
+ openai/types/chat/chat_completion_tool_message_param.py,sha256=5K7jfKpwTuKNi1PTFabq_LHH-7wun8CUsLDh90U8zQE,730
713
+ openai/types/chat/chat_completion_tool_param.py,sha256=J9r2TAWygkIBDInWEKx29gBE0wiCgc7HpXFyQhxSkAU,503
714
+ openai/types/chat/chat_completion_user_message_param.py,sha256=mik-MRkwb543C5FSJ52LtTkeA2E_HdLUgtoHEdO73XQ,792
715
+ openai/types/chat/completion_create_params.py,sha256=CGwTjckVhpxaQfA9zRKmrMCHvnYk-eaPFVmSVoA5Nls,13926
716
+ openai/types/chat/parsed_chat_completion.py,sha256=KwcwCtj0yexl6gB7yuOnyETRW-uUvNRYbVzPMkwCe5Q,1437
717
+ openai/types/chat/parsed_function_tool_call.py,sha256=hJzcKOpzf1tnXC6RGbPhaeCawq8EFdnLK_MfRITkW1U,920
718
+ openai/types/chat_model.py,sha256=k9Ic_l5usRyY6xSHnqe4dBMKM5R4klTGuANg6z88WFk,1107
719
+ openai/types/completion.py,sha256=yuYVEVkJcMVUINNLglkxOJqCx097HKCYFeJun3Js73A,1172
720
+ openai/types/completion_choice.py,sha256=PUk77T3Cp34UJSXoMfSzTKGWDK0rQQwq84X_PSlOUJo,965
721
+ openai/types/completion_create_params.py,sha256=TWNRWlGAcvirzY3Piy6AeYKyNxG7ktmtwjS27Q4bTi8,7535
722
+ openai/types/completion_usage.py,sha256=uf5n0vzlCkGAU67BBn_h7yhjd_G4OHpQbJnvzz0eO2A,1735
723
+ openai/types/create_embedding_response.py,sha256=lTAu_Pym76kFljDnnDRoDB2GNQSzWmwwlqf5ff7FNPM,798
724
+ openai/types/embedding.py,sha256=2pV6RTSf5UV6E86Xeud5ZwmjQjMS93m_4LrQ0GN3fho,637
725
+ openai/types/embedding_create_params.py,sha256=C9Tm1C_m96QtjyNc8fiy6wzs9HkM2GUF8CSTSS6V7ks,1850
726
+ openai/types/embedding_model.py,sha256=0dDL87len4vZ4DR6eCp7JZJCJpgwWphRmJhMK3Se8f4,281
727
+ openai/types/file_content.py,sha256=qLlM4J8kgu1BfrtlmYftPsQVCJu4VqYeiS1T28u8EQ8,184
728
+ openai/types/file_create_params.py,sha256=N1I3rER1se27usx46fhkvdtn-blJ6Y9ECT7Wwzve37Q,913
729
+ openai/types/file_deleted.py,sha256=H_r9U7XthT5xHAo_4ay1EGGkc21eURt8MkkIBRYiQcw,277
730
+ openai/types/file_list_params.py,sha256=TmmqvM7droAJ49YlgpeFzrhPv5uVkSZDxqlG6hhumPo,960
731
+ openai/types/file_object.py,sha256=ESuRYCTLbDtHxyuhzybKTF_TztIcq_F7TzCTQ6JToE0,1309
732
+ openai/types/file_purpose.py,sha256=o1TzR-41XsNsQ0791GTGPe3DLkU9FEODucKdP6Q6sPc,243
733
+ openai/types/fine_tuning/__init__.py,sha256=SZvjq_22oY9E4zcnrvVd0ul9U4sk_IBeOd0MsNALu5s,806
734
+ openai/types/fine_tuning/__pycache__/__init__.cpython-310.pyc,,
735
+ openai/types/fine_tuning/__pycache__/fine_tuning_job.cpython-310.pyc,,
736
+ openai/types/fine_tuning/__pycache__/fine_tuning_job_event.cpython-310.pyc,,
737
+ openai/types/fine_tuning/__pycache__/fine_tuning_job_integration.cpython-310.pyc,,
738
+ openai/types/fine_tuning/__pycache__/fine_tuning_job_wandb_integration.cpython-310.pyc,,
739
+ openai/types/fine_tuning/__pycache__/fine_tuning_job_wandb_integration_object.cpython-310.pyc,,
740
+ openai/types/fine_tuning/__pycache__/job_create_params.cpython-310.pyc,,
741
+ openai/types/fine_tuning/__pycache__/job_list_events_params.cpython-310.pyc,,
742
+ openai/types/fine_tuning/__pycache__/job_list_params.cpython-310.pyc,,
743
+ openai/types/fine_tuning/fine_tuning_job.py,sha256=bu-afb1RZqgNmpUQ7MoXymTjFs3i5JSsBLMV4TKHhi8,6473
744
+ openai/types/fine_tuning/fine_tuning_job_event.py,sha256=POxSD7-WxAtJV2KuEpA9EmZi7W_u0PikOUtUzxIXii4,854
745
+ openai/types/fine_tuning/fine_tuning_job_integration.py,sha256=c3Uy7RMVJ32Xlat-6s9eG-5vZLl4w66COXc0B3pWk4g,242
746
+ openai/types/fine_tuning/fine_tuning_job_wandb_integration.py,sha256=YnBeiz14UuhUSpnD0KBj5V143qLvJbDIMcUVWOCBLXY,1026
747
+ openai/types/fine_tuning/fine_tuning_job_wandb_integration_object.py,sha256=7vEc2uEV2c_DENBjhq0Qy5X8B-rzxsKvGECjnvF1Wdw,804
748
+ openai/types/fine_tuning/job_create_params.py,sha256=TwQlyQrZfxrgqD7nmJDWE8pwklsdUUmkYaitvB7LY34,7222
749
+ openai/types/fine_tuning/job_list_events_params.py,sha256=4xOED4H2ky2mI9sIDytjmfJz5bNAdNWb70WIb_0bBWs,400
750
+ openai/types/fine_tuning/job_list_params.py,sha256=yjxaEnESVTRpJ9ItvjKq30KcD_xz_trqKMIxG2eAriE,396
751
+ openai/types/fine_tuning/jobs/__init__.py,sha256=nuWhOUsmsoVKTKMU35kknmr8sfpTF-kkIzyuOlRbJj0,295
752
+ openai/types/fine_tuning/jobs/__pycache__/__init__.cpython-310.pyc,,
753
+ openai/types/fine_tuning/jobs/__pycache__/checkpoint_list_params.cpython-310.pyc,,
754
+ openai/types/fine_tuning/jobs/__pycache__/fine_tuning_job_checkpoint.cpython-310.pyc,,
755
+ openai/types/fine_tuning/jobs/checkpoint_list_params.py,sha256=XoDLkkKCWmf5an5rnoVEpNK8mtQHq1fHw9EqmezfrXM,415
756
+ openai/types/fine_tuning/jobs/fine_tuning_job_checkpoint.py,sha256=Z_sUhebJY9nWSssZU7QoOJwe5sez76sCAuVeSO63XhY,1347
757
+ openai/types/image.py,sha256=9No-8GHesOUbjchemY1jqtMwh_s22oBmLVFlLn2KoQo,607
758
+ openai/types/image_create_variation_params.py,sha256=PvvPvHXvz0etrRrzVIyvRjvDvNbjGspPu85hOq2fLII,1477
759
+ openai/types/image_edit_params.py,sha256=cxpBybs5peY0DJMTWHgoIx3dWIXj0Y0YmvgxrjGmWjo,1837
760
+ openai/types/image_generate_params.py,sha256=bD2AEIetbt37YDp65vEFfGxkLndOFCwhzJol1I63wfA,2132
761
+ openai/types/image_model.py,sha256=W4YchkhJT2wZdlNDUpVkEKg8zdDDfp9S3oTf4D8Wr8g,219
762
+ openai/types/images_response.py,sha256=EJ4qxYZ8CPGh2SZdRsyw6I0FnUvlgwxwc4NgPovJrvk,274
763
+ openai/types/model.py,sha256=DMw8KwQx8B6S6sAI038D0xdzkmYdY5-r0oMhCUG4l6w,532
764
+ openai/types/model_deleted.py,sha256=tXZybg03DunoOSYvwhT7zKj7KTN42R0VEs_-3PRliMo,229
765
+ openai/types/moderation.py,sha256=6CZmxhZiafnT50gKa7BeybrTSoYfCAk7wvD5CQHvBP0,6789
766
+ openai/types/moderation_create_params.py,sha256=EaZ2cej25g5WbRB2kIY7JFCXQPKSQQ95iyoUAAelGr4,992
767
+ openai/types/moderation_create_response.py,sha256=e6SVfWX2_JX25Za0C6KojcnbMTtDB2A7cjUm6cFMKcs,484
768
+ openai/types/moderation_image_url_input_param.py,sha256=t1r9WD3c-CK2Al1lpB4-DjfzLFSwgETR0g8nsRdoL0Y,622
769
+ openai/types/moderation_model.py,sha256=BFeqSyel2My2WKC6MCa_mAIHJx4uXU3-p8UNudJANeM,319
770
+ openai/types/moderation_multi_modal_input_param.py,sha256=RFdiEPsakWIscutX896ir5_rnEA2TLX5xQkjO5QR2vs,483
771
+ openai/types/moderation_text_input_param.py,sha256=ardCbBcdaULf8bkFuzkSKukV9enrINSjNWvb7m0LjZg,406
772
+ openai/types/shared/__init__.py,sha256=34RJ2IUXj0f3B73a6rqeHILu8AH5-sC8npTbEx_bnk8,551
773
+ openai/types/shared/__pycache__/__init__.cpython-310.pyc,,
774
+ openai/types/shared/__pycache__/error_object.cpython-310.pyc,,
775
+ openai/types/shared/__pycache__/function_definition.cpython-310.pyc,,
776
+ openai/types/shared/__pycache__/function_parameters.cpython-310.pyc,,
777
+ openai/types/shared/__pycache__/response_format_json_object.cpython-310.pyc,,
778
+ openai/types/shared/__pycache__/response_format_json_schema.cpython-310.pyc,,
779
+ openai/types/shared/__pycache__/response_format_text.cpython-310.pyc,,
780
+ openai/types/shared/error_object.py,sha256=G7SGPZ9Qw3gewTKbi3fK69eM6L2Ur0C2D57N8iEapJA,305
781
+ openai/types/shared/function_definition.py,sha256=8a5uHoIKrkrwTgfwTyE9ly4PgsZ3iLA_yRUAjubTb7Y,1447
782
+ openai/types/shared/function_parameters.py,sha256=Dkc_pm98zCKyouQmYrl934cK8ZWX7heY_IIyunW8x7c,236
783
+ openai/types/shared/response_format_json_object.py,sha256=15KTCXJ0o1W4c5V1vAcOQAx-u0eoIfAjxrHLoN3NuE4,344
784
+ openai/types/shared/response_format_json_schema.py,sha256=rZS7diOPeqK48O_R6OYMJ6AtSGy_88PKTxzha6_56Fo,1399
785
+ openai/types/shared/response_format_text.py,sha256=GX0u_40OLmDdSyawDrUcUk4jcrz1qWsKmmAMP4AD7hc,318
786
+ openai/types/shared_params/__init__.py,sha256=GcNBmK_EPlGE-xPFmSQjlOq7SuNYd2nwDswX4ExHwoU,498
787
+ openai/types/shared_params/__pycache__/__init__.cpython-310.pyc,,
788
+ openai/types/shared_params/__pycache__/function_definition.cpython-310.pyc,,
789
+ openai/types/shared_params/__pycache__/function_parameters.cpython-310.pyc,,
790
+ openai/types/shared_params/__pycache__/response_format_json_object.cpython-310.pyc,,
791
+ openai/types/shared_params/__pycache__/response_format_json_schema.cpython-310.pyc,,
792
+ openai/types/shared_params/__pycache__/response_format_text.cpython-310.pyc,,
793
+ openai/types/shared_params/function_definition.py,sha256=ciMXqn1tFXnp1tg9weJW0uvtyvMLrnph3WXMg4IG1Vk,1482
794
+ openai/types/shared_params/function_parameters.py,sha256=UvxKz_3b9b5ECwWr8RFrIH511htbU2JZsp9Z9BMkF-o,272
795
+ openai/types/shared_params/response_format_json_object.py,sha256=QT4uJCK7RzN3HK17eGjEo36jLKOIBBNGjiX-zIa9iT4,390
796
+ openai/types/shared_params/response_format_json_schema.py,sha256=Uu2ioeSbI64bm-jJ61OY8Lr3PpofTR4d2LNBcaYxlec,1360
797
+ openai/types/shared_params/response_format_text.py,sha256=SjHeZAfgM1-HXAoKLrkiH-VZEnQ73XPTk_RgtJmEbU4,364
798
+ openai/types/upload.py,sha256=mEeQTGS0uqFkxbDpJzgBUvuDhGVPw9cQxhRJjPBVeLo,1186
799
+ openai/types/upload_complete_params.py,sha256=7On-iVAlA9p_nksLSFPBPR4QbB0xEtAW-skyh7S9gR0,504
800
+ openai/types/upload_create_params.py,sha256=ZiZr1yC6g2VqL7KEnw7lhE4kZvU-F3DfTAc2TPk-XBo,889
801
+ openai/types/uploads/__init__.py,sha256=fDsmd3L0nIWbFldbViOLvcQavsFA4SL3jsXDfAueAck,242
802
+ openai/types/uploads/__pycache__/__init__.cpython-310.pyc,,
803
+ openai/types/uploads/__pycache__/part_create_params.cpython-310.pyc,,
804
+ openai/types/uploads/__pycache__/upload_part.cpython-310.pyc,,
805
+ openai/types/uploads/part_create_params.py,sha256=pBByUzngaj70ov1knoSo_gpeBjaWP9D5EdiHwiG4G7U,362
806
+ openai/types/uploads/upload_part.py,sha256=U9953cr9lJJLWEfhTiwHphRzLKARq3gWAWqrjxbhTR4,590
807
+ openai/types/websocket_connection_options.py,sha256=4cAWpv1KKp_9pvnez7pGYzO3s8zh1WvX2xpBhpe-96k,1840
808
+ openai/version.py,sha256=cjbXKO8Ut3aiv4YlQnugff7AdC48MpSndcx96q88Yb8,62
evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/REQUESTED ADDED
File without changes
evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [console_scripts]
+ openai = openai.cli:main
evalkit_eagle/lib/python3.10/site-packages/openai-1.59.7.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2025 OpenAI
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
evalkit_eagle/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29dd755c12af2aebb0e796b4bdd3878ddecf511ecd096f2a0e534c6ad6860f2c
+ size 2549016
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/__init__.py ADDED
@@ -0,0 +1,66 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+
+ # Copyright 2017--2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License"). You may not
+ # use this file except in compliance with the License. A copy of the License
+ # is located at
+ #
+ # http://aws.amazon.com/apache2.0/
+ #
+ # or in the "license" file accompanying this file. This file is distributed on
+ # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+ # express or implied. See the License for the specific language governing
+ # permissions and limitations under the License.
+
+ __description__ = "Hassle-free computation of shareable, comparable, and reproducible BLEU, chrF, and TER scores"
+
+
+ # Backward compatibility functions for old style API access (<= 1.4.10)
+ from .compat import (
+ corpus_bleu,
+ corpus_chrf,
+ corpus_ter,
+ raw_corpus_bleu,
+ sentence_bleu,
+ sentence_chrf,
+ sentence_ter,
+ )
+ from .dataset import DATASETS
+ from .metrics import BLEU, CHRF, TER
+ from .metrics.helpers import extract_char_ngrams, extract_word_ngrams
+ from .utils import (
+ SACREBLEU_DIR,
+ download_test_set,
+ get_available_testsets,
+ get_langpairs_for_testset,
+ get_reference_files,
+ get_source_file,
+ smart_open,
+ )
+ from .version import __version__
+
+ __all__ = [
+ "smart_open",
+ "SACREBLEU_DIR",
+ "download_test_set",
+ "get_source_file",
+ "get_reference_files",
+ "get_available_testsets",
+ "get_langpairs_for_testset",
+ "extract_word_ngrams",
+ "extract_char_ngrams",
+ "DATASETS",
+ "BLEU",
+ "CHRF",
+ "TER",
+ "corpus_bleu",
+ "raw_corpus_bleu",
+ "sentence_bleu",
+ "corpus_chrf",
+ "sentence_chrf",
+ "corpus_ter",
+ "sentence_ter",
+ "__version__",
+ ]
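The `sacrebleu` package above re-exports n-gram helpers such as `extract_word_ngrams` alongside the metric classes. As a rough, self-contained sketch of what such a helper computes (this is an illustration written for this note, not the package's actual implementation, whose signature and return type may differ):

```python
from collections import Counter

def extract_word_ngrams(line, min_order=1, max_order=4):
    # Count every word n-gram of orders min_order..max_order in a sentence.
    # BLEU-style metrics compare such counts between hypothesis and reference.
    tokens = line.split()
    ngrams = Counter()
    for n in range(min_order, max_order + 1):
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    return ngrams

counts = extract_word_ngrams("the cat sat on the mat")
print(counts[("the",)])        # the unigram "the" occurs twice
print(counts[("the", "cat")])  # the bigram occurs once
```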
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/__main__.py ADDED
@@ -0,0 +1,27 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+
+ # Copyright 2017--2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License"). You may not
+ # use this file except in compliance with the License. A copy of the License
+ # is located at
+ #
+ # http://aws.amazon.com/apache2.0/
+ #
+ # or in the "license" file accompanying this file. This file is distributed on
+ # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+ # express or implied. See the License for the specific language governing
+ # permissions and limitations under the License.
+
+ """
+ SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores.
+ Inspired by Rico Sennrich's `multi-bleu-detok.perl`, it produces the official WMT scores but works with plain text.
+ It also knows all the standard test sets and handles downloading, processing, and tokenization for you.
+
+ See the [README.md] file for more information.
+ """
+ from .sacrebleu import main
+
+ if __name__ == '__main__':
+ main()
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/compat.py ADDED
@@ -0,0 +1,205 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ from typing import Sequence, Optional
+
+ from .metrics import BLEU, CHRF, TER, BLEUScore, CHRFScore, TERScore
+
+
+ ######################################################################
+ # Backward compatibility functions for old style API access (< 1.4.11)
+ ######################################################################
+ def corpus_bleu(hypotheses: Sequence[str],
+                 references: Sequence[Sequence[str]],
+                 smooth_method='exp',
+                 smooth_value=None,
+                 force=False,
+                 lowercase=False,
+                 tokenize=BLEU.TOKENIZER_DEFAULT,
+                 use_effective_order=False) -> BLEUScore:
+     """Computes BLEU for a corpus against a single (or multiple) reference(s).
+
+     :param hypotheses: A sequence of hypothesis strings.
+     :param references: A sequence of reference documents, each document being
+         a sequence of reference strings.
+     :param smooth_method: The smoothing method to use ('floor', 'add-k', 'exp' or 'none').
+     :param smooth_value: The smoothing value for the `floor` and `add-k` methods. `None` falls back to the default value.
+     :param force: Ignore data that looks already tokenized.
+     :param lowercase: Lowercase the data.
+     :param tokenize: The tokenizer to use.
+     :param use_effective_order: Don't take into account n-gram orders without any match.
+     :return: A `BLEUScore` object.
+     """
+     metric = BLEU(
+         lowercase=lowercase, force=force, tokenize=tokenize,
+         smooth_method=smooth_method, smooth_value=smooth_value,
+         effective_order=use_effective_order)
+
+     return metric.corpus_score(hypotheses, references)
+
+
+ def raw_corpus_bleu(hypotheses: Sequence[str],
+                     references: Sequence[Sequence[str]],
+                     smooth_value: Optional[float] = BLEU.SMOOTH_DEFAULTS['floor']) -> BLEUScore:
+     """Computes BLEU for a corpus against a single (or multiple) reference(s).
+
+     This convenience function applies no tokenization at all, neither to the
+     system output nor to the references: it computes BLEU on the "raw corpus"
+     (hence the name), using `floor` smoothing and effective n-gram order.
+
+     :param hypotheses: A sequence of hypothesis strings.
+     :param references: A sequence of reference documents, each document being
+         a sequence of reference strings.
+     :param smooth_value: The smoothing value for `floor`. If not given, the default of 0.1 is used.
+     :return: A `BLEUScore` object.
+     """
+     return corpus_bleu(
+         hypotheses, references, smooth_method='floor',
+         smooth_value=smooth_value, force=True, tokenize='none',
+         use_effective_order=True)
+
+
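As a rough illustration of what `corpus_bleu` computes, here is a toy sketch of corpus-level BLEU: clipped n-gram precisions combined with a brevity penalty. This is not the library's implementation (`ngrams` and `toy_corpus_bleu` are made-up names, it assumes one reference per segment, and the real metric additionally handles tokenization, multiple references, and several smoothing schemes):

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Return the n-grams of a token list as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def toy_corpus_bleu(hypotheses, references, max_order=4):
    """Corpus-level BLEU: clipped n-gram precisions plus a brevity penalty."""
    matches = [0] * max_order
    totals = [0] * max_order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_order + 1):
            # Clipped counts: a hypothesis n-gram matches at most as many
            # times as it occurs in the reference
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    # Geometric mean of the n-gram precisions (a 0.1 floor avoids log(0) here)
    log_prec = sum(math.log(max(m, 0.1) / t) for m, t in zip(matches, totals) if t > 0)
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec / max_order)


print(toy_corpus_bleu(['the cat sat on the mat'], ['the cat sat on the mat']))  # 100.0
```

Note that the statistics are pooled over the whole corpus before combining, which is why BLEU is a corpus-level metric rather than an average of sentence scores.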
+ def sentence_bleu(hypothesis: str,
+                   references: Sequence[str],
+                   smooth_method: str = 'exp',
+                   smooth_value: Optional[float] = None,
+                   lowercase: bool = False,
+                   tokenize=BLEU.TOKENIZER_DEFAULT,
+                   use_effective_order: bool = True) -> BLEUScore:
+     """
+     Computes BLEU for a single sentence against a single (or multiple) reference(s).
+
+     Disclaimer: Computing BLEU at the sentence level is not its intended use as
+     BLEU is a corpus-level metric.
+
+     :param hypothesis: A single hypothesis string.
+     :param references: A sequence of reference strings.
+     :param smooth_method: The smoothing method to use ('floor', 'add-k', 'exp' or 'none').
+     :param smooth_value: The smoothing value for the `floor` and `add-k` methods. `None` falls back to the default value.
+     :param lowercase: Lowercase the data.
+     :param tokenize: The tokenizer to use.
+     :param use_effective_order: Don't take into account n-gram orders without any match.
+     :return: A `BLEUScore` object.
+     """
+     metric = BLEU(
+         lowercase=lowercase, tokenize=tokenize, force=False,
+         smooth_method=smooth_method, smooth_value=smooth_value,
+         effective_order=use_effective_order)
+
+     return metric.sentence_score(hypothesis, references)
+
+
+ def corpus_chrf(hypotheses: Sequence[str],
+                 references: Sequence[Sequence[str]],
+                 char_order: int = CHRF.CHAR_ORDER,
+                 word_order: int = CHRF.WORD_ORDER,
+                 beta: int = CHRF.BETA,
+                 remove_whitespace: bool = True,
+                 eps_smoothing: bool = False) -> CHRFScore:
+     """
+     Computes chrF for a corpus against a single (or multiple) reference(s).
+     If `word_order` is 2, the metric is referred to as chrF++.
+
+     :param hypotheses: A sequence of hypothesis strings.
+     :param references: A sequence of reference documents, each document being
+         a sequence of reference strings.
+     :param char_order: Character n-gram order.
+     :param word_order: Word n-gram order. If equal to 2, the metric is referred to as chrF++.
+     :param beta: Determines the importance of recall w.r.t. precision.
+     :param eps_smoothing: If `True`, applies epsilon smoothing similar
+         to the reference chrF++.py, NLTK and Moses implementations. Otherwise,
+         it takes into account effective match order similar to sacreBLEU < 2.0.0.
+     :param remove_whitespace: If `True`, removes whitespace prior to character n-gram extraction.
+     :return: A `CHRFScore` object.
+     """
+     metric = CHRF(
+         char_order=char_order,
+         word_order=word_order,
+         beta=beta,
+         whitespace=not remove_whitespace,
+         eps_smoothing=eps_smoothing)
+     return metric.corpus_score(hypotheses, references)
+
+
+ def sentence_chrf(hypothesis: str,
+                   references: Sequence[str],
+                   char_order: int = CHRF.CHAR_ORDER,
+                   word_order: int = CHRF.WORD_ORDER,
+                   beta: int = CHRF.BETA,
+                   remove_whitespace: bool = True,
+                   eps_smoothing: bool = False) -> CHRFScore:
+     """
+     Computes chrF for a single sentence against a single (or multiple) reference(s).
+     If `word_order` is 2, the metric is referred to as chrF++.
+
+     :param hypothesis: A single hypothesis string.
+     :param references: A sequence of reference strings.
+     :param char_order: Character n-gram order.
+     :param word_order: Word n-gram order. If equal to 2, the metric is referred to as chrF++.
+     :param beta: Determines the importance of recall w.r.t. precision.
+     :param eps_smoothing: If `True`, applies epsilon smoothing similar
+         to the reference chrF++.py, NLTK and Moses implementations. Otherwise,
+         it takes into account effective match order similar to sacreBLEU < 2.0.0.
+     :param remove_whitespace: If `True`, removes whitespace prior to character n-gram extraction.
+     :return: A `CHRFScore` object.
+     """
+     metric = CHRF(
+         char_order=char_order,
+         word_order=word_order,
+         beta=beta,
+         whitespace=not remove_whitespace,
+         eps_smoothing=eps_smoothing)
+     return metric.sentence_score(hypothesis, references)
+
+
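The chrF functions above score character n-gram overlap rather than word overlap. A toy sketch of the idea — not the library's implementation (`toy_chrf` is a made-up name; it handles a single reference and omits the word-order component of chrF++ and the epsilon smoothing variant) — is an F-beta score averaged over the character n-gram orders, where `beta=2` weights recall twice as much as precision:

```python
from collections import Counter


def char_ngrams(text, n):
    """Character n-grams of `text` with whitespace removed, as a Counter."""
    s = text.replace(' ', '')
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))


def toy_chrf(hypothesis, reference, char_order=6, beta=2):
    """Average F_beta of character n-gram overlaps for n = 1..char_order."""
    f_scores = []
    for n in range(1, char_order + 1):
        h, r = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        h_total, r_total = sum(h.values()), sum(r.values())
        if h_total == 0 or r_total == 0:
            continue  # skip empty n-gram orders (effective-order behaviour)
        overlap = sum((h & r).values())
        prec, rec = overlap / h_total, overlap / r_total
        if prec + rec == 0:
            f_scores.append(0.0)
        else:
            # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
            f_scores.append((1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec))
    return 100 * sum(f_scores) / len(f_scores)


print(toy_chrf('the cat', 'the cat'))  # 100.0
```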
+ def corpus_ter(hypotheses: Sequence[str],
+                references: Sequence[Sequence[str]],
+                normalized: bool = False,
+                no_punct: bool = False,
+                asian_support: bool = False,
+                case_sensitive: bool = False) -> TERScore:
+     """
+     Computes TER for a corpus against a single (or multiple) reference(s).
+
+     :param hypotheses: A sequence of hypothesis strings.
+     :param references: A sequence of reference documents, each document being
+         a sequence of reference strings.
+     :param normalized: Enable character normalization.
+     :param no_punct: Remove punctuation.
+     :param asian_support: Enable special treatment of Asian characters.
+     :param case_sensitive: Enable case sensitivity.
+     :return: A `TERScore` object.
+     """
+     metric = TER(
+         normalized=normalized,
+         no_punct=no_punct,
+         asian_support=asian_support,
+         case_sensitive=case_sensitive)
+     return metric.corpus_score(hypotheses, references)
+
+
+ def sentence_ter(hypothesis: str,
+                  references: Sequence[str],
+                  normalized: bool = False,
+                  no_punct: bool = False,
+                  asian_support: bool = False,
+                  case_sensitive: bool = False) -> TERScore:
+     """
+     Computes TER for a single hypothesis against a single (or multiple) reference(s).
+
+     :param hypothesis: A single hypothesis string.
+     :param references: A sequence of reference strings.
+     :param normalized: Enable character normalization.
+     :param no_punct: Remove punctuation.
+     :param asian_support: Enable special treatment of Asian characters.
+     :param case_sensitive: Enable case sensitivity.
+     :return: A `TERScore` object.
+     """
+     metric = TER(
+         normalized=normalized,
+         no_punct=no_punct,
+         asian_support=asian_support,
+         case_sensitive=case_sensitive)
+     return metric.sentence_score(hypothesis, references)
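TER is an edit-rate metric: the number of word-level edits needed to turn the hypothesis into the reference, divided by the reference length. The sketch below illustrates the idea with a plain Levenshtein distance (`word_edit_distance` and `toy_ter` are made-up names; real TER additionally allows block shifts of word sequences, which this toy version does not):

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    h, r = hyp.split(), ref.split()
    prev = list(range(len(r) + 1))
    for i, hw in enumerate(h, 1):
        cur = [i]
        for j, rw in enumerate(r, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (hw != rw)))   # substitution
        prev = cur
    return prev[-1]


def toy_ter(hypothesis, reference):
    """TER-like score: edits per reference word, as a percentage (lower is better)."""
    return 100 * word_edit_distance(hypothesis, reference) / len(reference.split())


print(round(toy_ter('the black cat', 'the white cat'), 1))  # 33.3
```

Unlike BLEU and chrF, lower TER is better, with 0 meaning the hypothesis already matches the reference.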
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/sacrebleu.py ADDED
@@ -0,0 +1,576 @@
+ #!/usr/bin/env python3
+
+ # Copyright 2017--2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License"). You may not
+ # use this file except in compliance with the License. A copy of the License
+ # is located at
+ #
+ #     http://aws.amazon.com/apache2.0/
+ #
+ # or in the "license" file accompanying this file. This file is distributed on
+ # an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+ # express or implied. See the License for the specific language governing
+ # permissions and limitations under the License.
+
+ """
+ SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores.
+ Inspired by Rico Sennrich's `multi-bleu-detok.perl`, it produces the official WMT scores but works with plain text.
+ It also knows all the standard test sets and handles downloading, processing, and tokenization for you.
+
+ See the [README.md] file for more information.
+ """
+
+ import io
+ import os
+ import sys
+ import logging
+ import pathlib
+ import argparse
+ from collections import defaultdict
+
+
+ # Allows calling the script as a standalone utility
+ # See: https://github.com/mjpost/sacrebleu/issues/86
+ if __package__ is None and __name__ == '__main__':
+     parent = pathlib.Path(__file__).absolute().parents[1]
+     sys.path.insert(0, str(parent))
+     __package__ = 'sacrebleu'
+
+ from .dataset import DATASETS
+ from .metrics import METRICS
+ from .utils import smart_open, filter_subset, get_langpairs_for_testset, get_available_testsets
+ from .utils import print_test_set, print_subset_results, get_reference_files, download_test_set
+ from .utils import args_to_dict, sanity_check_lengths, print_results_table, print_single_results
+ from .utils import get_available_testsets_for_langpair, Color
+
+ from . import __version__ as VERSION
+
+ sacrelogger = logging.getLogger('sacrebleu')
+
+ try:
+     # SIGPIPE is not available on Windows machines, throwing an exception.
+     from signal import SIGPIPE  # type: ignore
+
+     # If SIGPIPE is available, change behaviour to default instead of ignore.
+     from signal import signal, SIG_DFL
+     signal(SIGPIPE, SIG_DFL)
+ except ImportError:
+     pass
+
+
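The `__package__` shim above makes the file runnable as a standalone script by putting the package's parent directory on `sys.path` via `pathlib`'s `parents` indexing. A quick illustration of that indexing (the path below is made up):

```python
import pathlib

# parents[0] is the containing directory, parents[1] its parent, and so on
p = pathlib.PurePosixPath('/opt/site-packages/sacrebleu/sacrebleu.py')
print(p.parents[0])  # /opt/site-packages/sacrebleu  (the package directory)
print(p.parents[1])  # /opt/site-packages            (what goes on sys.path)
```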
+ def parse_args():
+     arg_parser = argparse.ArgumentParser(
+         description='sacreBLEU: Hassle-free computation of shareable BLEU scores.\n'
+                     'Quick usage: score your detokenized output against WMT\'14 EN-DE:\n'
+                     '    cat output.detok.de | sacrebleu -t wmt14 -l en-de',
+         formatter_class=argparse.RawDescriptionHelpFormatter)
+
+     arg_parser.add_argument('--citation', '--cite', default=False, action='store_true',
+                             help='Dump the bibtex citation and quit.')
+     arg_parser.add_argument('--list', default=False, action='store_true',
+                             help='Print a list of all available test sets.')
+     arg_parser.add_argument('--test-set', '-t', type=str, default=None,
+                             help='The test set to use (see also --list) or a comma-separated list of test sets to be concatenated.')
+     arg_parser.add_argument('--language-pair', '-l', dest='langpair', default=None,
+                             help='Source-target language pair (2-char ISO639-1 codes).')
+     arg_parser.add_argument('--origlang', '-ol', dest='origlang', default=None,
+                             help='Use a subset of sentences with a given original language (2-char ISO639-1 codes), "non-" prefix means negation.')
+     arg_parser.add_argument('--subset', dest='subset', default=None,
+                             help='Use a subset of sentences whose document annotation matches a given regex (see SUBSETS in the source code).')
+     arg_parser.add_argument('--download', type=str, default=None,
+                             help='Download a test set and quit.')
+     arg_parser.add_argument('--echo', nargs="+", type=str, default=None,
+                             help='Output the source (src), reference (ref), or other available field (docid, ref:A, ref:1 for example) to STDOUT and quit. '
+                                  'You can get the available fields with the options `--list` and `-t`, for example: `sacrebleu -t wmt21 --list`. '
+                                  'If multiple fields are given, they are output in TSV format in the order they are given. '
+                                  'You can also use `--echo all` to output all available fields.')
+
+     # I/O related arguments
+     # Multiple input files can be provided for significance testing for example
+     arg_parser.add_argument('--input', '-i', type=str, nargs='*', default=None,
+                             help='Read input from file(s) instead of STDIN.')
+     arg_parser.add_argument('refs', nargs='*', default=[],
+                             help='Optional list of references. If given, it should precede the -i/--input argument.')
+     arg_parser.add_argument('--num-refs', '-nr', type=int, default=1,
+                             help='Split the reference stream on tabs, and expect this many references. (Default: %(default)s)')
+     arg_parser.add_argument('--encoding', '-e', type=str, default='utf-8',
+                             help='Open text files with specified encoding (Default: %(default)s)')
+
+     # Metric selection
+     avail_metrics = [m.lower() for m in METRICS]
+     arg_parser.add_argument('--metrics', '-m', choices=avail_metrics, nargs='+', default=['bleu'],
+                             help='Space-delimited list of metrics to compute (Default: bleu)')
+     arg_parser.add_argument('--sentence-level', '-sl', action='store_true',
+                             help='Compute metric for each sentence.')
+
+     # BLEU-related arguments
+     # Since sacreBLEU initially supported only BLEU, its argument names are not
+     # prefixed with 'bleu' as the chrF arguments are, for example.
+     # Let's do that manually here through dest= options, as otherwise
+     # things will get quite hard to maintain when other metrics are added.
+     bleu_args = arg_parser.add_argument_group('BLEU related arguments')
+
+     bleu_args.add_argument('--smooth-method', '-s', choices=METRICS['BLEU'].SMOOTH_DEFAULTS.keys(), default='exp',
+                            dest='bleu_smooth_method',
+                            help='Smoothing method: exponential decay, floor (increment zero counts), add-k (increment num/denom by k for n>1), or none. (Default: %(default)s)')
+     bleu_args.add_argument('--smooth-value', '-sv', type=float, default=None,
+                            dest='bleu_smooth_value',
+                            help='The smoothing value. Only valid for floor and add-k. '
+                                 f"(Defaults: floor: {METRICS['BLEU'].SMOOTH_DEFAULTS['floor']}, "
+                                 f"add-k: {METRICS['BLEU'].SMOOTH_DEFAULTS['add-k']})")
+     bleu_args.add_argument('--tokenize', '-tok', choices=METRICS['BLEU'].TOKENIZERS, default=None,
+                            dest='bleu_tokenize',
+                            help='Tokenization method to use for BLEU. If not provided, defaults to `zh` for Chinese, '
+                                 '`ja-mecab` for Japanese, `ko-mecab` for Korean and `13a` (mteval) otherwise.')
+     bleu_args.add_argument('--lowercase', '-lc', dest='bleu_lowercase', action='store_true', default=False,
+                            help='If True, enables case-insensitivity. (Default: %(default)s)')
+     bleu_args.add_argument('--force', default=False, action='store_true',
+                            dest='bleu_force', help='Insist that your tokenized input is actually detokenized.')
+
+     # chrF-related arguments
+     chrf_args = arg_parser.add_argument_group('chrF related arguments')
+     chrf_args.add_argument('--chrf-char-order', '-cc', type=int, default=METRICS['CHRF'].CHAR_ORDER,
+                            help='Character n-gram order. (Default: %(default)s)')
+     chrf_args.add_argument('--chrf-word-order', '-cw', type=int, default=METRICS['CHRF'].WORD_ORDER,
+                            help='Word n-gram order (Default: %(default)s). If equal to 2, the metric is referred to as chrF++.')
+     chrf_args.add_argument('--chrf-beta', type=int, default=METRICS['CHRF'].BETA,
+                            help='Determines the importance of recall w.r.t. precision. (Default: %(default)s)')
+     chrf_args.add_argument('--chrf-whitespace', action='store_true', default=False,
+                            help='Include whitespace when extracting character n-grams. (Default: %(default)s)')
+     chrf_args.add_argument('--chrf-lowercase', action='store_true', default=False,
+                            help='Enable case-insensitivity. (Default: %(default)s)')
+     chrf_args.add_argument('--chrf-eps-smoothing', action='store_true', default=False,
+                            help='Enables epsilon smoothing similar to chrF++.py, NLTK and Moses, instead of effective order smoothing. (Default: %(default)s)')
+
+     # TER related arguments
+     ter_args = arg_parser.add_argument_group("TER related arguments (The defaults replicate TERCOM's behavior)")
+     ter_args.add_argument('--ter-case-sensitive', action='store_true',
+                           help='Enables case sensitivity. (Default: %(default)s)')
+     ter_args.add_argument('--ter-asian-support', action='store_true',
+                           help='Enables special treatment of Asian characters. (Default: %(default)s)')
+     ter_args.add_argument('--ter-no-punct', action='store_true',
+                           help='Removes punctuation. (Default: %(default)s)')
+     ter_args.add_argument('--ter-normalized', action='store_true',
+                           help='Applies basic normalization and tokenization. (Default: %(default)s)')
+
+     # Bootstrap resampling for confidence intervals
+     sign_args = arg_parser.add_argument_group('Confidence interval (CI) estimation for single-system evaluation')
+     sign_args.add_argument('--confidence', '-ci', action='store_true',
+                            help='Report confidence interval using bootstrap resampling.')
+     sign_args.add_argument('--confidence-n', '-cin', type=int, default=1000,
+                            help='Set the number of bootstrap resamples for CI estimation (Default: %(default)s).')
+
+     # Paired significance testing
+     pair_args = arg_parser.add_argument_group('Paired significance testing for multi-system evaluation')
+     pair_args_choice = pair_args.add_mutually_exclusive_group()
+
+     pair_args_choice.add_argument('--paired-ar', '-par', action='store_true',
+                                   help='Perform paired test using approximate randomization (AR). This option is '
+                                        'mutually exclusive with --paired-bs (Default: %(default)s).')
+     pair_args_choice.add_argument('--paired-bs', '-pbs', action='store_true',
+                                   help='Perform paired test using bootstrap resampling. This option is '
+                                        'mutually exclusive with --paired-ar (Default: %(default)s).')
+
+     pair_args.add_argument('--paired-ar-n', '-parn', type=int, default=10000,
+                            help='Number of trials for the approximate randomization test (Default: %(default)s).')
+
+     pair_args.add_argument('--paired-bs-n', '-pbsn', type=int, default=1000,
+                            help='Number of bootstrap resamples for the paired bootstrap resampling test (Default: %(default)s).')
+
+     pair_args.add_argument('--paired-jobs', '-j', type=int, default=1,
+                            help='If 0, launches as many workers as the number of systems. If > 0, sets the number of workers manually. '
+                                 'This feature is currently not supported on Windows.')
+
+     # Reporting related arguments
+     report_args = arg_parser.add_argument_group('Reporting related arguments')
+     report_args.add_argument('--quiet', '-q', default=False, action='store_true',
+                              help='Suppress verbose messages.')
+     report_args.add_argument('--short', '-sh', default=False, action='store_true',
+                              help='Produce a shorter (less human readable) signature.')
+     report_args.add_argument('--score-only', '-b', default=False, action='store_true',
+                              help='Print only the computed score.')
+     report_args.add_argument('--width', '-w', type=int, default=1,
+                              help='Floating point width (Default: %(default)s).')
+     report_args.add_argument('--detail', '-d', default=False, action='store_true',
+                              help='Print detailed information (split test sets based on origlang).')
+     report_args.add_argument('--no-color', '-nc', action='store_true',
+                              help='Disable the occasional use of terminal colors.')
+
+     output_formats = ['json', 'text', 'latex']
+     report_args.add_argument('--format', '-f', default='json', choices=output_formats,
+                              help='Set the output format. `latex` is only valid for multi-system mode whereas '
+                                   '`json` and `text` apply to single-system mode only. This flag is overridden if the '
+                                   'SACREBLEU_FORMAT environment variable is set to one of the valid choices (Default: %(default)s).')
+
+     arg_parser.add_argument('--version', '-V', action='version', version='%(prog)s {}'.format(VERSION))
+
+     args = arg_parser.parse_args()
+
+     # Override the format from the environment, if any
+     if 'SACREBLEU_FORMAT' in os.environ:
+         _new_value = os.environ['SACREBLEU_FORMAT'].lower()
+         if _new_value in output_formats:
+             args.format = _new_value
+
+     return args
+
+
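Metric-specific flags above are grouped by a `dest` prefix (`bleu_`, `chrf_`, `ter_`), which is later stripped before the metric object is constructed. A minimal sketch of that prefix-stripping idea (`prefixed_args_to_dict` is an illustrative name, not sacreBLEU's actual `args_to_dict` helper):

```python
import argparse


def prefixed_args_to_dict(args, prefix):
    """Collect `<prefix>_*` attributes of an argparse namespace, stripping the prefix."""
    pre = prefix + '_'
    return {k[len(pre):]: v for k, v in vars(args).items() if k.startswith(pre)}


parser = argparse.ArgumentParser()
# `dest` routes an unprefixed flag into the `bleu_` group
parser.add_argument('--smooth-method', dest='bleu_smooth_method', default='exp')
# dashes become underscores automatically: dest is 'chrf_beta'
parser.add_argument('--chrf-beta', type=int, default=2)
args = parser.parse_args(['--smooth-method', 'floor'])

print(prefixed_args_to_dict(args, 'bleu'))  # {'smooth_method': 'floor'}
print(prefixed_args_to_dict(args, 'chrf'))  # {'beta': 2}
```

The resulting dictionaries can then be splatted directly into a metric constructor, e.g. `BLEU(**bleu_kwargs)`.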
+ def main():
+     args = parse_args()
+
+     # Is a paired test requested?
+     paired_test_mode = args.paired_bs or args.paired_ar
+
+     # Explicitly set the encoding
+     sys.stdin = open(sys.stdin.fileno(), mode='r', encoding='utf-8', buffering=True, newline="\n")
+     sys.stdout = open(sys.stdout.fileno(), mode='w', encoding='utf-8', buffering=True)
+
+     if os.environ.get('NO_COLOR', False) or args.no_color:
+         Color.ENABLE_COLORS = False
+     else:
+         # These should come after all stdout manipulations, otherwise they
+         # cause issues esp. on Windows
+         import colorama
+         colorama.init()
+
+     if not args.quiet:
+         logging.basicConfig(level=logging.INFO, format='sacreBLEU: %(message)s')
+
+     if args.download:
+         download_test_set(args.download, args.langpair)
+         sys.exit(0)
+
+     if args.list:
+         if args.test_set:
+             for pair in [args.langpair] if args.langpair else get_langpairs_for_testset(args.test_set):
+                 fields = DATASETS[args.test_set].fieldnames(pair)
+                 print(f'{pair}: {", ".join(fields)}')
+         else:
+             if args.langpair:
+                 print(f'The available test sets for {args.langpair} are:')
+                 testsets = get_available_testsets_for_langpair(args.langpair)
+             else:
+                 print('The available test sets are:')
+                 testsets = get_available_testsets()
+             for testset in sorted(testsets):
+                 desc = DATASETS[testset].description.strip()
+                 print(f'{testset:<30}: {desc}')
+         sys.exit(0)
+
+     if args.sentence_level and len(args.metrics) > 1:
+         sacrelogger.error('Only one metric can be used in sentence-level mode.')
+         sys.exit(1)
+
+     if args.citation:
+         if not args.test_set:
+             sacrelogger.error('I need a test set (-t).')
+             sys.exit(1)
+         for test_set in args.test_set.split(','):
+             if 'citation' not in DATASETS[test_set]:
+                 sacrelogger.error(f'No citation found for {test_set}')
+             else:
+                 print(DATASETS[test_set].citation)
+         sys.exit(0)
+
+     if args.num_refs != 1 and (args.test_set is not None or len(args.refs) > 1):
+         sacrelogger.error('The --num-refs argument allows you to provide any number of tab-delimited references in a single file.')
+         sacrelogger.error('You can only use it with externally provided references, however (i.e., not with `-t`),')
+         sacrelogger.error('and you cannot then provide multiple reference files.')
+         sys.exit(1)
+
+     if args.test_set is not None:
+         for test_set in args.test_set.split(','):
+             if test_set not in DATASETS:
+                 sacrelogger.error(f'Unknown test set {test_set!r}')
+                 sacrelogger.error('Please run with --list to see the available test sets.')
+                 sys.exit(1)
+
+     if args.test_set is None:
+         if len(args.refs) == 0:
+             sacrelogger.error('If manual references are given, make sure to provide them '
+                               'before the -i/--input argument to avoid confusion.')
+             sacrelogger.error('Otherwise, I need a predefined test set (-t) from the following list:')
+             sacrelogger.error(get_available_testsets())
+             sys.exit(1)
+     elif len(args.refs) > 0:
+         sacrelogger.error('I need exactly one of (a) a predefined test set (-t) or (b) a list of references')
+         sys.exit(1)
+     elif args.langpair is None:
+         sacrelogger.error('I need a language pair (-l). Use --list to see available language pairs for this test set.')
+         sys.exit(1)
+     else:
+         for test_set in args.test_set.split(','):
+             langpairs = get_langpairs_for_testset(test_set)
+             if args.langpair not in langpairs:
+                 sacrelogger.error(f'No such language pair {args.langpair!r}')
+                 sacrelogger.error(f'Available language pairs for {test_set!r} are:')
+                 for lp in langpairs:
+                     sacrelogger.error(f' > {lp}')
+                 sys.exit(1)
+
+     if args.echo:
+         if args.langpair is None or args.test_set is None:
+             sacrelogger.warning('--echo requires a test set (-t) and a language pair (-l)')
+             sys.exit(1)
+         for test_set in args.test_set.split(','):
+             print_test_set(test_set, args.langpair, args.echo, args.origlang, args.subset)
+         sys.exit(0)
+
+     # Hack: inject target language info for BLEU, so that it can
+     # select the tokenizer based on it
+     if args.langpair:
+         args.bleu_trg_lang = args.langpair.split('-')[1]
+
+     if args.test_set is not None and args.bleu_tokenize == 'none':
+         sacrelogger.warning(
+             "You are turning off BLEU's internal tokenizer, "
+             "presumably to supply your own tokenized files.")
+         sacrelogger.warning(
+             "Published numbers will not be comparable to other papers.")
+
+     # concat_ref_files is a list of lists of reference filenames
+     # (concatenation happens if multiple test sets are given through -t)
+     # Example: [[testset1_refA, testset1_refB], [testset2_refA, testset2_refB]]
+     concat_ref_files = []
+     if args.test_set is None:
+         concat_ref_files.append(args.refs)
+     else:
+         # Multiple test sets can be given
+         for test_set in args.test_set.split(','):
+             ref_files = get_reference_files(test_set, args.langpair)
+             if len(ref_files) == 0:
+                 sacrelogger.warning(
+                     f'No references found for test set {test_set}/{args.langpair}.')
+             concat_ref_files.append(ref_files)
+
+     #################
+     # Read references
+     #################
+     full_refs = [[] for _ in range(max(len(concat_ref_files[0]), args.num_refs))]
+     for ref_files in concat_ref_files:
+         for refno, ref_file in enumerate(ref_files):
+             for lineno, line in enumerate(smart_open(ref_file, encoding=args.encoding), 1):
+                 line = line.rstrip()
+                 if args.num_refs == 1:
+                     full_refs[refno].append(line)
+                 else:
+                     refs = line.split(sep='\t', maxsplit=args.num_refs - 1)
+                     # We are strict about a fixed number of references through the CLI.
+                     # The API, however, supports a variable number of refs per segment
+                     # by simply having '' or None's as dummy placeholders
+                     if len(refs) != args.num_refs:
+                         sacrelogger.error(f'FATAL: line {lineno}: expected {args.num_refs} fields, but found {len(refs)}.')
+                         sys.exit(17)
+                     for refno, ref in enumerate(refs):
+                         full_refs[refno].append(ref)
+
+     # Decide on the number of final references, override the argument
+     args.num_refs = len(full_refs)
+
+     # Read hypotheses
+     # Can't tokenize yet as each metric has its own way of tokenizing things
+     full_systems, sys_names = [], []
+
+     if args.input is None:
+         # Read from STDIN
+         inputfh = io.TextIOWrapper(sys.stdin.buffer, encoding=args.encoding)
+
+         # Guess the number of systems by looking at the first line
+         fields = inputfh.readline().rstrip().split('\t')
+
+         # Set number of systems
+         num_sys = len(fields)
+
+         # Place the first lines already
+         full_systems = [[s] for s in fields]
+
+         # Enumerate the systems
+         sys_names = [f'System {i + 1}' for i in range(num_sys)]
+
+         # Read the rest
+         for line in inputfh:
+             fields = line.rstrip().split('\t')
+             if len(fields) != num_sys:
+                 sacrelogger.error('FATAL: the number of tab-delimited fields in the input stream differs across lines.')
+                 sys.exit(17)
+             # Place systems into the list
+             for sys_idx, sent in enumerate(fields):
+                 full_systems[sys_idx].append(sent.rstrip())
+     else:
+         # Separate files are given for each system output
+         # Ex: --input smt.txt nmt.txt
+         for fname in args.input:
+             sys_name = fname
+
+             if sys_name in sys_names:
+                 if paired_test_mode and sys_name == sys_names[0]:
+                     # Skip loading a system if it was already given as the baseline
+                     sacrelogger.info(f'Ignoring {sys_name!r} as it was also given as the baseline.')
+                     continue
+                 else:
+                     # To avoid ambiguities, we fail if two systems have the same name
+                     sacrelogger.error(f"{sys_name!r} already used to name a system.")
+                     sacrelogger.error("Make sure to have a different basename for each system.")
+                     sys.exit(1)
+
+             # Read the system
+             lines = []
+             for line in smart_open(fname, encoding=args.encoding):
+                 lines.append(line.rstrip())
+             full_systems.append(lines)
+             sys_names.append(sys_name)
+
+     # Set final number of systems
+     num_sys = len(sys_names)
+
+     # Add a baseline prefix to the first system for clarity
+     if paired_test_mode:
+         if args.input is None:
+             # STDIN mode, no explicit system names
+             sys_names = ['Baseline'] + [f'System {i + 1}' for i in range(num_sys - 1)]
+         else:
+             # --input mode, we have names for the systems, just change the 1st one
+             sys_names[0] = f'Baseline: {sys_names[0]}'
+
+     if args.sentence_level:
+         if num_sys > 1:
+             sacrelogger.error('Only one system can be evaluated in sentence-level mode.')
+             sys.exit(1)
+         if args.confidence or paired_test_mode:
+             sacrelogger.error('Statistical tests are unavailable in sentence-level mode.')
+             sys.exit(1)
+
+         # >=2.0.0: effective_order is now part of the BLEU class. For sentence-BLEU
+         # we now need to enable it explicitly, without the user's intervention,
+         # for backward compatibility.
+         args.bleu_effective_order = True
+
+     if paired_test_mode and num_sys == 1:
+         sacrelogger.error('Paired tests require multiple input systems given to --input (-i).')
+         sys.exit(1)
+
+     if num_sys > 1 and args.confidence:
+         sacrelogger.error('Use paired tests (--paired) for multiple systems.')
+         sys.exit(1)
+
+     # Filter subsets if requested
+     outputs = filter_subset(
+         [*full_systems, *full_refs], args.test_set, args.langpair,
+         args.origlang, args.subset)
+
+     # Unpack systems & references back
+     systems, refs = outputs[:num_sys], outputs[num_sys:]
+
+     # Perform some sanity checks
+     for system in systems:
+         if len(system) == 0:
+             message = f'Test set {args.test_set!r} contains no sentence'
+             if args.origlang is not None or args.subset is not None:
+                 message += ' with'
+                 if args.origlang:
+                     message += f' origlang={args.origlang}'
+                 if args.subset:
+                     message += f' subset={args.subset}'
+             sacrelogger.error(message)
+             sys.exit(1)
+
+         # Check lengths
+         sanity_check_lengths(system, refs, test_set=args.test_set)
+
+     # Create the metrics
+     metrics = {}
+     for name in args.metrics:
+         # Each metric's specific arguments are prefixed with `metricname_`
+         # for grouping. Filter accordingly and strip the prefixes prior to
+         # metric object construction.
+         metric_args = args_to_dict(args, name.lower(), strip_prefix=True)
+
+         # This will cache reference stats for faster re-computation if required
+         metric_args['references'] = refs
+
+         # Make it uppercase for the rest of the code
+         name = name.upper()
+         metrics[name] = METRICS[name](**metric_args)
+
+     # Handle sentence level and quit
+     if args.sentence_level:
+         # One metric and one system in use for sentence-level
+         metric, system = list(metrics.values())[0], systems[0]
+
+         for hypothesis, *references in zip(system, *refs):
+             score = metric.sentence_score(hypothesis, references)
+             sig = metric.get_signature().format(args.short)
+             print(score.format(args.width, args.score_only, sig))
+
+         sys.exit(0)
+
+     if args.detail and args.format == 'json':
+         # The translationese info would interfere with the JSON output, disable it
+         args.format = 'text'
+ args.format = 'text'
510
+
511
+ ##############################
512
+ # Corpus level evaluation mode
513
+ ##############################
514
+ if num_sys == 1:
515
+ # Single system evaluation mode
516
+ results = []
517
+ for name in sorted(metrics):
518
+ # compute the score
519
+ score = metrics[name].corpus_score(
520
+ system, references=None,
521
+ n_bootstrap=args.confidence_n if args.confidence else 1)
522
+ # get the signature
523
+ sig = metrics[name].get_signature().format(
524
+ args.short if args.format != 'json' else False)
525
+ results.append(
526
+ score.format(args.width, args.score_only, sig, args.format == 'json'))
527
+
528
+ print_single_results(results, args)
529
+
530
+ # Prints detailed information for translationese effect experiments
531
+ if args.detail:
532
+ print_subset_results(metrics, full_systems[0], full_refs, args)
533
+ else:
534
+ # Multi-system evaluation mode
535
+ named_systems = [(sys_names[i], systems[i]) for i in range(num_sys)]
536
+ sacrelogger.info(f'Found {num_sys} systems.')
537
+
538
+ if not paired_test_mode:
539
+ # Bootstrap resampling or the usual single score computation mode
540
+ sigs = {}
541
+ scores = defaultdict(list)
542
+ scores['System'] = sys_names
543
+
544
+ for sys_name, system in named_systems:
545
+ for name in sorted(metrics):
546
+ score = metrics[name].corpus_score(system, references=None)
547
+ sigs[score.name] = metrics[name].get_signature().format(args.short)
548
+ scores[score.name].append(score.format(args.width, True))
549
+
550
+ else:
551
+ # Paired significance testing mode
552
+ from .significance import PairedTest
553
+
554
+ # Set params
555
+ test_type = 'bs' if args.paired_bs else 'ar'
556
+ n_samples = args.paired_bs_n if args.paired_bs else args.paired_ar_n
557
+
558
+ ps = PairedTest(named_systems, metrics, references=None,
559
+ test_type=test_type, n_samples=n_samples,
560
+ n_jobs=args.paired_jobs)
561
+
562
+ # Set back the number of trials
563
+ args.paired_n = ps.n_samples
564
+
565
+ # Run the test
566
+ sigs, scores = ps()
567
+
568
+ # Get signature strings
569
+ sigs = {k: v.format(args.short) for k, v in sigs.items()}
570
+
571
+ # Dump the results
572
+ print_results_table(scores, sigs, args)
573
+
574
+
575
+ if __name__ == '__main__':
576
+ main()
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/significance.py ADDED
@@ -0,0 +1,435 @@
+ import os
+ import logging
+ import multiprocessing as mp
+ from typing import Sequence, Dict, Optional, Tuple, List, Union, Any, Mapping
+
+ import numpy as np
+
+ from .metrics.base import Metric, Score, Signature
+
+ IS_WINDOWS = os.name == 'nt'
+
+
+ sacrelogger = logging.getLogger('sacrebleu')
+
+
+ class Result:
+     """A container to represent results from a particular statistical
+     significance test.
+
+     :param score: The floating point score for the system at hand.
+     :param p_value: If exists, represents the p-value when the system at
+     hand is compared to a baseline using a paired test.
+     :param mean: When paired bootstrap test is applied, this represents
+     the true mean score estimated from bootstrap resamples of the system.
+     :param ci: When paired bootstrap test is applied, this represents
+     the 95% confidence interval around the true mean score `mean`.
+     """
+     def __init__(self, score: float, p_value: Optional[float] = None,
+                  mean: Optional[float] = None, ci: Optional[float] = None):
+         self.score = score
+         self.p_value = p_value
+         self.mean = mean
+         self.ci = ci
+
+     def __repr__(self):
+         return ','.join([f'{k}={str(v)}' for k, v in self.__dict__.items()])
+
+
+ def estimate_ci(scores: np.ndarray) -> Tuple[float, float]:
+     """Takes a list of scores and returns mean and 95% confidence
+     interval around the mean.
+
+     :param scores: A list of floating point scores.
+     :return: A tuple of mean and the 95% CI.
+     """
+     # Sort the scores
+     scores = np.sort(scores)
+     n = len(scores)
+
+     # Get CI bounds (95%, i.e. 1/40 from left)
+     lower_idx = n // 40
+     upper_idx = n - lower_idx - 1
+     lower, upper = scores[lower_idx], scores[upper_idx]
+     ci = 0.5 * (upper - lower)
+     return (scores.mean(), ci)
+
+
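The percentile logic of `estimate_ci` can be illustrated with a dependency-free sketch (a simplified stand-in, not part of sacrebleu; the name `estimate_ci_plain` is illustrative):

```python
def estimate_ci_plain(scores):
    """Return (mean, half-width of the ~95% percentile interval).

    Pure-Python sketch of the logic above: drop n // 40 (~2.5%) of the
    sorted scores from each tail and report half the remaining spread.
    """
    scores = sorted(scores)
    n = len(scores)
    lower_idx = n // 40
    upper_idx = n - lower_idx - 1
    ci = 0.5 * (scores[upper_idx] - scores[lower_idx])
    return sum(scores) / n, ci
```

For 100 scores this trims two values from each tail, so the reported interval spans the central ~95% of the bootstrap distribution.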
+ def _bootstrap_resample(stats: List[List[Union[int, float]]],
+                         metric: Metric, n_samples: int = 1000) -> Tuple[str, List[Score]]:
+     """Performs bootstrap resampling for a single system to estimate
+     a confidence interval around the true mean.
+
+     :param stats: A list of statistics extracted from the system's hypotheses.
+     :param metric: The `Metric` instance to be used for score computation.
+     :param n_samples: Number of bootstrap resamples to use.
+
+     :return: A tuple of the seed choice as string and the list of `Score`
+     instances for all bootstrap resamples.
+     """
+
+     # Set numpy RNG's seed
+     # If given -> Fix to the given value
+     # If given but =='[Nn]one', don't fix the seed i.e. pull entropy from OS
+     seed = os.environ.get('SACREBLEU_SEED', '12345')
+     _seed = None if seed.lower() == 'none' else int(seed)
+     rng = np.random.default_rng(_seed)
+
+     # The indices that'll produce all bootstrap resamples at once
+     idxs = rng.choice(len(stats), size=(n_samples, len(stats)), replace=True)
+
+     # convert to numpy array. float32 is more efficient
+     stats_np = np.array(stats, dtype='float32')
+
+     # recompute scores for all resamples
+     scores = [
+         metric._compute_score_from_stats(_s.sum(0)) for _s in stats_np[idxs]]
+
+     return str(seed).lower(), scores
+
+
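In plain terms, `_bootstrap_resample` draws `len(stats)` sentences with replacement, `n_samples` times, and rescores each draw. A minimal pure-Python sketch of the same idea, with the mean standing in for the metric's aggregation (the name `bootstrap_means` is illustrative):

```python
import random

def bootstrap_means(per_sentence_stats, n_samples=1000, seed=12345):
    """Draw len(stats) items with replacement, n_samples times, and
    recompute the aggregate score (here simply the mean) per resample."""
    rng = random.Random(seed)
    n = len(per_sentence_stats)
    means = []
    for _ in range(n_samples):
        sample = [per_sentence_stats[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    return means
```

The spread of these resampled means is what `estimate_ci` turns into the reported 95% interval.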
+ def _compute_p_value(stats: np.ndarray, real_difference: float) -> float:
+     """Computes the p-value given the sample statistics and the real statistic.
+
+     :param stats: A numpy array with the sample statistics.
+     :param real_difference: The real statistic.
+     :return: The p-value.
+     """
+     # Taken from: significance/StratifiedApproximateRandomizationTest.java
+     # https://github.com/jhclark/multeval.git
+
+     # "the != is important. if we want to score the same system against itself
+     # having a zero difference should not be attributed to chance."
+
+     c = np.sum(stats > real_difference).item()
+
+     # "+1 applies here, though it only matters for small numbers of shufflings,
+     # which we typically never do. it's necessary to ensure the probability of
+     # falsely rejecting the null hypothesis is no greater than the rejection
+     # level of the test (see william and morgan on significance tests)
+     p = (c + 1) / (len(stats) + 1)
+
+     return p
+
+
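The `(c + 1) / (n + 1)` correction can be checked on a toy example (a standalone sketch, not library code; `p_value` is an illustrative name):

```python
def p_value(shuffled_diffs, real_diff):
    # Count shuffled statistics strictly greater than the observed one;
    # the +1 in numerator and denominator keeps the estimate conservative,
    # so even zero exceedances never yield an exact p of 0.
    c = sum(1 for d in shuffled_diffs if d > real_diff)
    return (c + 1) / (len(shuffled_diffs) + 1)
```

With four shuffled statistics of which two exceed the observed difference, this gives (2 + 1) / (4 + 1) = 0.6.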
+ def _paired_ar_test(baseline_info: Dict[str, Tuple[np.ndarray, Result]],
+                     sys_name: str,
+                     hypotheses: Sequence[str],
+                     references: Optional[Sequence[Sequence[str]]],
+                     metrics: Dict[str, Metric],
+                     n_samples: int = 10000,
+                     n_ar_confidence: int = -1,
+                     seed: Optional[int] = None) -> Tuple[str, Dict[str, Result]]:
+     """Paired two-sided approximate randomization (AR) test for MT evaluation.
+
+     :param baseline_info: A dictionary with `Metric` instances as the keys,
+     that contains sufficient statistics and a `Result` instance for the baseline system.
+     :param sys_name: The name of the system to be evaluated.
+     :param hypotheses: A sequence of string hypotheses for the system.
+     :param references: A sequence of reference documents with document being
+     defined as a sequence of reference strings. If `None`, references
+     will be used through each metric's internal cache.
+     :param metrics: A dictionary of `Metric` instances that will be computed
+     for each system.
+     :param n_samples: The number of AR trials.
+     :param n_ar_confidence: The number of bootstrap resamples to use for
+     confidence estimation. A value of -1 disables confidence estimation.
+     :param seed: The seed value for the RNG. If `None`, the RNG will not be
+     fixed to a particular seed.
+
+     :return: A tuple with first element being the system name and the second
+     being a `Result` namedtuple.
+     """
+     # Seed the RNG
+     rng = np.random.default_rng(seed)
+
+     # Generate indices that'll select stats
+     pos_sel = rng.integers(2, size=(n_samples, len(hypotheses)), dtype=bool)
+
+     # Flip mask to obtain selectors for system hypotheses
+     neg_sel = ~pos_sel
+
+     if n_ar_confidence > 0:
+         # Perform confidence estimation as well
+         bs_idxs = rng.choice(
+             len(hypotheses), size=(n_ar_confidence, len(hypotheses)), replace=True)
+
+     results = {}
+
+     for name, metric in metrics.items():
+         # Use pre-computed match stats for the baseline
+         bl_stats, bl_result = baseline_info[name]
+
+         # Compute system's stats and score
+         sacrelogger.info(f'Computing {name} for {sys_name!r} and extracting sufficient statistics')
+         sys_stats = metric._extract_corpus_statistics(hypotheses, references)
+         sys_score = metric._aggregate_and_compute(sys_stats)
+
+         # original test statistic: absolute difference between baseline and the system
+         diff = abs(bl_result.score - sys_score.score)
+
+         sacrelogger.info(f' > Performing approximate randomization test (# trials: {n_samples})')
+         # get shuffled pseudo systems
+         shuf_a = pos_sel @ bl_stats + neg_sel @ sys_stats
+         shuf_b = neg_sel @ bl_stats + pos_sel @ sys_stats
+
+         # Aggregate trial stats and compute scores for each
+         scores_a = np.array(
+             [metric._aggregate_and_compute(x).score for x in shuf_a[:, None]])
+         scores_b = np.array(
+             [metric._aggregate_and_compute(x).score for x in shuf_b[:, None]])
+
+         # Count the statistical difference and compute the p-value
+         p = _compute_p_value(
+             np.abs(np.array(scores_a) - np.array(scores_b)), diff)
+
+         res = Result(sys_score.score, p)
+
+         if n_ar_confidence > 0:
+             sacrelogger.info(f' > Performing bootstrap resampling for confidence interval (# resamples: {n_ar_confidence})')
+             sys_stats = np.array(sys_stats, dtype='float32')
+             # recompute scores for all resamples
+             sys_scores = np.array([
+                 metric._compute_score_from_stats(_s.sum(0)).score for _s in sys_stats[bs_idxs]
+             ])
+             res.mean, res.ci = estimate_ci(sys_scores)
+
+         # Store the result
+         results[name] = res
+
+     return sys_name, results
+
+
+ def _paired_bs_test(baseline_info: Dict[str, Tuple[np.ndarray, Result]],
+                     sys_name: str,
+                     hypotheses: Sequence[str],
+                     references: Optional[Sequence[Sequence[str]]],
+                     metrics: Dict[str, Metric],
+                     n_samples: int = 1000,
+                     n_ar_confidence: int = -1,
+                     seed: Optional[int] = None) -> Tuple[str, Dict[str, Result]]:
+     """Paired bootstrap resampling test for MT evaluation. This function
+     replicates the behavior of the Moses script called
+     `bootstrap-hypothesis-difference-significance.pl`.
+
+     :param baseline_info: A dictionary with `Metric` instances as the keys,
+     that contains sufficient statistics and a `Result` instance for the baseline system.
+     :param sys_name: The name of the system to be evaluated.
+     :param hypotheses: A sequence of string hypotheses for the system.
+     :param references: A sequence of reference documents with document being
+     defined as a sequence of reference strings. If `None`, references
+     will be used through each metric's internal cache.
+     :param metrics: A dictionary of `Metric` instances that will be computed
+     for each system.
+     :param n_samples: The number of bootstrap resamples.
+     :param n_ar_confidence: This parameter is not used for this function but
+     is there for signature compatibility in the API.
+     :param seed: The seed value for the RNG. If `None`, the RNG will not be
+     fixed to a particular seed.
+
+     :return: A tuple with first element being the system name and the second
+     being a `Result` namedtuple.
+     """
+     # Seed the RNG
+     rng = np.random.default_rng(seed)
+
+     results = {}
+
+     # It takes ~10ms to generate the indices
+     idxs = rng.choice(
+         len(hypotheses), size=(n_samples, len(hypotheses)), replace=True)
+
+     for name, metric in metrics.items():
+         # Use pre-computed match stats for the baseline
+         bl_stats, bl_result = baseline_info[name]
+
+         # Compute system's stats and score
+         sacrelogger.info(f'Computing {name} for {sys_name!r} and extracting sufficient statistics')
+         sys_stats = metric._extract_corpus_statistics(hypotheses, references)
+         sys_score = metric._aggregate_and_compute(sys_stats)
+
+         # Convert to numpy arrays for efficient indexing
+         sys_stats = np.array(sys_stats, dtype='float32')
+         bl_stats = np.array(bl_stats, dtype='float32')
+
+         # original test statistic: absolute difference between baseline and the system
+         diff = abs(bl_result.score - sys_score.score)
+
+         sacrelogger.info(f' > Performing paired bootstrap resampling test (# resamples: {n_samples})')
+         scores_bl = np.array(
+             [metric._compute_score_from_stats(_s.sum(0)).score for _s in bl_stats[idxs]])
+         scores_sys = np.array(
+             [metric._compute_score_from_stats(_s.sum(0)).score for _s in sys_stats[idxs]])
+
+         # Compute CI as well
+         sys_mean, sys_ci = estimate_ci(scores_sys)
+
+         # Compute the statistics
+         sample_diffs = np.abs(scores_sys - scores_bl)
+         stats = sample_diffs - sample_diffs.mean()
+
+         # Count the statistical difference and compute the p-value
+         p = _compute_p_value(stats, diff)
+
+         results[name] = Result(sys_score.score, p, sys_mean, sys_ci)
+
+     return sys_name, results
+
+
+ class PairedTest:
+     """This is the manager class that will call the actual standalone implementation
+     for approximate randomization or paired bootstrap resampling, based on the
+     `test_type` argument.
+
+     :param named_systems: A list of (system_name, system_hypotheses) tuples on
+     which the test will be applied.
+     :param metrics: A dictionary of `Metric` instances that will be computed
+     for each system.
+     :param references: A sequence of reference documents with document being
+     defined as a sequence of reference strings. If `None`, already cached references
+     will be used through each metric's internal cache.
+     :param test_type: `ar` for approximate randomization, `bs` for paired bootstrap.
+     :param n_samples: The number of AR trials (for `ar`) or bootstrap resamples (for `bs`).
+     The defaults (10000 or 1000 respectively) will be used if 0 is passed.
+     :param n_ar_confidence: If `approximate randomization` is selected, the number
+     of bootstrap resamples to use for confidence estimation. A value of -1 disables
+     confidence estimation. 0 will use the default of 1000.
+     :param n_jobs: If 0, a worker process will be spawned for each system variant.
+     If > 0, the number of workers will be set accordingly. The default of 1
+     does not use multi-processing.
+     """
+     _DEFAULT_SAMPLES = {
+         'ar': 10000,
+         'bs': 1000,
+     }
+
+     def __init__(self, named_systems: List[Tuple[str, Sequence[str]]],
+                  metrics: Mapping[str, Metric],
+                  references: Optional[Sequence[Sequence[str]]],
+                  test_type: str = 'ar',
+                  n_samples: int = 0,
+                  n_ar_confidence: int = -1,
+                  n_jobs: int = 1):
+         assert test_type in ('ar', 'bs'), f"Unknown test type {test_type!r}"
+         self.test_type = test_type
+
+         # Set method
+         if self.test_type == 'ar':
+             self._fn = _paired_ar_test
+         elif self.test_type == 'bs':
+             self._fn = _paired_bs_test
+
+         # Set numpy RNG's seed
+         # If given -> Fix to the given value
+         # If given but =='[Nn]one', don't fix the seed i.e. pull entropy from OS
+         seed = os.environ.get('SACREBLEU_SEED', '12345')
+         self._seed = None if seed.lower() == 'none' else int(seed)
+         self.n_jobs = n_jobs
+         self.references = references
+         self.named_systems = named_systems
+
+         # Set the defaults if requested
+         self.n_ar_confidence = n_ar_confidence if n_ar_confidence != 0 else \
+             self._DEFAULT_SAMPLES['bs']
+
+         self.n_samples = n_samples if n_samples > 0 else \
+             self._DEFAULT_SAMPLES[self.test_type]
+
+         # Number of systems (excluding the baseline)
+         self.n_systems = len(named_systems) - 1
+
+         # Decide on number of workers
+         if IS_WINDOWS:
+             sacrelogger.warning('Parallel tests are not supported on Windows.')
+             self.n_jobs = 1
+         elif self.n_jobs == 0:
+             # Decide automatically
+             # Divide by two to ignore hyper-threading
+             n_max_jobs = mp.cpu_count() // 2
+             if n_max_jobs == 0:
+                 self.n_jobs = 1
+             else:
+                 # Don't use more workers than the number of CPUs
+                 self.n_jobs = min(n_max_jobs, self.n_systems)
+
+         self._signatures: Dict[str, Signature] = {}
+         self._baseline_info: Dict[str, Tuple[Any, Result]] = {}
+
+         ##################################################
+         # Pre-compute and cache baseline system statistics
+         ##################################################
+         self.metrics = {}
+
+         bl_name, bl_hyps = self.named_systems[0]
+
+         for name, metric in metrics.items():
+             sacrelogger.info(f'Pre-computing {name} statistics for {bl_name!r}')
+             bl_stats = metric._extract_corpus_statistics(bl_hyps, self.references)
+             bl_score = metric._aggregate_and_compute(bl_stats)
+
+             # Compute CI for the baseline here once
+             confidence_n = self.n_samples if self.test_type == 'bs' \
+                 else self.n_ar_confidence
+
+             bl_mean, bl_ci = None, None
+             if confidence_n > 0:
+                 _, bl_scores = _bootstrap_resample(bl_stats, metric, confidence_n)
+                 bl_mean, bl_ci = estimate_ci(np.array([x.score for x in bl_scores]))
+
+             result = Result(bl_score.score, mean=bl_mean, ci=bl_ci)
+             # Use updated name for the metric
+             self._baseline_info[bl_score.name] = (bl_stats, result)
+             self.metrics[bl_score.name] = metric
+
+             # Update metric signature as well
+             sig = metric.get_signature()
+             sig.update('seed', str(self._seed).lower())
+
+             # Num samples for bs, num trials for AR
+             sig.update(self.test_type, self.n_samples)
+             if self.n_ar_confidence > 0:
+                 # Bootstrap is used for AR CI as well
+                 sig.update('bs', self.n_ar_confidence)
+             self._signatures[bl_score.name] = sig
+
+     def __call__(self) -> Tuple[Dict[str, Signature], Dict[str, List[Union[str, Result]]]]:
+         """Runs the paired test either on single or multiple worker processes."""
+         tasks = []
+         scores: Dict[str, List[Union[str, Result]]] = {}
+
+         # Add the name column
+         scores['System'] = [ns[0] for ns in self.named_systems]
+
+         # Store baseline results as the first position
+         for metric, (_, result) in self._baseline_info.items():
+             scores[metric] = [result]
+
+         # Prepare list of arguments for each comparison
+         # Skip the baseline (pos: 0)
+         for idx, (name, hyps) in enumerate(self.named_systems[1:]):
+             seed = self._seed if self._seed else None
+
+             tasks.append(
+                 (self._baseline_info, name, hyps, self.references,
+                  self.metrics, self.n_samples, self.n_ar_confidence, seed))
+
+         # Run the test(s)
+         if self.n_jobs == 1:
+             results = [self._fn(*args) for args in tasks]
+         else:
+             # NOTE: The overhead of worker creation is not negligible
+             # but if you have many systems and TER enabled, this significantly
+             # speeds up the test.
+             # NOTE: This only works on Linux/Mac OS X but not Windows. Windows only
+             # supports `spawn` backend which requires things to be called
+             # from within __main__.
+             sacrelogger.info(f'Launching {self.n_jobs} parallel workers.')
+             with mp.get_context('fork').Pool(self.n_jobs) as pool:
+                 jobs = [pool.apply_async(self._fn, args) for args in tasks]
+
+                 # wait for completion
+                 results = [j.get() for j in jobs]
+
+         # Keep the order deterministic
+         for sys_name, sys_results in results:
+             for metric, _result in sys_results.items():
+                 scores[metric].append(_result)
+
+         return self._signatures, scores
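Putting the pieces together, the shuffle-and-count logic of `_paired_ar_test` can be sketched end-to-end on per-sentence scores, without numpy or `Metric` objects (names are illustrative, and the mean stands in for the real metric aggregation):

```python
import random

def paired_ar(scores_a, scores_b, n_trials=10000, seed=12345):
    """Toy approximate randomization over paired per-sentence scores:
    randomly swap the paired entries each trial and count how often the
    shuffled mean difference exceeds the observed one."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    count = 0
    for _ in range(n_trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a  # swap labels for this sentence
            diff += a - b
        if abs(diff) / n > observed:
            count += 1
    # Same conservative correction as _compute_p_value above
    return (count + 1) / (n_trials + 1)
```

For two clearly separated systems the shuffled differences essentially never reach the observed one, so the p-value approaches its floor of `1 / (n_trials + 1)`.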
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/utils.py ADDED
@@ -0,0 +1,639 @@
1
+ import itertools
2
+ import json
3
+ import os
4
+ import re
5
+ import sys
6
+ import gzip
7
+ import math
8
+ import hashlib
9
+ import logging
10
+ import portalocker
11
+ from collections import defaultdict
12
+ from typing import List, Optional, Sequence, Dict
13
+ from argparse import Namespace
14
+
15
+ from tabulate import tabulate
16
+ import colorama
17
+
18
+
19
+ # Where to store downloaded test sets.
20
+ # Define the environment variable $SACREBLEU, or use the default of ~/.sacrebleu.
21
+ #
22
+ # Querying for a HOME environment variable can result in None (e.g., on Windows)
23
+ # in which case the os.path.join() throws a TypeError. Using expanduser() is
24
+ # a safe way to get the user's home folder.
25
+ USERHOME = os.path.expanduser("~")
26
+ SACREBLEU_DIR = os.environ.get('SACREBLEU', os.path.join(USERHOME, '.sacrebleu'))
27
+
28
+ sacrelogger = logging.getLogger('sacrebleu')
29
+
30
+
31
+ class Color:
32
+ ENABLE_COLORS = True
33
+
34
+ @staticmethod
35
+ def format(msg: str, color: str) -> str:
36
+ """Returns a colored version of the given message string.
37
+
38
+ :param msg: The string to Color.format.
39
+ :param color: The color specifier i.e. 'red', 'blue', 'green', etc.
40
+ :return: A colored version of the string if the output is a terminal.
41
+ """
42
+ if not Color.ENABLE_COLORS:
43
+ return msg
44
+ _ansi_str = getattr(colorama.Fore, color.upper(), None)
45
+ if _ansi_str:
46
+ return f'{_ansi_str}{msg}{colorama.Style.RESET_ALL}'
47
+
48
+ return msg
49
+
50
+
51
+ def _format_score_lines(scores: dict,
52
+ width: int = 2,
53
+ multiline: bool = True) -> Dict[str, List[str]]:
54
+ """Formats the scores prior to tabulating them."""
55
+ new_scores = {'System': scores.pop('System')}
56
+ p_val_break_char = '\n' if multiline else ' '
57
+ is_bootstrap = False
58
+
59
+ def _color_p_value(p: float):
60
+ msg = f'(p = {p:.4f})'
61
+ if p > 0.05:
62
+ return Color.format(msg, 'red')
63
+ return msg + '*'
64
+
65
+ for metric, vals in scores.items():
66
+ new_vals = []
67
+
68
+ for result in vals:
69
+ if not isinstance(result, str):
70
+ # Format result instances
71
+ _str = f'{result.score:.{width}f}'
72
+ if result.mean is not None:
73
+ is_bootstrap = True
74
+ _str += f' ({result.mean:.{width}f} ± {result.ci:.{width}f})'
75
+ if result.p_value is not None:
76
+ _str += p_val_break_char + _color_p_value(result.p_value)
77
+ else:
78
+ # Already formatted in non paired-test mode
79
+ _str = result
80
+
81
+ new_vals.append(_str)
82
+
83
+ if is_bootstrap:
84
+ # Change titles
85
+ metric += ' (μ ± 95% CI)'
86
+
87
+ new_scores[metric] = new_vals
88
+
89
+ return new_scores
90
+
91
+
92
+ def print_results_table(results: dict, signatures: dict, args: Namespace):
93
+ """Prints out a nicely formatted table for multi-system evaluation mode."""
94
+
95
+ if args.format == 'json':
96
+ proper_json = []
97
+ dict_keys = list(results.keys())
98
+ for i in range(len(results['System'])):
99
+ value = {}
100
+ value['system'] = results['System'][i]
101
+ # parse metrics
102
+ for j in range(1, len(dict_keys)):
103
+ if isinstance(results[dict_keys[j]][i], str):
104
+ value[dict_keys[j]] = results[dict_keys[j]][i]
105
+ else:
106
+ # Values inside object as dict
107
+ value[dict_keys[j]] = results[dict_keys[j]][i].__dict__
108
+ proper_json.append(value)
109
+
110
+ print(json.dumps(proper_json, indent=4))
111
+ return
112
+
113
+ tablefmt = args.format
114
+ if tablefmt in ('text'):
115
+ tablefmt = 'fancy_grid'
116
+ elif tablefmt == 'latex':
117
+ # Use booktabs
118
+ tablefmt = 'latex_booktabs'
119
+
120
+ # If paired testing has been given, this'll format the score lines
121
+ results = _format_score_lines(
122
+ results, args.width, multiline=tablefmt == 'fancy_grid')
123
+
124
+ new_dict = {}
125
+
126
+ # Color the column names and the baseline system name and scores
127
+ has_baseline = False
128
+ baseline_name = ''
129
+ for name in results.keys():
130
+ val = results[name]
131
+ if val[0].startswith('Baseline:') or has_baseline:
132
+ if val[0].startswith('Baseline:'):
133
+ baseline_name = val[0]
134
+ has_baseline = True
135
+ val[0] = Color.format(val[0], 'yellow')
136
+ new_dict[Color.format(name, 'cyan')] = results[name]
137
+
138
+ # Finally tabulate
139
+ table = tabulate(
140
+ new_dict, headers='keys', tablefmt=tablefmt,
141
+ colalign=('right', ),
142
+ stralign='center',
143
+ numalign='center',
144
+ floatfmt=f'.{args.width}f')
145
+
146
+ print(table)
147
+ print()
148
+
149
+ is_paired = args.paired_bs or args.paired_ar
150
+
151
+ if is_paired:
152
+ test_type = 'bootstrap resampling' if args.paired_bs else 'approximate randomization'
153
+ n_samples_or_trials = args.paired_bs_n if args.paired_bs else args.paired_ar_n
154
+ test_sample_type = 'resampling trials' if args.paired_bs else 'trials'
155
+ msg = f'Paired {test_type} test with {n_samples_or_trials} {test_sample_type}'
156
+
157
+ bline = Color.format('baseline', 'yellow')
158
+ bline_name = Color.format(baseline_name, 'yellow')
159
+ null_hyp = Color.format('Null hypothesis', 'green')
160
+ pval_color = Color.format('highlighted in red', 'red')
161
+
162
+ # Print fancy header
163
+ print('-' * len(msg) + '\n' + msg + '\n' + '-' * len(msg))
164
+ print(f' - Each system is pairwise compared to {bline_name}.')
165
+     if args.paired_bs:
+         print(' Actual system score / bootstrap estimated true mean / 95% CI are provided for each metric.')
+     else:
+         print(' Actual system score is provided for each metric.')
+     print()
+     print(f' - {null_hyp}: the system and the {bline} translations are essentially')
+     print(f' generated by the same underlying process. For a given system and the {bline},')
+     print(' the p-value is roughly the probability of the absolute score difference (delta)')
+     print(f' or higher occurring due to chance, under the assumption that the {null_hyp.lower()} is correct.')
+     print()
+     print(f' - Assuming a significance threshold of 0.05, the {null_hyp.lower()} can be rejected')
+     print(' for p-values < 0.05 (marked with "*"). This means that the delta is unlikely to be attributed')
+     print(f' to chance, hence the system is significantly "different" than the {bline}.')
+     print(f' Otherwise, the p-values are {pval_color}.')
+     print()
+     print(f' - NOTE: Significance does not tell whether a system is "better" than the {bline} but rather')
+     print(' emphasizes the "difference" of the systems in terms of the replicability of the delta.')
+     print()
+
+     print('-----------------')
+     print('Metric signatures')
+     print('-----------------')
+     for name, sig in signatures.items():
+         print(f' - {name:<10} {sig}')
+
+
+ def print_single_results(results: List[str], args: Namespace):
+     """Re-process metric strings to align them nicely."""
+     if args.format == 'json':
+         if len(results) > 1:
+             proper_json = '[\n' + ',\n'.join(results) + '\n]'
+             print(proper_json)
+         else:
+             print(results[0])
+         return
+
+     # Color confidence strings for emphasis
+     if 'μ' in results[0]:
+         color_re = re.compile(r'(\(μ = [0-9\.]+ ± [0-9\.]+\))')
+         for idx in range(len(results)):
+             results[idx] = color_re.sub(
+                 lambda m: Color.format(m.group(), 'cyan'), results[idx])
+
+     if len(results) == 1:
+         # Just one system, nothing to align.
+         print(results[0])
+         return
+
+     # Align by '=' character
+     lens = []
+     for line in results:
+         # If not score_only, split lines from '=' for re-alignment
+         try:
+             lens.append(line.index('=') - 1)
+         except ValueError:
+             print(line)
+
+     if len(lens) > 0:
+         w = max(lens)
+         for (_len, line) in zip(lens, results):
+             left, right = line[:_len], line[_len:]
+             print(f'{left:>{w}}{right}')
+
+
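For illustration, the `=`-alignment step of `print_single_results` can be exercised standalone. The helper name below (`align_on_equals`) is hypothetical and simplified: it assumes every line contains a `=`, whereas the function above also tolerates lines without one.

```python
def align_on_equals(results):
    """Right-align the text before '=' so all '=' signs line up
    (mirrors the alignment loop in print_single_results above)."""
    lens = [line.index('=') - 1 for line in results]   # width of each left part
    w = max(lens)
    return [f'{line[:l]:>{w}}{line[l:]}' for l, line in zip(lens, results)]

print('\n'.join(align_on_equals(['BLEU = 23.1', 'chrF2 = 52.4'])))
#  BLEU = 23.1
# chrF2 = 52.4
```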
+ def sanity_check_lengths(system: Sequence[str],
+                          refs: Sequence[Sequence[str]],
+                          test_set: Optional[str] = None):
+     n_hyps = len(system)
+     if any(len(ref_stream) != n_hyps for ref_stream in refs):
+         sacrelogger.error("System and reference streams have different lengths.")
+         if test_set:
+             sacrelogger.error("This could be an issue with your system output "
+                               "or with sacreBLEU's reference database if -t is given.")
+             sacrelogger.error("For the latter, try cleaning out the cache by typing:\n")
+             sacrelogger.error(f" rm -r {SACREBLEU_DIR}/{test_set}\n")
+             sacrelogger.error("The test sets will be re-downloaded the next time you run sacreBLEU.")
+         sys.exit(1)
+
+
+ def smart_open(file, mode='rt', encoding='utf-8'):
+     """Convenience function for reading compressed or plain text files.
+     :param file: The file to read.
+     :param mode: The file mode (read, write).
+     :param encoding: The file encoding.
+     """
+     if file.endswith('.gz'):
+         return gzip.open(file, mode=mode, encoding=encoding, newline="\n")
+     return open(file, mode=mode, encoding=encoding, newline="\n")
+
+
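`smart_open` lets the rest of the module treat `.gz` and plain text files identically. A minimal standalone check (the helper is copied from above; the temp-file names are made up for the demo):

```python
import gzip
import os
import tempfile

def smart_open(file, mode='rt', encoding='utf-8'):
    """Copied from above: open .gz files via gzip, everything else plainly."""
    if file.endswith('.gz'):
        return gzip.open(file, mode=mode, encoding=encoding, newline="\n")
    return open(file, mode=mode, encoding=encoding, newline="\n")

tmpdir = tempfile.mkdtemp()
plain = os.path.join(tmpdir, 'refs.txt')
gzed = os.path.join(tmpdir, 'refs.txt.gz')
with open(plain, 'w', encoding='utf-8') as f:
    f.write('hello\nworld\n')
with gzip.open(gzed, 'wt', encoding='utf-8') as f:
    f.write('hello\nworld\n')

# Both compressed and plain inputs yield the same line stream.
for path in (plain, gzed):
    with smart_open(path) as f:
        assert f.read().splitlines() == ['hello', 'world']
```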
+ def my_log(num: float) -> float:
+     """
+     Floors the log function
+
+     :param num: the number
+     :return: log(num) floored to a very low number
+     """
+
+     if num == 0.0:
+         return -9999999999
+     return math.log(num)
+
+
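`my_log` floors `log(0)` to a large negative constant instead of letting `math.log` raise `ValueError`, presumably so downstream score computations stay well-defined when a count is zero. Its behaviour in isolation (function copied from above):

```python
import math

def my_log(num: float) -> float:
    """Copied from above: log() with log(0) floored instead of raising."""
    if num == 0.0:
        return -9999999999
    return math.log(num)

assert my_log(1.0) == 0.0
assert my_log(0.0) == -9999999999      # math.log(0.0) would raise ValueError
assert abs(my_log(math.e) - 1.0) < 1e-12
```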
+ def sum_of_lists(lists):
+     """Aggregates list of numeric lists by summing."""
+     if len(lists) == 1:
+         return lists[0]
+
+     # Preserve datatype
+     size = len(lists[0])
+     init_val = type(lists[0][0])(0.0)
+     total = [init_val] * size
+     for ll in lists:
+         for i in range(size):
+             total[i] += ll[i]
+     return total
+
+
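`sum_of_lists` sums equal-length numeric rows element-wise; seeding the accumulator with `type(lists[0][0])(0.0)` keeps integer statistics as integers. A quick demonstration (function copied from above):

```python
def sum_of_lists(lists):
    """Copied from above: element-wise sum of equal-length numeric lists."""
    if len(lists) == 1:
        return lists[0]
    size = len(lists[0])
    init_val = type(lists[0][0])(0.0)   # preserve the element datatype
    total = [init_val] * size
    for ll in lists:
        for i in range(size):
            total[i] += ll[i]
    return total

assert sum_of_lists([[1, 2], [3, 4], [5, 6]]) == [9, 12]
assert all(isinstance(x, int) for x in sum_of_lists([[1, 2], [3, 4]]))
assert sum_of_lists([[0.5, 1.5]]) == [0.5, 1.5]   # single row is returned as-is
```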
+ def args_to_dict(args, prefix: str, strip_prefix: bool = False):
+     """Filters argparse's `Namespace` into dictionary with arguments
+     beginning with the given prefix."""
+     prefix += '_'
+     d = {}
+     for k, v in args.__dict__.items():
+         if k.startswith(prefix):
+             k = k.replace(prefix, '') if strip_prefix else k
+             d[k] = v
+     return d
+
+
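`args_to_dict` groups prefixed CLI options into a per-metric dictionary. A standalone demo (the option names below are made up for illustration); note that `str.replace` strips every occurrence of the prefix substring, not just a leading one:

```python
from argparse import Namespace

def args_to_dict(args, prefix: str, strip_prefix: bool = False):
    """Copied from above: collect Namespace entries starting with `prefix` + '_'."""
    prefix += '_'
    d = {}
    for k, v in args.__dict__.items():
        if k.startswith(prefix):
            k = k.replace(prefix, '') if strip_prefix else k
            d[k] = v
    return d

args = Namespace(bleu_smooth='exp', bleu_tokenize='13a', chrf_order=6)
assert args_to_dict(args, 'bleu') == {'bleu_smooth': 'exp', 'bleu_tokenize': '13a'}
assert args_to_dict(args, 'bleu', strip_prefix=True) == {'smooth': 'exp', 'tokenize': '13a'}
```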
+ def print_test_set(test_set, langpair, requested_fields, origlang=None, subset=None):
+     """Prints to STDOUT the specified side of the specified test set.
+
+     :param test_set: the test set to print
+     :param langpair: the language pair
+     :param requested_fields: the fields to print
+     :param origlang: print only sentences with a given original language (2-char ISO639-1 code), "non-" prefix means negation
+     :param subset: print only sentences whose document annotation matches a given regex
+     """
+     if test_set not in DATASETS:
+         raise Exception(f"No such test set {test_set}")
+
+     fieldnames = DATASETS[test_set].fieldnames(langpair)
+     all_files = DATASETS[test_set].get_files(langpair)
+
+     if "all" in requested_fields and len(requested_fields) != 1:
+         sacrelogger.error("Cannot use --echo all with other fields")
+         sys.exit(1)
+     elif "all" in requested_fields:
+         requested_fields = fieldnames
+
+     # backwards compatibility: allow "ref" even if not present (choose first)
+     if "ref" in requested_fields and "ref" not in fieldnames:
+         replacement_ref = min([f for f in fieldnames if f.startswith("ref")])
+         requested_fields = [f if f != "ref" else replacement_ref for f in requested_fields]
+
+     files = []
+     for field in requested_fields:
+         if field not in fieldnames:
+             sacrelogger.error(f"No such field {field} in test set {test_set} for language pair {langpair}.")
+             sacrelogger.error(f"available fields for {test_set}/{langpair}: {', '.join(fieldnames)}")
+             if "ref" not in fieldnames:
+                 subref = min([f for f in fieldnames if f.startswith("ref")])
+                 sacrelogger.error(f"'ref' also allowed for backwards compatibility (will return {subref})")
+             sys.exit(1)
+         index = fieldnames.index(field)
+         files.append(all_files[index])
+
+     streams = [smart_open(file) for file in files]
+     streams = filter_subset(streams, test_set, langpair, origlang, subset)
+     for lines in zip(*streams):
+         print('\t'.join(map(lambda x: x.rstrip(), lines)))
+
+
+ def get_source_file(test_set: str, langpair: str) -> str:
+     """
+     Returns the source file for a given testset/langpair.
+     Downloads it first if it is not already local.
+
+     :param test_set: The test set (e.g., "wmt19")
+     :param langpair: The language pair (e.g., "de-en")
+     :return: the path to the requested source file
+     """
+     if test_set not in DATASETS:
+         raise Exception(f"No such test set {test_set}")
+
+     return DATASETS[test_set].get_source_file(langpair)
+
+
+ def get_reference_files(test_set: str, langpair: str) -> List[str]:
+     """
+     Returns a list of one or more reference file paths for the given testset/langpair.
+     Downloads the references first if they are not already local.
+
+     :param test_set: The test set (e.g., "wmt19")
+     :param langpair: The language pair (e.g., "de-en")
+     :return: a list of one or more reference file paths
+     """
+     if test_set not in DATASETS:
+         raise Exception(f"No such test set {test_set}")
+     return DATASETS[test_set].get_reference_files(langpair)
+
+
+ def get_files(test_set, langpair) -> List[str]:
+     """
+     Returns the path of the source file and all reference files for
+     the provided test set / language pair.
+     Downloads the references first if they are not already local.
+
+     :param test_set: The test set (e.g., "wmt19")
+     :param langpair: The language pair (e.g., "de-en")
+     :return: a list of the source file and all reference files
+     """
+
+     if test_set not in DATASETS:
+         raise Exception(f"No such test set {test_set}")
+     return DATASETS[test_set].get_files(langpair)
+
+
+ def extract_tarball(filepath, destdir):
+     sacrelogger.info(f'Extracting {filepath} to {destdir}')
+     if filepath.endswith('.tar.gz') or filepath.endswith('.tgz'):
+         import tarfile
+         with tarfile.open(filepath) as tar:
+             tar.extractall(path=destdir)
+     elif filepath.endswith('.zip'):
+         import zipfile
+         with zipfile.ZipFile(filepath, 'r') as zipfile:
+             zipfile.extractall(path=destdir)
+
+
+ def get_md5sum(dest_path):
+     # Check md5sum
+     md5 = hashlib.md5()
+     with open(dest_path, 'rb') as infile:
+         for line in infile:
+             md5.update(line)
+     return md5.hexdigest()
+
+
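`get_md5sum` hashes the file incrementally (chunk by chunk, here per line) rather than reading the whole file into memory, which matters for large tarballs. The result is identical to hashing the full byte string at once (function copied from above; the temp file is made up for the demo):

```python
import hashlib
import tempfile

def get_md5sum(dest_path):
    """Copied from above: incremental MD5 of a file's contents."""
    md5 = hashlib.md5()
    with open(dest_path, 'rb') as infile:
        for line in infile:
            md5.update(line)
    return md5.hexdigest()

data = b'hello\nworld\n'
with tempfile.NamedTemporaryFile('wb', delete=False) as f:
    f.write(data)
    path = f.name

# Incremental hashing matches a one-shot hash of the same bytes.
assert get_md5sum(path) == hashlib.md5(data).hexdigest()
```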
+ def download_file(source_path, dest_path, extract_to=None, expected_md5=None):
+     """Downloading utility.
+
+     Downloads the specified test to the system location specified by the SACREBLEU environment variable.
+
+     :param source_path: the remote uri to download
+     :param dest_path: where to save the file
+     :param extract_to: for tarballs, where to extract to
+     :param expected_md5: the MD5 sum
+     :return: the set of processed file names
+     """
+     import urllib.request
+     import ssl
+
+     outdir = os.path.dirname(dest_path)
+     os.makedirs(outdir, exist_ok=True)
+
+     # Make sure to open in mode "a"
+     lockfile = f"{dest_path}.lock"
+     with portalocker.Lock(lockfile, timeout=60):
+
+         if not os.path.exists(dest_path) or os.path.getsize(dest_path) == 0:
+             sacrelogger.info(f"Downloading {source_path} to {dest_path}")
+
+             try:
+                 with urllib.request.urlopen(source_path) as f, open(dest_path, 'wb') as out:
+                     out.write(f.read())
+             except ssl.SSLError:
+                 sacrelogger.error('An SSL error was encountered in downloading the files. If you\'re on a Mac, '
+                                   'you may need to run the "Install Certificates.command" file located in the '
+                                   '"Python 3" folder, often found under /Applications')
+                 sys.exit(1)
+
+             if expected_md5 is not None:
+                 cur_md5 = get_md5sum(dest_path)
+                 if cur_md5 != expected_md5:
+                     sacrelogger.error(f'Fatal: MD5 sum of downloaded file was incorrect (got {cur_md5}, expected {expected_md5}).')
+                     sacrelogger.error(f'Please manually delete {dest_path!r} and rerun the command.')
+                     sacrelogger.error('If the problem persists, the tarball may have changed, in which case, please contact the SacreBLEU maintainer.')
+                     sys.exit(1)
+
+         # Extract the tarball
+         if extract_to is not None:
+             extract_tarball(dest_path, extract_to)
+
+
+ def download_test_set(test_set, langpair=None):
+     """Downloads the specified test to the system location specified by the SACREBLEU environment variable.
+
+     :param test_set: the test set to download
+     :param langpair: the language pair (needed for some datasets)
+     :return: the set of processed file names
+     """
+     if test_set not in DATASETS:
+         raise Exception(f"No such test set {test_set}")
+     dataset = DATASETS[test_set]
+     file_paths = dataset.get_files(langpair)
+     return file_paths
+
+
+ def get_langpairs_for_testset(testset: str) -> List[str]:
+     """Return a list of language pairs for a given test set."""
+     if testset not in DATASETS:
+         return []
+     return list(DATASETS[testset].langpairs.keys())
+
+
+ def get_available_testsets() -> List[str]:
+     """Return a list of available test sets."""
+     return sorted(DATASETS.keys(), reverse=True)
+
+ def get_available_testsets_for_langpair(langpair: str) -> List[str]:
+     """Return a list of available test sets for a given language pair"""
+     parts = langpair.split('-')
+     srclang = parts[0]
+     trglang = parts[1]
+
+     testsets = []
+     for dataset in DATASETS.values():
+         if f'{srclang}-{trglang}' in dataset.langpairs \
+                 or f'{trglang}-{srclang}' in dataset.langpairs:
+             testsets.append(dataset.name)
+
+     return testsets
+
+
+ def get_available_origlangs(test_sets, langpair) -> List[str]:
+     """Return a list of origlang values according to the raw XML/SGM files."""
+     if test_sets is None:
+         return []
+
+     origlangs = set()
+     for test_set in test_sets.split(','):
+         dataset = DATASETS[test_set]
+         rawfile = os.path.join(SACREBLEU_DIR, test_set, 'raw', dataset.langpairs[langpair][0])
+         from .dataset.wmt_xml import WMTXMLDataset
+         if isinstance(dataset, WMTXMLDataset):
+             for origlang in dataset._unwrap_wmt21_or_later(rawfile)['origlang']:
+                 origlangs.add(origlang)
+         if rawfile.endswith('.sgm'):
+             with smart_open(rawfile) as fin:
+                 for line in fin:
+                     if line.startswith('<doc '):
+                         doc_origlang = re.sub(r'.* origlang="([^"]+)".*\n', '\\1', line)
+                         origlangs.add(doc_origlang)
+     return sorted(list(origlangs))
+
+
+ def get_available_subsets(test_sets, langpair) -> List[str]:
+     """Return a list of domain values according to the raw XML files and domain/country values from the SGM files."""
+     if test_sets is None:
+         return []
+
+     subsets = set()
+     for test_set in test_sets.split(','):
+         dataset = DATASETS[test_set]
+         from .dataset.wmt_xml import WMTXMLDataset
+         if isinstance(dataset, WMTXMLDataset):
+             rawfile = os.path.join(SACREBLEU_DIR, test_set, 'raw', dataset.langpairs[langpair][0])
+             fields = dataset._unwrap_wmt21_or_later(rawfile)
+             if 'domain' in fields:
+                 subsets |= set(fields['domain'])
+         elif test_set in SUBSETS:
+             subsets |= set("country:" + v.split("-")[0] for v in SUBSETS[test_set].values())
+             subsets |= set(v.split("-")[1] for v in SUBSETS[test_set].values())
+     return sorted(list(subsets))
+
+ def filter_subset(systems, test_sets, langpair, origlang, subset=None):
+     """Filter sentences with a given origlang (or subset) according to the raw SGM files."""
+     if origlang is None and subset is None:
+         return systems
+     if test_sets is None or langpair is None:
+         raise ValueError('Filtering for --origlang or --subset needs a test (-t) and a language pair (-l).')
+
+     if subset is not None and subset.startswith('country:'):
+         subset = subset[8:]
+
+     re_origlang = re.compile(r'.* origlang="([^"]+)".*\n')
+     re_id = re.compile(r'.* docid="([^"]+)".*\n')
+
+     indices_to_keep = []
+     for test_set in test_sets.split(','):
+         dataset = DATASETS[test_set]
+         rawfile = os.path.join(SACREBLEU_DIR, test_set, 'raw', dataset.langpairs[langpair][0])
+         from .dataset.wmt_xml import WMTXMLDataset
+         if isinstance(dataset, WMTXMLDataset):
+             fields = dataset._unwrap_wmt21_or_later(rawfile)
+             domains = fields['domain'] if 'domain' in fields else itertools.repeat(None)
+             for doc_origlang, doc_domain in zip(fields['origlang'], domains):
+                 if origlang is None:
+                     include_doc = True
+                 else:
+                     if origlang.startswith('non-'):
+                         include_doc = doc_origlang != origlang[4:]
+                     else:
+                         include_doc = doc_origlang == origlang
+                 if subset is not None and (doc_domain is None or not re.search(subset, doc_domain)):
+                     include_doc = False
+                 indices_to_keep.append(include_doc)
+         elif rawfile.endswith('.sgm'):
+             doc_to_tags = {}
+             if subset is not None:
+                 if test_set not in SUBSETS:
+                     raise Exception('No subset annotation available for test set ' + test_set)
+                 doc_to_tags = SUBSETS[test_set]
+             with smart_open(rawfile) as fin:
+                 include_doc = False
+                 for line in fin:
+                     if line.startswith('<doc '):
+                         if origlang is None:
+                             include_doc = True
+                         else:
+                             doc_origlang = re_origlang.sub(r'\1', line)
+                             if origlang.startswith('non-'):
+                                 include_doc = doc_origlang != origlang[4:]
+                             else:
+                                 include_doc = doc_origlang == origlang
+
+                         if subset is not None:
+                             doc_id = re_id.sub(r'\1', line)
+                             if not re.search(subset, doc_to_tags.get(doc_id, '')):
+                                 include_doc = False
+                     if line.startswith('<seg '):
+                         indices_to_keep.append(include_doc)
+         else:
+             raise Exception(f'--origlang and --subset supports only WMT *.xml and *.sgm files, not {rawfile!r}')
+     return [[sentence for sentence, keep in zip(sys, indices_to_keep) if keep] for sys in systems]
+
+
+ def print_subset_results(metrics, full_system, full_refs, args):
+     w = args.width
+     origlangs = args.origlang if args.origlang else \
+         get_available_origlangs(args.test_set, args.langpair)
+
+     if len(origlangs) == 0:
+         print('No subset information found. Consider using --origlang argument.')
+         return
+
+     results = defaultdict(list)
+
+     for origlang in origlangs:
+         subsets = [None]
+         if args.subset is not None:
+             subsets += [args.subset]
+         else:
+             subsets += get_available_subsets(args.test_set, args.langpair)
+
+         for subset in subsets:
+             system, *refs = filter_subset(
+                 [full_system, *full_refs], args.test_set, args.langpair, origlang, subset)
+
+             if len(system) == 0:
+                 continue
+
+             key = f'origlang={origlang}'
+             if subset is None:
+                 key += ' domain=ALL'
+             elif subset.startswith('country:'):
+                 key += f' country={subset[8:]}'
+             else:
+                 key += f' domain={subset}'
+
+             for metric in metrics.values():
+                 score = metric.corpus_score(system, refs)
+                 results[key].append((len(system), score))
+
+     max_left_width = max([len(k) for k in results.keys()]) + 1
+     max_metric_width = max([len(val[1].name) for val in list(results.values())[0]])
+     for key, scores in results.items():
+         key = Color.format(f'{key:<{max_left_width}}', 'yellow')
+         for n_system, score in scores:
+             print(f'{key}: sentences={n_system:<6} {score.name:<{max_metric_width}} = {score.score:.{w}f}')
+
+ # import at the end to avoid circular import
+ from .dataset import DATASETS, SUBSETS  # noqa: E402
evalkit_eagle/lib/python3.10/site-packages/sacrebleu/version.py ADDED
@@ -0,0 +1,16 @@
+ # file generated by setuptools_scm
+ # don't change, don't track in version control
+ TYPE_CHECKING = False
+ if TYPE_CHECKING:
+     from typing import Tuple, Union
+     VERSION_TUPLE = Tuple[Union[int, str], ...]
+ else:
+     VERSION_TUPLE = object
+
+ version: str
+ __version__: str
+ __version_tuple__: VERSION_TUPLE
+ version_tuple: VERSION_TUPLE
+
+ __version__ = version = '2.5.1'
+ __version_tuple__ = version_tuple = (2, 5, 1)
janus/share/terminfo/q/qansi-t ADDED
Binary file (2.01 kB). View file
 
janus/share/terminfo/q/qnx ADDED
Binary file (1.37 kB). View file
 
janus/share/terminfo/q/qnxt ADDED
Binary file (1.37 kB). View file
 
janus/share/terminfo/q/qnxt4 ADDED
Binary file (1.37 kB). View file
 
janus/share/terminfo/q/qvt108 ADDED
Binary file (584 Bytes). View file
 
janus/share/terminfo/q/qvt119+-w ADDED
Binary file (598 Bytes). View file
 
janus/share/terminfo/q/qvt119-w ADDED
Binary file (598 Bytes). View file
 
janus/share/terminfo/q/qvt119p ADDED
Binary file (585 Bytes). View file
 
janus/share/terminfo/q/qvt203+ ADDED
Binary file (855 Bytes). View file
 
janus/share/terminfo/x/x1700 ADDED
Binary file (429 Bytes). View file
 
janus/share/terminfo/x/xerox820 ADDED
Binary file (355 Bytes). View file
 
janus/share/terminfo/x/xnuppc+112x37 ADDED
Binary file (88 Bytes). View file
 
janus/share/terminfo/x/xnuppc+144x48 ADDED
Binary file (88 Bytes). View file
 
janus/share/terminfo/x/xnuppc+f2 ADDED
Binary file (1.02 kB). View file
 
janus/share/terminfo/x/xnuppc-100x37-m ADDED
Binary file (987 Bytes). View file
 
janus/share/terminfo/x/xnuppc-112x37 ADDED
Binary file (1.22 kB). View file
 
janus/share/terminfo/x/xnuppc-128x40 ADDED
Binary file (1.22 kB). View file
 
janus/share/terminfo/x/xnuppc-128x48-m ADDED
Binary file (987 Bytes). View file
 
janus/share/terminfo/x/xnuppc-144x48-m ADDED
Binary file (987 Bytes). View file
 
janus/share/terminfo/x/xnuppc-160x64 ADDED
Binary file (1.22 kB). View file
 
janus/share/terminfo/x/xnuppc-160x64-m ADDED
Binary file (987 Bytes). View file
 
janus/share/terminfo/x/xnuppc-80x30 ADDED
Binary file (1.21 kB). View file
 
janus/share/terminfo/x/xnuppc-90x30-m ADDED
Binary file (985 Bytes). View file
 
janus/share/terminfo/x/xnuppc-b ADDED
Binary file (1.22 kB). View file
 
janus/share/terminfo/x/xnuppc-f ADDED
Binary file (1.23 kB). View file
 
janus/share/terminfo/x/xnuppc-m ADDED
Binary file (965 Bytes). View file
 
janus/share/terminfo/x/xnuppc-m-b ADDED
Binary file (1.01 kB). View file
 
janus/share/terminfo/x/xterm+88color2 ADDED
Binary file (1.06 kB). View file
 
janus/share/terminfo/x/xterm+alt1049 ADDED
Binary file (144 Bytes). View file
 
janus/share/terminfo/x/xterm+alt47 ADDED
Binary file (152 Bytes). View file
 
janus/share/terminfo/x/xterm+app ADDED
Binary file (422 Bytes). View file
 
janus/share/terminfo/x/xterm+direct ADDED
Binary file (1.06 kB). View file
 
janus/share/terminfo/x/xterm+direct16 ADDED
Binary file (1.13 kB). View file
 
janus/share/terminfo/x/xterm+direct256 ADDED
Binary file (1.19 kB). View file
 
janus/share/terminfo/x/xterm+keypad ADDED
Binary file (612 Bytes). View file
 
janus/share/terminfo/x/xterm+meta ADDED
Binary file (276 Bytes). View file
 
janus/share/terminfo/x/xterm+nofkeys ADDED
Binary file (2.35 kB). View file