ZTWHHH committed on
Commit f7a4797 · verified · 1 Parent(s): 23307a8

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +1 -0
  2. minigpt2/lib/libcrypto.so +3 -0
  3. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__init__.py +165 -0
  4. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/__init__.cpython-310.pyc +0 -0
  5. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/audio_classification.cpython-310.pyc +0 -0
  6. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/automatic_speech_recognition.cpython-310.pyc +0 -0
  7. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/chat_completion.cpython-310.pyc +0 -0
  8. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/document_question_answering.cpython-310.pyc +0 -0
  9. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/fill_mask.cpython-310.pyc +0 -0
  10. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_classification.cpython-310.pyc +0 -0
  11. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_segmentation.cpython-310.pyc +0 -0
  12. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_to_image.cpython-310.pyc +0 -0
  13. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_to_text.cpython-310.pyc +0 -0
  14. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/sentence_similarity.cpython-310.pyc +0 -0
  15. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/summarization.cpython-310.pyc +0 -0
  16. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text2text_generation.cpython-310.pyc +0 -0
  17. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_classification.cpython-310.pyc +0 -0
  18. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_generation.cpython-310.pyc +0 -0
  19. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_to_audio.cpython-310.pyc +0 -0
  20. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_to_image.cpython-310.pyc +0 -0
  21. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/video_classification.cpython-310.pyc +0 -0
  22. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/visual_question_answering.cpython-310.pyc +0 -0
  23. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/zero_shot_classification.cpython-310.pyc +0 -0
  24. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/audio_classification.py +46 -0
  25. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/document_question_answering.py +85 -0
  26. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/feature_extraction.py +37 -0
  27. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/image_classification.py +46 -0
  28. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/summarization.py +44 -0
  29. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/table_question_answering.py +45 -0
  30. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/text_classification.py +48 -0
  31. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/text_generation.py +169 -0
  32. minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py +55 -0
  33. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__init__.py +110 -0
  34. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_auth.cpython-310.pyc +0 -0
  35. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_cache_assets.cpython-310.pyc +0 -0
  36. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_cache_manager.cpython-310.pyc +0 -0
  37. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_chunk_utils.cpython-310.pyc +0 -0
  38. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_fixes.cpython-310.pyc +0 -0
  39. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_git_credential.cpython-310.pyc +0 -0
  40. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_headers.cpython-310.pyc +0 -0
  41. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_hf_folder.cpython-310.pyc +0 -0
  42. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_lfs.cpython-310.pyc +0 -0
  43. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_paths.cpython-310.pyc +0 -0
  44. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_safetensors.cpython-310.pyc +0 -0
  45. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_subprocess.cpython-310.pyc +0 -0
  46. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_telemetry.cpython-310.pyc +0 -0
  47. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_typing.cpython-310.pyc +0 -0
  48. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_validators.cpython-310.pyc +0 -0
  49. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/insecure_hashlib.cpython-310.pyc +0 -0
  50. minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/sha.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -1330,3 +1330,4 @@ videochat2/lib/python3.10/site-packages/scipy/stats/_boost/hypergeom_ufunc.cpyth
  videochat2/lib/python3.10/site-packages/scipy/stats/_boost/invgauss_ufunc.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
  videochat2/lib/python3.10/site-packages/scipy/stats/_boost/binom_ufunc.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
  videochat2/lib/python3.10/site-packages/matplotlib/__pycache__/widgets.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ minigpt2/lib/libcrypto.so filter=lfs diff=lfs merge=lfs -text
minigpt2/lib/libcrypto.so ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b52e9a42b9d44156d54c7c9b7dc7e14df91ea6b6865467d1ad1e9bdaa624b2a9
+ size 5172040
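The three `+` lines above are a standard Git LFS pointer file, not the binary itself. As an aside, the pointer's simple `key value` line format can be parsed in a few lines of Python (a minimal sketch, not part of this repository):

```python
# Parse a Git LFS pointer file into a dict of its key/value fields.
# Format per the git-lfs spec (https://git-lfs.github.com/spec/v1):
# one "key value" pair per line.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b52e9a42b9d44156d54c7c9b7dc7e14df91ea6b6865467d1ad1e9bdaa624b2a9
size 5172040
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 5172040
```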
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__init__.py ADDED
@@ -0,0 +1,165 @@
+ # This file is auto-generated by `utils/generate_inference_types.py`.
+ # Do not modify it manually.
+ #
+ # ruff: noqa: F401
+
+ from .audio_classification import (
+     AudioClassificationInput,
+     AudioClassificationOutputElement,
+     AudioClassificationOutputTransform,
+     AudioClassificationParameters,
+ )
+ from .audio_to_audio import AudioToAudioInput, AudioToAudioOutputElement
+ from .automatic_speech_recognition import (
+     AutomaticSpeechRecognitionEarlyStoppingEnum,
+     AutomaticSpeechRecognitionGenerationParameters,
+     AutomaticSpeechRecognitionInput,
+     AutomaticSpeechRecognitionOutput,
+     AutomaticSpeechRecognitionOutputChunk,
+     AutomaticSpeechRecognitionParameters,
+ )
+ from .base import BaseInferenceType
+ from .chat_completion import (
+     ChatCompletionInput,
+     ChatCompletionInputFunctionDefinition,
+     ChatCompletionInputFunctionName,
+     ChatCompletionInputGrammarType,
+     ChatCompletionInputMessage,
+     ChatCompletionInputMessageChunk,
+     ChatCompletionInputStreamOptions,
+     ChatCompletionInputToolType,
+     ChatCompletionInputURL,
+     ChatCompletionOutput,
+     ChatCompletionOutputComplete,
+     ChatCompletionOutputFunctionDefinition,
+     ChatCompletionOutputLogprob,
+     ChatCompletionOutputLogprobs,
+     ChatCompletionOutputMessage,
+     ChatCompletionOutputToolCall,
+     ChatCompletionOutputTopLogprob,
+     ChatCompletionOutputUsage,
+     ChatCompletionStreamOutput,
+     ChatCompletionStreamOutputChoice,
+     ChatCompletionStreamOutputDelta,
+     ChatCompletionStreamOutputDeltaToolCall,
+     ChatCompletionStreamOutputFunction,
+     ChatCompletionStreamOutputLogprob,
+     ChatCompletionStreamOutputLogprobs,
+     ChatCompletionStreamOutputTopLogprob,
+     ChatCompletionStreamOutputUsage,
+     ToolElement,
+ )
+ from .depth_estimation import DepthEstimationInput, DepthEstimationOutput
+ from .document_question_answering import (
+     DocumentQuestionAnsweringInput,
+     DocumentQuestionAnsweringInputData,
+     DocumentQuestionAnsweringOutputElement,
+     DocumentQuestionAnsweringParameters,
+ )
+ from .feature_extraction import FeatureExtractionInput
+ from .fill_mask import FillMaskInput, FillMaskOutputElement, FillMaskParameters
+ from .image_classification import (
+     ImageClassificationInput,
+     ImageClassificationOutputElement,
+     ImageClassificationOutputTransform,
+     ImageClassificationParameters,
+ )
+ from .image_segmentation import ImageSegmentationInput, ImageSegmentationOutputElement, ImageSegmentationParameters
+ from .image_to_image import ImageToImageInput, ImageToImageOutput, ImageToImageParameters, ImageToImageTargetSize
+ from .image_to_text import (
+     ImageToTextEarlyStoppingEnum,
+     ImageToTextGenerationParameters,
+     ImageToTextInput,
+     ImageToTextOutput,
+     ImageToTextParameters,
+ )
+ from .object_detection import (
+     ObjectDetectionBoundingBox,
+     ObjectDetectionInput,
+     ObjectDetectionOutputElement,
+     ObjectDetectionParameters,
+ )
+ from .question_answering import (
+     QuestionAnsweringInput,
+     QuestionAnsweringInputData,
+     QuestionAnsweringOutputElement,
+     QuestionAnsweringParameters,
+ )
+ from .sentence_similarity import SentenceSimilarityInput, SentenceSimilarityInputData
+ from .summarization import SummarizationInput, SummarizationOutput, SummarizationParameters
+ from .table_question_answering import (
+     TableQuestionAnsweringInput,
+     TableQuestionAnsweringInputData,
+     TableQuestionAnsweringOutputElement,
+ )
+ from .text2text_generation import Text2TextGenerationInput, Text2TextGenerationOutput, Text2TextGenerationParameters
+ from .text_classification import (
+     TextClassificationInput,
+     TextClassificationOutputElement,
+     TextClassificationOutputTransform,
+     TextClassificationParameters,
+ )
+ from .text_generation import (
+     TextGenerationInput,
+     TextGenerationInputGenerateParameters,
+     TextGenerationInputGrammarType,
+     TextGenerationOutput,
+     TextGenerationOutputBestOfSequence,
+     TextGenerationOutputDetails,
+     TextGenerationOutputPrefillToken,
+     TextGenerationOutputToken,
+     TextGenerationStreamOutput,
+     TextGenerationStreamOutputStreamDetails,
+     TextGenerationStreamOutputToken,
+ )
+ from .text_to_audio import (
+     TextToAudioEarlyStoppingEnum,
+     TextToAudioGenerationParameters,
+     TextToAudioInput,
+     TextToAudioOutput,
+     TextToAudioParameters,
+ )
+ from .text_to_image import TextToImageInput, TextToImageOutput, TextToImageParameters, TextToImageTargetSize
+ from .text_to_speech import (
+     TextToSpeechEarlyStoppingEnum,
+     TextToSpeechGenerationParameters,
+     TextToSpeechInput,
+     TextToSpeechOutput,
+     TextToSpeechParameters,
+ )
+ from .token_classification import (
+     TokenClassificationInput,
+     TokenClassificationOutputElement,
+     TokenClassificationParameters,
+ )
+ from .translation import TranslationInput, TranslationOutput, TranslationParameters
+ from .video_classification import (
+     VideoClassificationInput,
+     VideoClassificationOutputElement,
+     VideoClassificationOutputTransform,
+     VideoClassificationParameters,
+ )
+ from .visual_question_answering import (
+     VisualQuestionAnsweringInput,
+     VisualQuestionAnsweringInputData,
+     VisualQuestionAnsweringOutputElement,
+     VisualQuestionAnsweringParameters,
+ )
+ from .zero_shot_classification import (
+     ZeroShotClassificationInput,
+     ZeroShotClassificationInputData,
+     ZeroShotClassificationOutputElement,
+     ZeroShotClassificationParameters,
+ )
+ from .zero_shot_image_classification import (
+     ZeroShotImageClassificationInput,
+     ZeroShotImageClassificationInputData,
+     ZeroShotImageClassificationOutputElement,
+     ZeroShotImageClassificationParameters,
+ )
+ from .zero_shot_object_detection import (
+     ZeroShotObjectDetectionBoundingBox,
+     ZeroShotObjectDetectionInput,
+     ZeroShotObjectDetectionInputData,
+     ZeroShotObjectDetectionOutputElement,
+ )
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (6.61 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/audio_classification.cpython-310.pyc ADDED
Binary file (1.42 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/automatic_speech_recognition.cpython-310.pyc ADDED
Binary file (2.75 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/chat_completion.cpython-310.pyc ADDED
Binary file (8.18 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/document_question_answering.cpython-310.pyc ADDED
Binary file (2.07 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/fill_mask.cpython-310.pyc ADDED
Binary file (1.4 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_classification.cpython-310.pyc ADDED
Binary file (1.43 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_segmentation.cpython-310.pyc ADDED
Binary file (1.58 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_to_image.cpython-310.pyc ADDED
Binary file (1.66 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/image_to_text.cpython-310.pyc ADDED
Binary file (2.38 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/sentence_similarity.cpython-310.pyc ADDED
Binary file (964 Bytes).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/summarization.cpython-310.pyc ADDED
Binary file (1.5 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text2text_generation.cpython-310.pyc ADDED
Binary file (1.63 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_classification.cpython-310.pyc ADDED
Binary file (1.39 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_generation.cpython-310.pyc ADDED
Binary file (4.88 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_to_audio.cpython-310.pyc ADDED
Binary file (2.36 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/text_to_image.cpython-310.pyc ADDED
Binary file (1.72 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/video_classification.cpython-310.pyc ADDED
Binary file (1.53 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/visual_question_answering.cpython-310.pyc ADDED
Binary file (1.64 kB).

minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/__pycache__/zero_shot_classification.cpython-310.pyc ADDED
Binary file (1.68 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/audio_classification.py ADDED
@@ -0,0 +1,46 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Literal, Optional
+
+ from .base import BaseInferenceType
+
+
+ AudioClassificationOutputTransform = Literal["sigmoid", "softmax", "none"]
+
+
+ @dataclass
+ class AudioClassificationParameters(BaseInferenceType):
+     """Additional inference parameters
+     Additional inference parameters for Audio Classification
+     """
+
+     function_to_apply: Optional["AudioClassificationOutputTransform"] = None
+     """The function to apply to the output."""
+     top_k: Optional[int] = None
+     """When specified, limits the output to the top K most probable classes."""
+
+
+ @dataclass
+ class AudioClassificationInput(BaseInferenceType):
+     """Inputs for Audio Classification inference"""
+
+     inputs: str
+     """The input audio data as a base64-encoded string. If no `parameters` are provided, you can
+     also provide the audio data as a raw bytes payload.
+     """
+     parameters: Optional[AudioClassificationParameters] = None
+     """Additional inference parameters"""
+
+
+ @dataclass
+ class AudioClassificationOutputElement(BaseInferenceType):
+     """Outputs for Audio Classification inference"""
+
+     label: str
+     """The predicted class label."""
+     score: float
+     """The corresponding probability."""
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/document_question_answering.py ADDED
@@ -0,0 +1,85 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Any, List, Optional, Union
+
+ from .base import BaseInferenceType
+
+
+ @dataclass
+ class DocumentQuestionAnsweringInputData(BaseInferenceType):
+     """One (document, question) pair to answer"""
+
+     image: Any
+     """The image on which the question is asked"""
+     question: str
+     """A question to ask of the document"""
+
+
+ @dataclass
+ class DocumentQuestionAnsweringParameters(BaseInferenceType):
+     """Additional inference parameters
+     Additional inference parameters for Document Question Answering
+     """
+
+     doc_stride: Optional[int] = None
+     """If the words in the document are too long to fit with the question for the model, it will
+     be split in several chunks with some overlap. This argument controls the size of that
+     overlap.
+     """
+     handle_impossible_answer: Optional[bool] = None
+     """Whether to accept impossible as an answer"""
+     lang: Optional[str] = None
+     """Language to use while running OCR. Defaults to english."""
+     max_answer_len: Optional[int] = None
+     """The maximum length of predicted answers (e.g., only answers with a shorter length are
+     considered).
+     """
+     max_question_len: Optional[int] = None
+     """The maximum length of the question after tokenization. It will be truncated if needed."""
+     max_seq_len: Optional[int] = None
+     """The maximum length of the total sentence (context + question) in tokens of each chunk
+     passed to the model. The context will be split in several chunks (using doc_stride as
+     overlap) if needed.
+     """
+     top_k: Optional[int] = None
+     """The number of answers to return (will be chosen by order of likelihood). Can return less
+     than top_k answers if there are not enough options available within the context.
+     """
+     word_boxes: Optional[List[Union[List[float], str]]] = None
+     """A list of words and bounding boxes (normalized 0->1000). If provided, the inference will
+     skip the OCR step and use the provided bounding boxes instead.
+     """
+
+
+ @dataclass
+ class DocumentQuestionAnsweringInput(BaseInferenceType):
+     """Inputs for Document Question Answering inference"""
+
+     inputs: DocumentQuestionAnsweringInputData
+     """One (document, question) pair to answer"""
+     parameters: Optional[DocumentQuestionAnsweringParameters] = None
+     """Additional inference parameters"""
+
+
+ @dataclass
+ class DocumentQuestionAnsweringOutputElement(BaseInferenceType):
+     """Outputs of inference for the Document Question Answering task"""
+
+     answer: str
+     """The answer to the question."""
+     end: int
+     """The end word index of the answer (in the OCR’d version of the input or provided word
+     boxes).
+     """
+     score: float
+     """The probability associated to the answer."""
+     start: int
+     """The start word index of the answer (in the OCR’d version of the input or provided word
+     boxes).
+     """
+     words: List[int]
+     """The index of each word/box pair that is in the answer"""
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/feature_extraction.py ADDED
@@ -0,0 +1,37 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Literal, Optional
+
+ from .base import BaseInferenceType
+
+
+ FeatureExtractionInputTruncationDirection = Literal["Left", "Right"]
+
+
+ @dataclass
+ class FeatureExtractionInput(BaseInferenceType):
+     """Feature Extraction Input.
+     Auto-generated from TEI specs.
+     For more details, check out
+     https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.
+     """
+
+     inputs: str
+     """The text to embed."""
+     normalize: Optional[bool] = None
+     prompt_name: Optional[str] = None
+     """The name of the prompt that should be used for encoding. If not set, no prompt
+     will be applied.
+     Must be a key in the `Sentence Transformers` configuration `prompts` dictionary.
+     For example if ``prompt_name`` is "query" and the ``prompts`` is {"query": "query: ", ...},
+     then the sentence "What is the capital of France?" will be encoded as
+     "query: What is the capital of France?" because the prompt text will be prepended before
+     any text to encode.
+     """
+     truncate: Optional[bool] = None
+     truncation_direction: Optional["FeatureExtractionInputTruncationDirection"] = None
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/image_classification.py ADDED
@@ -0,0 +1,46 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Literal, Optional
+
+ from .base import BaseInferenceType
+
+
+ ImageClassificationOutputTransform = Literal["sigmoid", "softmax", "none"]
+
+
+ @dataclass
+ class ImageClassificationParameters(BaseInferenceType):
+     """Additional inference parameters
+     Additional inference parameters for Image Classification
+     """
+
+     function_to_apply: Optional["ImageClassificationOutputTransform"] = None
+     """The function to apply to the output."""
+     top_k: Optional[int] = None
+     """When specified, limits the output to the top K most probable classes."""
+
+
+ @dataclass
+ class ImageClassificationInput(BaseInferenceType):
+     """Inputs for Image Classification inference"""
+
+     inputs: str
+     """The input image data as a base64-encoded string. If no `parameters` are provided, you can
+     also provide the image data as a raw bytes payload.
+     """
+     parameters: Optional[ImageClassificationParameters] = None
+     """Additional inference parameters"""
+
+
+ @dataclass
+ class ImageClassificationOutputElement(BaseInferenceType):
+     """Outputs of inference for the Image Classification task"""
+
+     label: str
+     """The predicted class label."""
+     score: float
+     """The corresponding probability."""
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/summarization.py ADDED
@@ -0,0 +1,44 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Any, Dict, Literal, Optional
+
+ from .base import BaseInferenceType
+
+
+ SummarizationTruncationStrategy = Literal["do_not_truncate", "longest_first", "only_first", "only_second"]
+
+
+ @dataclass
+ class SummarizationParameters(BaseInferenceType):
+     """Additional inference parameters.
+     Additional inference parameters for summarization.
+     """
+
+     clean_up_tokenization_spaces: Optional[bool] = None
+     """Whether to clean up the potential extra spaces in the text output."""
+     generate_parameters: Optional[Dict[str, Any]] = None
+     """Additional parametrization of the text generation algorithm."""
+     truncation: Optional["SummarizationTruncationStrategy"] = None
+     """The truncation strategy to use."""
+
+
+ @dataclass
+ class SummarizationInput(BaseInferenceType):
+     """Inputs for Summarization inference"""
+
+     inputs: str
+     """The input text to summarize."""
+     parameters: Optional[SummarizationParameters] = None
+     """Additional inference parameters."""
+
+
+ @dataclass
+ class SummarizationOutput(BaseInferenceType):
+     """Outputs of inference for the Summarization task"""
+
+     summary_text: str
+     """The summarized text."""
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/table_question_answering.py ADDED
@@ -0,0 +1,45 @@
+ # Inference code generated from the JSON schema spec in @huggingface/tasks.
+ #
+ # See:
+ # - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+ # - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+ from dataclasses import dataclass
+ from typing import Any, Dict, List, Optional
+
+ from .base import BaseInferenceType
+
+
+ @dataclass
+ class TableQuestionAnsweringInputData(BaseInferenceType):
+     """One (table, question) pair to answer"""
+
+     question: str
+     """The question to be answered about the table"""
+     table: Dict[str, List[str]]
+     """The table to serve as context for the questions"""
+
+
+ @dataclass
+ class TableQuestionAnsweringInput(BaseInferenceType):
+     """Inputs for Table Question Answering inference"""
+
+     inputs: TableQuestionAnsweringInputData
+     """One (table, question) pair to answer"""
+     parameters: Optional[Dict[str, Any]] = None
+     """Additional inference parameters"""
+
+
+ @dataclass
+ class TableQuestionAnsweringOutputElement(BaseInferenceType):
+     """Outputs of inference for the Table Question Answering task"""
+
+     answer: str
+     """The answer of the question given the table. If there is an aggregator, the answer will be
+     preceded by `AGGREGATOR >`.
+     """
+     cells: List[str]
+     """List of strings made up of the answer cell values."""
+     coordinates: List[List[int]]
+     """Coordinates of the cells of the answers."""
+     aggregator: Optional[str] = None
+     """If the model has an aggregator, this returns the aggregator."""
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/text_classification.py ADDED
@@ -0,0 +1,48 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+# Inference code generated from the JSON schema spec in @huggingface/tasks.
+#
+# See:
+# - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+# - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+from dataclasses import dataclass
+from typing import Literal, Optional
+
+from .base import BaseInferenceType
+
+
+TextClassificationOutputTransform = Literal["sigmoid", "softmax", "none"]
+
+
+@dataclass
+class TextClassificationParameters(BaseInferenceType):
+    """
+    Additional inference parameters for Text Classification.
+    """
+
+    function_to_apply: Optional["TextClassificationOutputTransform"] = None
+    """
+    The function to apply to the output.
+    """
+    top_k: Optional[int] = None
+    """
+    When specified, limits the output to the top K most probable classes.
+    """
+
+
+@dataclass
+class TextClassificationInput(BaseInferenceType):
+    """Inputs for Text Classification inference"""
+
+    inputs: str
+    """The text to classify"""
+    parameters: Optional[TextClassificationParameters] = None
+    """Additional inference parameters"""
+
+
+@dataclass
+class TextClassificationOutputElement(BaseInferenceType):
+    """Outputs of inference for the Text Classification task"""
+
+    label: str
+    """The predicted class label."""
+    score: float
+    """The corresponding probability."""
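The text-classification types above are plain request/response dataclasses. A minimal sketch of building a request payload with local stand-ins (the real classes subclass `BaseInferenceType`, which adds serialization helpers; the input string here is hypothetical):

```python
from dataclasses import dataclass, asdict
from typing import Literal, Optional

TextClassificationOutputTransform = Literal["sigmoid", "softmax", "none"]

# Local stand-ins mirroring the generated dataclasses above.
@dataclass
class TextClassificationParameters:
    function_to_apply: Optional[TextClassificationOutputTransform] = None
    top_k: Optional[int] = None

@dataclass
class TextClassificationInput:
    inputs: str
    parameters: Optional[TextClassificationParameters] = None

payload = TextClassificationInput(
    inputs="I love this movie!",
    parameters=TextClassificationParameters(function_to_apply="softmax", top_k=2),
)

# asdict() recursively converts the nested dataclasses into plain dicts,
# which is the shape a JSON request body would take.
print(asdict(payload)["parameters"]["top_k"])  # 2
```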
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/text_generation.py ADDED
@@ -0,0 +1,169 @@
+# Inference code generated from the JSON schema spec in @huggingface/tasks.
+#
+# See:
+# - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+# - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+from dataclasses import dataclass
+from typing import Any, List, Literal, Optional
+
+from .base import BaseInferenceType
+
+
+TypeEnum = Literal["json", "regex"]
+
+
+@dataclass
+class TextGenerationInputGrammarType(BaseInferenceType):
+    type: "TypeEnum"
+    value: Any
+    """A string that represents a [JSON Schema](https://json-schema.org/).
+    JSON Schema is a declarative language that allows you to annotate JSON documents
+    with types and descriptions.
+    """
+
+
+@dataclass
+class TextGenerationInputGenerateParameters(BaseInferenceType):
+    adapter_id: Optional[str] = None
+    """Lora adapter id"""
+    best_of: Optional[int] = None
+    """Generate best_of sequences and return the one with the highest token logprobs."""
+    decoder_input_details: Optional[bool] = None
+    """Whether to return decoder input token logprobs and ids."""
+    details: Optional[bool] = None
+    """Whether to return generation details."""
+    do_sample: Optional[bool] = None
+    """Activate logits sampling."""
+    frequency_penalty: Optional[float] = None
+    """The parameter for frequency penalty. 1.0 means no penalty.
+    Penalize new tokens based on their existing frequency in the text so far,
+    decreasing the model's likelihood to repeat the same line verbatim.
+    """
+    grammar: Optional[TextGenerationInputGrammarType] = None
+    max_new_tokens: Optional[int] = None
+    """Maximum number of tokens to generate."""
+    repetition_penalty: Optional[float] = None
+    """The parameter for repetition penalty. 1.0 means no penalty.
+    See [this paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
+    """
+    return_full_text: Optional[bool] = None
+    """Whether to prepend the prompt to the generated text"""
+    seed: Optional[int] = None
+    """Random sampling seed."""
+    stop: Optional[List[str]] = None
+    """Stop generating tokens if a member of `stop` is generated."""
+    temperature: Optional[float] = None
+    """The value used to modulate the logits distribution."""
+    top_k: Optional[int] = None
+    """The number of highest probability vocabulary tokens to keep for top-k filtering."""
+    top_n_tokens: Optional[int] = None
+    """The number of highest probability vocabulary tokens to keep for top-n filtering."""
+    top_p: Optional[float] = None
+    """Top-p value for nucleus sampling."""
+    truncate: Optional[int] = None
+    """Truncate input tokens to the given size."""
+    typical_p: Optional[float] = None
+    """Typical decoding mass.
+    See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666)
+    for more information.
+    """
+    watermark: Optional[bool] = None
+    """Watermarking with [A Watermark for Large Language
+    Models](https://arxiv.org/abs/2301.10226).
+    """
+
+
+@dataclass
+class TextGenerationInput(BaseInferenceType):
+    """Text Generation Input.
+    Auto-generated from TGI specs.
+    For more details, check out
+    https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
+    """
+
+    inputs: str
+    parameters: Optional[TextGenerationInputGenerateParameters] = None
+    stream: Optional[bool] = None
+
+
+TextGenerationOutputFinishReason = Literal["length", "eos_token", "stop_sequence"]
+
+
+@dataclass
+class TextGenerationOutputPrefillToken(BaseInferenceType):
+    id: int
+    logprob: float
+    text: str
+
+
+@dataclass
+class TextGenerationOutputToken(BaseInferenceType):
+    id: int
+    logprob: float
+    special: bool
+    text: str
+
+
+@dataclass
+class TextGenerationOutputBestOfSequence(BaseInferenceType):
+    finish_reason: "TextGenerationOutputFinishReason"
+    generated_text: str
+    generated_tokens: int
+    prefill: List[TextGenerationOutputPrefillToken]
+    tokens: List[TextGenerationOutputToken]
+    seed: Optional[int] = None
+    top_tokens: Optional[List[List[TextGenerationOutputToken]]] = None
+
+
+@dataclass
+class TextGenerationOutputDetails(BaseInferenceType):
+    finish_reason: "TextGenerationOutputFinishReason"
+    generated_tokens: int
+    prefill: List[TextGenerationOutputPrefillToken]
+    tokens: List[TextGenerationOutputToken]
+    best_of_sequences: Optional[List[TextGenerationOutputBestOfSequence]] = None
+    seed: Optional[int] = None
+    top_tokens: Optional[List[List[TextGenerationOutputToken]]] = None
+
+
+@dataclass
+class TextGenerationOutput(BaseInferenceType):
+    """Text Generation Output.
+    Auto-generated from TGI specs.
+    For more details, check out
+    https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
+    """
+
+    generated_text: str
+    details: Optional[TextGenerationOutputDetails] = None
+
+
+@dataclass
+class TextGenerationStreamOutputStreamDetails(BaseInferenceType):
+    finish_reason: "TextGenerationOutputFinishReason"
+    generated_tokens: int
+    input_length: int
+    seed: Optional[int] = None
+
+
+@dataclass
+class TextGenerationStreamOutputToken(BaseInferenceType):
+    id: int
+    logprob: float
+    special: bool
+    text: str
+
+
+@dataclass
+class TextGenerationStreamOutput(BaseInferenceType):
+    """Text Generation Stream Output.
+    Auto-generated from TGI specs.
+    For more details, check out
+    https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
+    """
+
+    index: int
+    token: TextGenerationStreamOutputToken
+    details: Optional[TextGenerationStreamOutputStreamDetails] = None
+    generated_text: Optional[str] = None
+    top_tokens: Optional[List[TextGenerationStreamOutputToken]] = None
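Every generation parameter above defaults to `None`, so a request body should only carry the fields the caller actually set. A minimal sketch of that pattern with local stand-in dataclasses (a subset of the generated fields; the prompt and parameter values are hypothetical, and pruning `None` values before serialization is an assumption about typical client behavior, not this library's exact method):

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

# Local stand-ins for a subset of the generated TGI input types above.
@dataclass
class TextGenerationInputGenerateParameters:
    max_new_tokens: Optional[int] = None
    temperature: Optional[float] = None
    stop: Optional[List[str]] = None
    seed: Optional[int] = None

@dataclass
class TextGenerationInput:
    inputs: str
    parameters: Optional[TextGenerationInputGenerateParameters] = None
    stream: Optional[bool] = None

req = TextGenerationInput(
    inputs="Once upon a time",
    parameters=TextGenerationInputGenerateParameters(
        max_new_tokens=64, temperature=0.7, stop=["\n\n"]
    ),
)

# Drop unset (None) fields so the JSON body only carries explicit parameters.
def prune(d):
    return {k: prune(v) if isinstance(v, dict) else v
            for k, v in d.items() if v is not None}

body = prune(asdict(req))
print(sorted(body["parameters"]))  # ['max_new_tokens', 'stop', 'temperature']
```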
minigpt2/lib/python3.10/site-packages/huggingface_hub/inference/_generated/types/zero_shot_object_detection.py ADDED
@@ -0,0 +1,55 @@
+# Inference code generated from the JSON schema spec in @huggingface/tasks.
+#
+# See:
+# - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
+# - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
+from dataclasses import dataclass
+from typing import Any, Dict, List, Optional
+
+from .base import BaseInferenceType
+
+
+@dataclass
+class ZeroShotObjectDetectionInputData(BaseInferenceType):
+    """The input image data, with candidate labels"""
+
+    candidate_labels: List[str]
+    """The candidate labels for this image"""
+    image: Any
+    """The image data to generate bounding boxes from"""
+
+
+@dataclass
+class ZeroShotObjectDetectionInput(BaseInferenceType):
+    """Inputs for Zero Shot Object Detection inference"""
+
+    inputs: ZeroShotObjectDetectionInputData
+    """The input image data, with candidate labels"""
+    parameters: Optional[Dict[str, Any]] = None
+    """Additional inference parameters"""
+
+
+@dataclass
+class ZeroShotObjectDetectionBoundingBox(BaseInferenceType):
+    """The predicted bounding box. Coordinates are relative to the top left corner of the input
+    image.
+    """
+
+    xmax: int
+    xmin: int
+    ymax: int
+    ymin: int
+
+
+@dataclass
+class ZeroShotObjectDetectionOutputElement(BaseInferenceType):
+    """Outputs of inference for the Zero Shot Object Detection task"""
+
+    box: ZeroShotObjectDetectionBoundingBox
+    """The predicted bounding box. Coordinates are relative to the top left corner of the input
+    image.
+    """
+    label: str
+    """A candidate label"""
+    score: float
+    """The associated score / probability"""
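Since the bounding box above stores corner coordinates relative to the top-left of the image, a common post-processing step is filtering detections by score and deriving pixel areas. A sketch with local stand-in dataclasses (the detection values are hypothetical):

```python
from dataclasses import dataclass
from typing import List

# Local stand-ins for the generated zero-shot detection types above.
@dataclass
class ZeroShotObjectDetectionBoundingBox:
    xmax: int
    xmin: int
    ymax: int
    ymin: int

@dataclass
class ZeroShotObjectDetectionOutputElement:
    box: ZeroShotObjectDetectionBoundingBox
    label: str
    score: float

detections = [
    ZeroShotObjectDetectionOutputElement(
        box=ZeroShotObjectDetectionBoundingBox(xmin=10, ymin=20, xmax=110, ymax=220),
        label="cat", score=0.91),
    ZeroShotObjectDetectionOutputElement(
        box=ZeroShotObjectDetectionBoundingBox(xmin=0, ymin=0, xmax=5, ymax=5),
        label="dog", score=0.12),
]

# Keep confident detections and compute pixel areas from the corner coords.
kept = [d for d in detections if d.score >= 0.5]
areas = [(d.label, (d.box.xmax - d.box.xmin) * (d.box.ymax - d.box.ymin)) for d in kept]
print(areas)  # [('cat', 20000)]
```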
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__init__.py ADDED
@@ -0,0 +1,110 @@
+# coding=utf-8
+# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License
+
+# ruff: noqa: F401
+
+from huggingface_hub.errors import (
+    BadRequestError,
+    CacheNotFound,
+    CorruptedCacheException,
+    DisabledRepoError,
+    EntryNotFoundError,
+    FileMetadataError,
+    GatedRepoError,
+    HfHubHTTPError,
+    HFValidationError,
+    LocalEntryNotFoundError,
+    LocalTokenNotFoundError,
+    NotASafetensorsRepoError,
+    OfflineModeIsEnabled,
+    RepositoryNotFoundError,
+    RevisionNotFoundError,
+    SafetensorsParsingError,
+)
+
+from . import tqdm as _tqdm  # _tqdm is the module
+from ._auth import get_stored_tokens, get_token
+from ._cache_assets import cached_assets_path
+from ._cache_manager import (
+    CachedFileInfo,
+    CachedRepoInfo,
+    CachedRevisionInfo,
+    DeleteCacheStrategy,
+    HFCacheInfo,
+    scan_cache_dir,
+)
+from ._chunk_utils import chunk_iterable
+from ._datetime import parse_datetime
+from ._experimental import experimental
+from ._fixes import SoftTemporaryDirectory, WeakFileLock, yaml_dump
+from ._git_credential import list_credential_helpers, set_git_credential, unset_git_credential
+from ._headers import build_hf_headers, get_token_to_send
+from ._hf_folder import HfFolder
+from ._http import (
+    configure_http_backend,
+    fix_hf_endpoint_in_url,
+    get_session,
+    hf_raise_for_status,
+    http_backoff,
+    reset_sessions,
+)
+from ._pagination import paginate
+from ._paths import DEFAULT_IGNORE_PATTERNS, FORBIDDEN_FOLDERS, filter_repo_objects
+from ._runtime import (
+    dump_environment_info,
+    get_aiohttp_version,
+    get_fastai_version,
+    get_fastapi_version,
+    get_fastcore_version,
+    get_gradio_version,
+    get_graphviz_version,
+    get_hf_hub_version,
+    get_hf_transfer_version,
+    get_jinja_version,
+    get_numpy_version,
+    get_pillow_version,
+    get_pydantic_version,
+    get_pydot_version,
+    get_python_version,
+    get_tensorboard_version,
+    get_tf_version,
+    get_torch_version,
+    is_aiohttp_available,
+    is_colab_enterprise,
+    is_fastai_available,
+    is_fastapi_available,
+    is_fastcore_available,
+    is_google_colab,
+    is_gradio_available,
+    is_graphviz_available,
+    is_hf_transfer_available,
+    is_jinja_available,
+    is_notebook,
+    is_numpy_available,
+    is_package_available,
+    is_pillow_available,
+    is_pydantic_available,
+    is_pydot_available,
+    is_safetensors_available,
+    is_tensorboard_available,
+    is_tf_available,
+    is_torch_available,
+)
+from ._safetensors import SafetensorsFileMetadata, SafetensorsRepoMetadata, TensorInfo
+from ._subprocess import capture_output, run_interactive_subprocess, run_subprocess
+from ._telemetry import send_telemetry
+from ._typing import is_jsonable, is_simple_optional_type, unwrap_simple_optional_type
+from ._validators import smoothly_deprecate_use_auth_token, validate_hf_hub_args, validate_repo_id
+from .tqdm import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars, tqdm, tqdm_stream_file
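Among the helpers this flat namespace re-exports is `chunk_iterable` from `._chunk_utils`. A minimal local sketch of its contract (assumed behavior: lazily yield batches of up to `chunk_size` items; this is an illustration, not the library's actual implementation):

```python
from itertools import islice
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

# Sketch of the chunk_iterable contract re-exported above (assumption:
# the real one lives in huggingface_hub.utils._chunk_utils and yields
# successive batches of at most chunk_size items).
def chunk_iterable(iterable: Iterable[T], chunk_size: int) -> Iterator[List[T]]:
    if chunk_size <= 0:
        raise ValueError("chunk_size must be a strictly positive integer")
    it = iter(iterable)
    # islice consumes up to chunk_size items per pass; stop on empty batch.
    while chunk := list(islice(it, chunk_size)):
        yield chunk

print(list(chunk_iterable(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```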
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_auth.cpython-310.pyc ADDED
Binary file (6.51 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_cache_assets.cpython-310.pyc ADDED
Binary file (5.09 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_cache_manager.cpython-310.pyc ADDED
Binary file (29.2 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_chunk_utils.cpython-310.pyc ADDED
Binary file (1.71 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_fixes.cpython-310.pyc ADDED
Binary file (3.23 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_git_credential.cpython-310.pyc ADDED
Binary file (4.12 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_headers.cpython-310.pyc ADDED
Binary file (8.14 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_hf_folder.cpython-310.pyc ADDED
Binary file (2.71 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_lfs.cpython-310.pyc ADDED
Binary file (3.76 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_paths.cpython-310.pyc ADDED
Binary file (4.44 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_safetensors.cpython-310.pyc ADDED
Binary file (4.98 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_subprocess.cpython-310.pyc ADDED
Binary file (3.92 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_telemetry.cpython-310.pyc ADDED
Binary file (4.64 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_typing.cpython-310.pyc ADDED
Binary file (2.64 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/_validators.cpython-310.pyc ADDED
Binary file (7.85 kB).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/insecure_hashlib.cpython-310.pyc ADDED
Binary file (409 Bytes).
minigpt2/lib/python3.10/site-packages/huggingface_hub/utils/__pycache__/sha.cpython-310.pyc ADDED
Binary file (2.21 kB).