ZTWHHH committed
Commit 73fc776 · verified · Parent: 3edc183

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. .gitattributes +4 -0
  2. llava_next/share/terminfo/v/vi300-old +0 -0
  3. llava_next/share/terminfo/v/viewdata +0 -0
  4. llava_next/share/terminfo/v/vs100-x10 +0 -0
  5. llava_next/share/terminfo/v/vt102-w +0 -0
  6. parrot/lib/python3.10/site-packages/torch/ao/quantization/pt2e/export_utils.py +223 -0
  7. parrot/lib/python3.10/site-packages/torch/ao/quantization/quantize_pt2e.py +250 -0
  8. videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_debug_events_writer.so +3 -0
  9. videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_events_writer.so +3 -0
  10. videochat2/lib/python3.10/site-packages/tensorflow/python/feature_column/__pycache__/feature_column.cpython-310.pyc +3 -0
  11. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/autograph_ops.cpython-310.pyc +0 -0
  12. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/batch_ops.cpython-310.pyc +0 -0
  13. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/bitwise_ops.cpython-310.pyc +0 -0
  14. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/check_ops.cpython-310.pyc +0 -0
  15. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/cond.cpython-310.pyc +0 -0
  16. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/control_flow_grad.cpython-310.pyc +0 -0
  17. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/control_flow_v2_func_graphs.cpython-310.pyc +0 -0
  18. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/critical_section_ops.cpython-310.pyc +0 -0
  19. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/cudnn_rnn_grad.cpython-310.pyc +0 -0
  20. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/data_flow_grad.cpython-310.pyc +0 -0
  21. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/default_gradient.cpython-310.pyc +0 -0
  22. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/embedding_ops.cpython-310.pyc +0 -0
  23. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_batch_ops.cpython-310.pyc +0 -0
  24. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_clustering_ops.cpython-310.pyc +0 -0
  25. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_composite_tensor_ops.cpython-310.pyc +0 -0
  26. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ctc_ops.cpython-310.pyc +0 -0
  27. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_filesystem_ops.cpython-310.pyc +0 -0
  28. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_functional_ops.cpython-310.pyc +0 -0
  29. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_linalg_ops.cpython-310.pyc +0 -0
  30. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_parsing_ops.cpython-310.pyc +0 -0
  31. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ragged_conversion_ops.cpython-310.pyc +0 -0
  32. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ragged_math_ops.cpython-310.pyc +0 -0
  33. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_sendrecv_ops.cpython-310.pyc +0 -0
  34. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_sync_ops.cpython-310.pyc +0 -0
  35. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gradient_checker_v2.cpython-310.pyc +0 -0
  36. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gradients.cpython-310.pyc +0 -0
  37. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/handle_data_util.cpython-310.pyc +0 -0
  38. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/histogram_ops.cpython-310.pyc +0 -0
  39. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/init_ops.cpython-310.pyc +0 -0
  40. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/initializers_ns.cpython-310.pyc +0 -0
  41. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/inplace_ops.cpython-310.pyc +0 -0
  42. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/io_ops.cpython-310.pyc +0 -0
  43. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/linalg_ops.cpython-310.pyc +0 -0
  44. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/list_ops.cpython-310.pyc +0 -0
  45. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/lookup_grad.cpython-310.pyc +0 -0
  46. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/manip_ops.cpython-310.pyc +0 -0
  47. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/nn_grad.cpython-310.pyc +0 -0
  48. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/nn_impl.cpython-310.pyc +0 -0
  49. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/parsing_config.cpython-310.pyc +0 -0
  50. videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/parsing_grad.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -845,3 +845,7 @@ videochat2/lib/python3.10/site-packages/tensorflow/python/framework/__pycache__/
 videochat2/lib/python3.10/site-packages/tensorflow/python/platform/_pywrap_stacktrace_handler.so filter=lfs diff=lfs merge=lfs -text
 videochat2/lib/python3.10/site-packages/tensorflow/python/feature_column/__pycache__/feature_column_v2.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
 xverse/lib/python3.10/site-packages/tensorflow_io_gcs_filesystem/core/python/ops/libtensorflow_io_gcs_filesystem.so filter=lfs diff=lfs merge=lfs -text
+videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_events_writer.so filter=lfs diff=lfs merge=lfs -text
+videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_debug_events_writer.so filter=lfs diff=lfs merge=lfs -text
+videochat2/lib/python3.10/site-packages/tensorflow/python/feature_column/__pycache__/feature_column.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+videochat2/lib/python3.10/site-packages/tensorflow/python/platform/_pywrap_tf2.so filter=lfs diff=lfs merge=lfs -text
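Each added `.gitattributes` line pins one exact path to Git LFS via the `filter=lfs diff=lfs merge=lfs -text` attributes. A minimal sketch of checking paths against such entries, using Python's `fnmatch` as a rough approximation of gitattributes pattern matching (the exact matching rules differ; the `tracked_by_lfs` helper is illustrative, not a real Git API):

```python
# Sketch: which paths does this .gitattributes LFS section cover?
# fnmatch only approximates gitattributes matching rules (assumption).
from fnmatch import fnmatch

lfs_patterns = [
    "videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_events_writer.so",
    "videochat2/lib/python3.10/site-packages/tensorflow/python/platform/_pywrap_tf2.so",
]

def tracked_by_lfs(path: str) -> bool:
    # Exact paths match themselves; glob entries like "*.so" would also work here.
    return any(fnmatch(path, pat) for pat in lfs_patterns)

print(tracked_by_lfs(
    "videochat2/lib/python3.10/site-packages/tensorflow/python/platform/_pywrap_tf2.so"
))  # True
print(tracked_by_lfs("README.md"))  # False
```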
llava_next/share/terminfo/v/vi300-old ADDED
Binary file (650 Bytes)

llava_next/share/terminfo/v/viewdata ADDED
Binary file (597 Bytes)

llava_next/share/terminfo/v/vs100-x10 ADDED
Binary file (657 Bytes)

llava_next/share/terminfo/v/vt102-w ADDED
Binary file (1.3 kB)
parrot/lib/python3.10/site-packages/torch/ao/quantization/pt2e/export_utils.py ADDED
@@ -0,0 +1,223 @@
+# mypy: allow-untyped-defs
+import types
+
+import torch
+import torch.nn.functional as F
+
+from torch.ao.quantization.utils import _assert_and_get_unique_device
+
+
+__all__ = [
+    "model_is_exported",
+]
+
+
+class _WrapperModule(torch.nn.Module):
+    """Class to wrap a callable in a :class:`torch.nn.Module`. Use this if you
+    are trying to export a callable.
+    """
+
+    def __init__(self, fn):
+        super().__init__()
+        self.fn = fn
+
+    def forward(self, *args, **kwargs):
+        """Simple forward that just calls the ``fn`` provided to :meth:`_WrapperModule.__init__`."""
+        return self.fn(*args, **kwargs)
+
+
+def model_is_exported(m: torch.nn.Module) -> bool:
+    """
+    Return True if the `torch.nn.Module` was exported, False otherwise
+    (e.g. if the model was FX symbolically traced or not traced at all).
+    """
+    return isinstance(m, torch.fx.GraphModule) and any(
+        "val" in n.meta for n in m.graph.nodes
+    )
+
+
+def _replace_dropout(m: torch.fx.GraphModule, train_to_eval: bool):
+    """
+    Switch dropout patterns in the model between train and eval modes.
+
+    Dropout has different behavior in train vs eval mode. For exported models,
+    however, calling `model.train()` or `model.eval()` does not automatically switch
+    the dropout behavior between the two modes, so here we need to rewrite the aten
+    dropout patterns manually to achieve the same effect.
+
+    See https://github.com/pytorch/pytorch/issues/103681.
+    """
+    # Avoid circular dependencies
+    from .utils import _get_aten_graph_module_for_pattern
+
+    # Needed to ensure subgraph matches are self-contained
+    m.graph.eliminate_dead_code()
+    m.recompile()
+
+    for inplace in [False, True]:
+
+        def dropout_train(x):
+            return F.dropout(x, p=0.5, training=True, inplace=inplace)
+
+        def dropout_eval(x):
+            return F.dropout(x, p=0.5, training=False, inplace=inplace)
+
+        example_inputs = (torch.randn(1),)
+        if train_to_eval:
+            match_pattern = _get_aten_graph_module_for_pattern(
+                _WrapperModule(dropout_train), example_inputs
+            )
+            replacement_pattern = _get_aten_graph_module_for_pattern(
+                _WrapperModule(dropout_eval), example_inputs
+            )
+        else:
+            match_pattern = _get_aten_graph_module_for_pattern(
+                _WrapperModule(dropout_eval), example_inputs
+            )
+            replacement_pattern = _get_aten_graph_module_for_pattern(
+                _WrapperModule(dropout_train), example_inputs
+            )
+
+        from torch.fx.subgraph_rewriter import replace_pattern_with_filters
+
+        replace_pattern_with_filters(
+            m,
+            match_pattern,
+            replacement_pattern,
+            match_filters=[],
+            ignore_literals=True,
+        )
+        m.recompile()
+
+
+def _replace_batchnorm(m: torch.fx.GraphModule, train_to_eval: bool):
+    """
+    Switch batchnorm patterns in the model between train and eval modes.
+
+    Batchnorm has different behavior in train vs eval mode. For exported models,
+    however, calling `model.train()` or `model.eval()` does not automatically switch
+    the batchnorm behavior between the two modes, so here we need to rewrite the aten
+    batchnorm patterns manually to achieve the same effect.
+    """
+    # TODO(Leslie): This function still fails to support custom momentum and eps values.
+    # Enable this support in future updates.
+
+    # Avoid circular dependencies
+    from .utils import _get_aten_graph_module_for_pattern
+
+    # Needed to ensure subgraph matches are self-contained
+    m.graph.eliminate_dead_code()
+    m.recompile()
+
+    def bn_train(
+        x: torch.Tensor,
+        bn_weight: torch.Tensor,
+        bn_bias: torch.Tensor,
+        bn_running_mean: torch.Tensor,
+        bn_running_var: torch.Tensor,
+    ):
+        return F.batch_norm(
+            x, bn_running_mean, bn_running_var, bn_weight, bn_bias, training=True
+        )
+
+    def bn_eval(
+        x: torch.Tensor,
+        bn_weight: torch.Tensor,
+        bn_bias: torch.Tensor,
+        bn_running_mean: torch.Tensor,
+        bn_running_var: torch.Tensor,
+    ):
+        return F.batch_norm(
+            x, bn_running_mean, bn_running_var, bn_weight, bn_bias, training=False
+        )
+
+    example_inputs = (
+        torch.randn(1, 1, 3, 3),  # x
+        torch.randn(1),  # bn_weight
+        torch.randn(1),  # bn_bias
+        torch.randn(1),  # bn_running_mean
+        torch.randn(1),  # bn_running_var
+    )
+
+    device = _assert_and_get_unique_device(m)
+    is_cuda = device is not None and device.type == "cuda"
+    bn_train_aten = _get_aten_graph_module_for_pattern(
+        _WrapperModule(bn_train),
+        example_inputs,
+        is_cuda,
+    )
+    bn_eval_aten = _get_aten_graph_module_for_pattern(
+        _WrapperModule(bn_eval),
+        example_inputs,
+        is_cuda,
+    )
+
+    if train_to_eval:
+        match_pattern = bn_train_aten
+        replacement_pattern = bn_eval_aten
+    else:
+        match_pattern = bn_eval_aten
+        replacement_pattern = bn_train_aten
+
+    from torch.fx.subgraph_rewriter import replace_pattern_with_filters
+
+    replace_pattern_with_filters(
+        m,
+        match_pattern,
+        replacement_pattern,
+        match_filters=[],
+        ignore_literals=True,
+    )
+    m.recompile()
+
+
+# TODO: expose these under this namespace?
+def _move_exported_model_to_eval(model: torch.fx.GraphModule):
+    """
+    Move an exported GraphModule to eval mode.
+
+    This is equivalent to model.eval() but only for certain special ops like dropout, batchnorm.
+    QAT users should call this before performing inference on the model.
+    """
+    _replace_dropout(model, train_to_eval=True)
+    _replace_batchnorm(model, train_to_eval=True)
+    return model
+
+
+def _move_exported_model_to_train(model: torch.fx.GraphModule):
+    """
+    Move an exported GraphModule to train mode.
+
+    This is equivalent to model.train() but only for certain special ops like dropout, batchnorm.
+    QAT users should call this before performing training on the model.
+    """
+    _replace_dropout(model, train_to_eval=False)
+    _replace_batchnorm(model, train_to_eval=False)
+    return model
+
+
+def _allow_exported_model_train_eval(model: torch.fx.GraphModule):
+    """
+    Allow users to call `model.train()` and `model.eval()` on an exported model,
+    but with the effect of changing behavior between the two modes limited to special
+    ops only, which are currently dropout and batchnorm.
+
+    Note: This does not achieve the same effect as what `model.train()` and `model.eval()`
+    do in eager models, but only provides an approximation. In particular, user code
+    branching on the `training` flag will not function correctly in general because the branch
+    is already specialized at export time. Additionally, other ops beyond dropout and batchnorm
+    that have different train/eval behavior will also not be converted properly.
+    """
+
+    def _train(self, mode: bool = True):
+        if mode:
+            _move_exported_model_to_train(self)
+        else:
+            _move_exported_model_to_eval(self)
+
+    def _eval(self):
+        _move_exported_model_to_eval(self)
+
+    model.train = types.MethodType(_train, model)  # type: ignore[method-assign]
+    model.eval = types.MethodType(_eval, model)  # type: ignore[method-assign]
+    return model
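The `types.MethodType` binding at the end of `_allow_exported_model_train_eval` is the key trick: it overrides `train()`/`eval()` on a single instance rather than on the class. A minimal, torch-free sketch of that pattern (the `Model` class and `mode` attribute here are illustrative stand-ins, not part of the file above):

```python
# Sketch of instance-level method override via types.MethodType,
# as used by _allow_exported_model_train_eval.
import types

class Model:
    def __init__(self):
        self.mode = "train"

def _train(self, mode: bool = True):
    # Mirrors the _train helper above: mode=False means eval
    self.mode = "train" if mode else "eval"

def _eval(self):
    self.mode = "eval"

m = Model()
m.train = types.MethodType(_train, m)  # binds only on this instance
m.eval = types.MethodType(_eval, m)

m.eval()
print(m.mode)   # eval
m.train()
print(m.mode)   # train
m.train(False)  # train(False) behaves like eval(), as in the file above
print(m.mode)   # eval
```

Other `Model` instances keep the default behavior, which is why the file applies this per exported `GraphModule` rather than patching the class.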
parrot/lib/python3.10/site-packages/torch/ao/quantization/quantize_pt2e.py ADDED
@@ -0,0 +1,250 @@
+import torch
+from torch.fx import GraphModule
+from torch.fx import Node
+
+from .pt2e.prepare import prepare
+from .pt2e.qat_utils import (
+    _fuse_conv_bn_qat,
+    _fold_conv_bn_qat,
+)
+from .pt2e.utils import (
+    _get_node_name_to_scope,
+    _fuse_conv_bn_,
+    _disallow_eval_train,
+)
+from .pt2e.representation import reference_representation_rewrite
+from .quantize_fx import _convert_to_reference_decomposed_fx
+from torch.ao.quantization.quantizer import (  # noqa: F401
+    Quantizer,
+    QuantizationSpecBase,
+    QuantizationSpec,
+    FixedQParamsQuantizationSpec,
+    SharedQuantizationSpec,
+    DerivedQuantizationSpec,
+    QuantizationAnnotation,
+)
+from torch.fx.passes.infra.pass_manager import PassManager
+from torch.ao.quantization.pt2e.duplicate_dq_pass import DuplicateDQPass
+from torch.ao.quantization.pt2e.port_metadata_pass import PortNodeMetaForQDQ
+from torch._export.passes.constant_folding import constant_fold
+
+__all__ = [
+    "prepare_pt2e",
+    "prepare_qat_pt2e",
+    "convert_pt2e",
+]
+
+
+def prepare_pt2e(
+    model: GraphModule,
+    quantizer: Quantizer,
+) -> GraphModule:
+    """Prepare a model for post training quantization
+
+    Args:
+      * `model` (torch.fx.GraphModule): a model captured by the `torch.export` API;
+        in the short term we are using `torch._export.capture_pre_autograd_graph`,
+        in the long term we'll migrate to some `torch.export` API
+      * `quantizer`: A backend-specific quantizer that conveys how the user wants the
+        model to be quantized. A tutorial on how to write a quantizer can be found here:
+        https://pytorch.org/tutorials/prototype/pt2e_quantizer.html
+
+    Return:
+        A GraphModule with observers (based on quantizer annotation), ready for calibration
+
+    Example::
+
+        import torch
+        from torch.ao.quantization.quantize_pt2e import prepare_pt2e
+        from torch._export import capture_pre_autograd_graph
+        from torch.ao.quantization.quantizer import (
+            XNNPACKQuantizer,
+            get_symmetric_quantization_config,
+        )
+
+        class M(torch.nn.Module):
+            def __init__(self):
+                super().__init__()
+                self.linear = torch.nn.Linear(5, 10)
+
+            def forward(self, x):
+                return self.linear(x)
+
+        # initialize a floating point model
+        float_model = M().eval()
+        example_inputs = (torch.randn(1, 5),)
+
+        # define calibration function
+        def calibrate(model, data_loader):
+            model.eval()
+            with torch.no_grad():
+                for image, target in data_loader:
+                    model(image)
+
+        # Step 1. program capture
+        # NOTE: this API will be updated to the torch.export API in the future, but the
+        # captured result should mostly stay the same
+        m = capture_pre_autograd_graph(float_model, *example_inputs)
+        # we get a model with aten ops
+
+        # Step 2. quantization
+        # backend developers will write their own Quantizer and expose methods to allow
+        # users to express how they want the model to be quantized
+        quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
+        m = prepare_pt2e(m, quantizer)
+
+        # run calibration
+        # calibrate(m, sample_inference_data)
+    """
+    torch._C._log_api_usage_once("quantization_api.quantize_pt2e.prepare_pt2e")
+    original_graph_meta = model.meta
+    node_name_to_scope = _get_node_name_to_scope(model)
+    # TODO: check qconfig_mapping to make sure conv and bn are both configured
+    # to be quantized before fusion
+    # TODO: (maybe) rewrite this with subgraph_rewriter
+    _fuse_conv_bn_(model)
+    quantizer.transform_for_annotation(model)
+    quantizer.annotate(model)
+    quantizer.validate(model)
+    model = prepare(model, node_name_to_scope, is_qat=False)
+    model.meta.update(original_graph_meta)
+    model = _disallow_eval_train(model)
+    return model
+
+
+def prepare_qat_pt2e(
+    model: GraphModule,
+    quantizer: Quantizer,
+) -> GraphModule:
+    """Prepare a model for quantization aware training
+
+    Args:
+      * `model` (torch.fx.GraphModule): see :func:`~torch.ao.quantization.quantize_pt2e.prepare_pt2e`
+      * `quantizer`: see :func:`~torch.ao.quantization.quantize_pt2e.prepare_pt2e`
+
+    Return:
+        A GraphModule with fake quant modules (based on quantizer annotation), ready for
+        quantization aware training
+
+    Example::
+
+        import torch
+        from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e
+        from torch._export import capture_pre_autograd_graph
+        from torch.ao.quantization.quantizer import (
+            XNNPACKQuantizer,
+            get_symmetric_quantization_config,
+        )
+
+        class M(torch.nn.Module):
+            def __init__(self):
+                super().__init__()
+                self.linear = torch.nn.Linear(5, 10)
+
+            def forward(self, x):
+                return self.linear(x)
+
+        # initialize a floating point model
+        float_model = M().eval()
+        example_inputs = (torch.randn(1, 5),)
+
+        # define the training loop for quantization aware training
+        def train_loop(model, train_data):
+            model.train()
+            for image, target in train_data:
+                ...
+
+        # Step 1. program capture
+        # NOTE: this API will be updated to the torch.export API in the future, but the
+        # captured result should mostly stay the same
+        m = capture_pre_autograd_graph(float_model, *example_inputs)
+        # we get a model with aten ops
+
+        # Step 2. quantization
+        # backend developers will write their own Quantizer and expose methods to allow
+        # users to express how they want the model to be quantized
+        quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
+        m = prepare_qat_pt2e(m, quantizer)
+
+        # run quantization aware training
+        train_loop(m, train_data)
+
+    """
+    torch._C._log_api_usage_once("quantization_api.quantize_pt2e.prepare_qat_pt2e")
+    original_graph_meta = model.meta
+    node_name_to_scope = _get_node_name_to_scope(model)
+    quantizer.transform_for_annotation(model)
+    quantizer.annotate(model)
+    quantizer.validate(model)
+    # Perform fusion after annotate to avoid quantizing ops in the new
+    # subgraph that don't need to be quantized
+    # TODO: only fuse if conv and bn are both configured to be quantized
+    _fuse_conv_bn_qat(model)
+    model = prepare(model, node_name_to_scope, is_qat=True)
+    model.meta.update(original_graph_meta)
+    model = _disallow_eval_train(model)
+    return model
+
+
+_QUANT_OPS = [
+    torch.ops.quantized_decomposed.quantize_per_tensor.default,
+    torch.ops.quantized_decomposed.quantize_per_tensor.tensor,
+    torch.ops.quantized_decomposed.quantize_per_channel.default,
+]
+
+
+def _quant_node_constraint(n: Node) -> bool:
+    """If there are any pure ops between get_attr and a quantize op they will be const propagated,
+    e.g. get_attr(weight) -> transpose -> quantize -> dequantize*
+    (Note: the dequantize op is not going to be constant propagated)
+
+    This filter is added because we don't want to constant fold things that are not
+    related to quantization
+    """
+    return n.op == "call_function" and n.target in _QUANT_OPS
+
+
+def convert_pt2e(
+    model: GraphModule,
+    use_reference_representation: bool = False,
+    fold_quantize: bool = True,
+) -> GraphModule:
+    """Convert a calibrated/trained model to a quantized model
+
+    Args:
+      * `model` (torch.fx.GraphModule): calibrated/trained model
+      * `use_reference_representation` (bool): boolean flag to indicate whether to produce the reference representation or not
+      * `fold_quantize` (bool): boolean flag for whether to fold the quantize op or not
+
+    Returns:
+        quantized model, either in q/dq representation or reference representation
+
+    Example::
+
+        # prepared_model: the model produced by `prepare_pt2e`/`prepare_qat_pt2e` and calibration/training
+        # `convert_pt2e` produces a quantized model that represents quantized computation with
+        # quantize/dequantize ops and fp32 ops by default.
+        # Please refer to
+        # https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_static.html#convert-the-calibrated-model-to-a-quantized-model
+        # for a detailed explanation of the output quantized model
+        quantized_model = convert_pt2e(prepared_model)
+
+    """  # flake8: noqa
+    torch._C._log_api_usage_once("quantization_api.quantize_pt2e.convert_pt2e")
+    if not isinstance(use_reference_representation, bool):
+        raise ValueError(
+            "Unexpected argument type for `use_reference_representation`, "
+            f"please make sure you intend to pass argument {use_reference_representation} to convert_pt2e"
+        )
+    original_graph_meta = model.meta
+    model = _convert_to_reference_decomposed_fx(model)
+    model = _fold_conv_bn_qat(model)
+
+    pm = PassManager([DuplicateDQPass()])
+    model = pm(model).graph_module
+
+    pm = PassManager([PortNodeMetaForQDQ()])
+    model = pm(model).graph_module
+
+    if fold_quantize:
+        constant_fold(model, _quant_node_constraint)
+
+    if use_reference_representation:
+        model = reference_representation_rewrite(model)
+
+    model.meta.update(original_graph_meta)
+    model = _disallow_eval_train(model)
+    return model
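The `_quant_node_constraint` predicate above is what keeps constant folding scoped to quantization: only `call_function` nodes targeting a quantize op qualify. A minimal, torch-free sketch of that filtering pattern (the `Node` dataclass and op names here are simplified stand-ins for `torch.fx.Node` and the `quantized_decomposed` ops):

```python
# Sketch of a node-constraint predicate like _quant_node_constraint,
# with a plain dataclass standing in for torch.fx.Node (assumption).
from dataclasses import dataclass

@dataclass
class Node:
    op: str      # e.g. "get_attr" or "call_function"
    target: str  # simplified: a string instead of an OpOverload

QUANT_OPS = {"quantize_per_tensor", "quantize_per_channel"}

def quant_node_constraint(n: Node) -> bool:
    # Only call_function nodes that target a quantize op are folded
    return n.op == "call_function" and n.target in QUANT_OPS

nodes = [
    Node("get_attr", "weight"),
    Node("call_function", "transpose"),
    Node("call_function", "quantize_per_tensor"),
    Node("call_function", "dequantize_per_tensor"),
]
print([quant_node_constraint(n) for n in nodes])  # [False, False, True, False]
```

In the real pass, `constant_fold(model, _quant_node_constraint)` uses this predicate so that a chain like `get_attr(weight) -> transpose -> quantize` is folded into a precomputed constant, while `dequantize` and unrelated ops are left in the graph.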
videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_debug_events_writer.so ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6410e02866ba376eecc5a71442d6d4f5b8138ee98c33c2f370db6ca19608b949
+size 286768
videochat2/lib/python3.10/site-packages/tensorflow/python/client/_pywrap_events_writer.so ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2e0638d05108349baaab8ac9d1fedf0c28fb688690a788da263e5f5c6f6e106
+size 304856
videochat2/lib/python3.10/site-packages/tensorflow/python/feature_column/__pycache__/feature_column.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14e60574028e290d83dc8dc2b13c49f11c3535713c4c492e44424b5d462eb883
+size 100200
videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/autograph_ops.cpython-310.pyc ADDED
Binary file (4.59 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/batch_ops.cpython-310.pyc ADDED
Binary file (4.83 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/bitwise_ops.cpython-310.pyc ADDED
Binary file (595 Bytes)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/check_ops.cpython-310.pyc ADDED
Binary file (70.7 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/cond.cpython-310.pyc ADDED
Binary file (12.7 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/control_flow_grad.cpython-310.pyc ADDED
Binary file (5.51 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/control_flow_v2_func_graphs.cpython-310.pyc ADDED
Binary file (1.76 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/critical_section_ops.cpython-310.pyc ADDED
Binary file (13 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/cudnn_rnn_grad.cpython-310.pyc ADDED
Binary file (2.77 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/data_flow_grad.cpython-310.pyc ADDED
Binary file (2.57 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/default_gradient.cpython-310.pyc ADDED
Binary file (2.14 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/embedding_ops.cpython-310.pyc ADDED
Binary file (35.8 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_batch_ops.cpython-310.pyc ADDED
Binary file (22.4 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_clustering_ops.cpython-310.pyc ADDED
Binary file (7.9 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_composite_tensor_ops.cpython-310.pyc ADDED
Binary file (5.91 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ctc_ops.cpython-310.pyc ADDED
Binary file (14.8 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_filesystem_ops.cpython-310.pyc ADDED
Binary file (2.75 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_functional_ops.cpython-310.pyc ADDED
Binary file (38.2 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_linalg_ops.cpython-310.pyc ADDED
Binary file (67.5 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_parsing_ops.cpython-310.pyc ADDED
Binary file (82.4 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ragged_conversion_ops.cpython-310.pyc ADDED
Binary file (19.7 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_ragged_math_ops.cpython-310.pyc ADDED
Binary file (4.41 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_sendrecv_ops.cpython-310.pyc ADDED
Binary file (6.48 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_sync_ops.cpython-310.pyc ADDED
Binary file (2.33 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gradient_checker_v2.cpython-310.pyc ADDED
Binary file (11.5 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gradients.cpython-310.pyc ADDED
Binary file (803 Bytes)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/handle_data_util.cpython-310.pyc ADDED
Binary file (2.75 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/histogram_ops.cpython-310.pyc ADDED
Binary file (4.32 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/init_ops.cpython-310.pyc ADDED
Binary file (61.1 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/initializers_ns.cpython-310.pyc ADDED
Binary file (956 Bytes)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/inplace_ops.cpython-310.pyc ADDED
Binary file (6.38 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/io_ops.cpython-310.pyc ADDED
Binary file (18.9 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/linalg_ops.cpython-310.pyc ADDED
Binary file (28.9 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/list_ops.cpython-310.pyc ADDED
Binary file (10.5 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/lookup_grad.cpython-310.pyc ADDED
Binary file (893 Bytes)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/manip_ops.cpython-310.pyc ADDED
Binary file (754 Bytes)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/nn_grad.cpython-310.pyc ADDED
Binary file (27.8 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/nn_impl.cpython-310.pyc ADDED
Binary file (78.5 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/parsing_config.cpython-310.pyc ADDED
Binary file (32.8 kB)

videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/parsing_grad.cpython-310.pyc ADDED
Binary file (445 Bytes)