ZTWHHH committed (verified)
Commit 4a97e93 · Parent: ac71104

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .gitattributes +2 -0
  2. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__init__.py +0 -0
  3. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/backprop.cpython-310.pyc +0 -0
  4. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/benchmarks_test_base.cpython-310.pyc +0 -0
  5. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/core.cpython-310.pyc +0 -0
  6. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/def_function.cpython-310.pyc +0 -0
  7. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/forwardprop.cpython-310.pyc +0 -0
  8. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/function.cpython-310.pyc +0 -0
  9. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/graph_only_ops.cpython-310.pyc +0 -0
  10. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/imperative_grad.cpython-310.pyc +0 -0
  11. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/monitoring.cpython-310.pyc +0 -0
  12. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/record.cpython-310.pyc +0 -0
  13. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/tape.cpython-310.pyc +0 -0
  14. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/test.cpython-310.pyc +0 -0
  15. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/backprop.py +1345 -0
  16. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/backprop_util.py +105 -0
  17. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/benchmarks_test_base.py +77 -0
  18. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/cancellation.py +62 -0
  19. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/context.py +0 -0
  20. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/core.py +78 -0
  21. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py +28 -0
  22. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/execute.py +329 -0
  23. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/executor.py +77 -0
  24. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/forwardprop.py +487 -0
  25. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/forwardprop_util.py +74 -0
  26. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/function.py +37 -0
  27. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/graph_only_ops.py +46 -0
  28. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/imperative_grad.py +73 -0
  29. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/lift_to_graph.py +365 -0
  30. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__init__.py +0 -0
  31. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__pycache__/__init__.cpython-310.pyc +0 -0
  32. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__pycache__/memory_test_util.cpython-310.pyc +0 -0
  33. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/memory_test_util.py +73 -0
  34. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/monitoring.py +542 -0
  35. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__init__.py +0 -0
  36. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/__init__.cpython-310.pyc +0 -0
  37. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/atomic_function.cpython-310.pyc +0 -0
  38. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/attributes.cpython-310.pyc +0 -0
  39. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/autograph_util.cpython-310.pyc +0 -0
  40. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/compiler_ir.cpython-310.pyc +0 -0
  41. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/composite_tensor_utils.cpython-310.pyc +0 -0
  42. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/concrete_function.cpython-310.pyc +0 -0
  43. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/eager_function_run.cpython-310.pyc +0 -0
  44. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/function_context.cpython-310.pyc +0 -0
  45. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/function_type_utils.cpython-310.pyc +0 -0
  46. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/polymorphic_function.cpython-310.pyc +0 -0
  47. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/saved_model_exported_concrete.cpython-310.pyc +0 -0
  48. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/saved_model_utils.cpython-310.pyc +0 -0
  49. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/tf_method_target.cpython-310.pyc +0 -0
  50. videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/tracing_compilation.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -870,3 +870,5 @@ videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/image_
  videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_sparse_ops.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
  videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/array_ops.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
  videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_image_ops.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_experimental_dataset_ops.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+ videochat2/lib/python3.10/site-packages/tensorflow/python/ops/__pycache__/gen_tpu_ops.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
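Each added line above follows the standard `.gitattributes` format: a path pattern followed by attribute settings, where `filter=lfs diff=lfs merge=lfs` routes the file through Git LFS and `-text` unsets the `text` attribute so Git never normalizes its line endings. As a minimal sketch of that format, here is a hypothetical parser (`parse_gitattributes_line` is not part of Git or any library; it is for illustration only):

```python
def parse_gitattributes_line(line):
    """Split one .gitattributes line into (pattern, {attr: value}).

    Attribute syntax: "name=value" sets a value, a bare "name" sets the
    attribute to True, and "-name" explicitly unsets it (False).
    """
    pattern, *attrs = line.split()
    settings = {}
    for a in attrs:
        if a.startswith("-"):          # e.g. "-text" unsets the attribute
            settings[a[1:]] = False
        elif "=" in a:                 # e.g. "filter=lfs" sets a value
            k, v = a.split("=", 1)
            settings[k] = v
        else:                          # bare name sets it to True
            settings[a] = True
    return pattern, settings


line = ("videochat2/lib/python3.10/site-packages/tensorflow/python/ops/"
        "__pycache__/gen_tpu_ops.cpython-310.pyc "
        "filter=lfs diff=lfs merge=lfs -text")
pattern, settings = parse_gitattributes_line(line)
# settings -> {'filter': 'lfs', 'diff': 'lfs', 'merge': 'lfs', 'text': False}
```

In practice such lines are generated by `git lfs track <pattern>` rather than written by hand.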
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__init__.py ADDED
File without changes
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/backprop.cpython-310.pyc ADDED
Binary file (44.9 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/benchmarks_test_base.cpython-310.pyc ADDED
Binary file (2.4 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/core.cpython-310.pyc ADDED
Binary file (2.6 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/def_function.cpython-310.pyc ADDED
Binary file (597 Bytes).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/forwardprop.cpython-310.pyc ADDED
Binary file (17 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/function.cpython-310.pyc ADDED
Binary file (913 Bytes).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/graph_only_ops.cpython-310.pyc ADDED
Binary file (1.2 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/imperative_grad.cpython-310.pyc ADDED
Binary file (2.22 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/monitoring.cpython-310.pyc ADDED
Binary file (16.5 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/record.cpython-310.pyc ADDED
Binary file (4.17 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/tape.cpython-310.pyc ADDED
Binary file (3.2 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/__pycache__/test.cpython-310.pyc ADDED
Binary file (559 Bytes).
 
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/backprop.py ADDED
@@ -0,0 +1,1345 @@
1
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ # ==============================================================================
15
+ """Code for backpropagation using the tape utilities."""
16
+
17
+ # TODO(b/159343581): Properly support CompositeTensor in all functions in this
18
+ # file.
19
+
20
+ import functools
21
+ import operator
22
+
23
+ from tensorflow.python import pywrap_tfe
24
+ from tensorflow.python.eager import backprop_util
25
+ from tensorflow.python.eager import context
26
+ from tensorflow.python.eager import execute
27
+ from tensorflow.python.eager import imperative_grad
28
+ from tensorflow.python.eager import tape
29
+ from tensorflow.python.framework import composite_tensor
30
+ from tensorflow.python.framework import composite_tensor_gradient
31
+ from tensorflow.python.framework import constant_op
32
+ from tensorflow.python.framework import dtypes
33
+ from tensorflow.python.framework import indexed_slices
34
+ from tensorflow.python.framework import ops
35
+ from tensorflow.python.framework import tensor as tensor_lib
36
+ from tensorflow.python.framework import tensor_shape
37
+ from tensorflow.python.framework import tensor_util
38
+ from tensorflow.python.framework import type_spec
39
+ from tensorflow.python.ops import array_ops
40
+ from tensorflow.python.ops import check_ops
41
+ from tensorflow.python.ops import control_flow_util
42
+ from tensorflow.python.ops import default_gradient
43
+ from tensorflow.python.ops import gen_array_ops
44
+ from tensorflow.python.ops import gen_math_ops
45
+ from tensorflow.python.ops import gradients_impl # pylint: disable=unused-import
46
+ from tensorflow.python.ops import resource_variable_ops
47
+ from tensorflow.python.ops.parallel_for import control_flow_ops as pfor_ops
48
+ from tensorflow.python.ops.unconnected_gradients import UnconnectedGradients
49
+ from tensorflow.python.platform import tf_logging as logging
50
+ from tensorflow.python.util import _pywrap_utils
51
+ from tensorflow.python.util import nest
52
+ from tensorflow.python.util import tf_contextlib
53
+ from tensorflow.python.util import tf_inspect
54
+ from tensorflow.python.util import variable_utils
55
+ from tensorflow.python.util.tf_export import tf_export
56
+
57
+
58
+ _op_attr_type_cache = {}
59
+
60
+
61
+ def op_attr_type(op_type, attr_name):
62
+ try:
63
+ return _op_attr_type_cache[(op_type, attr_name)]
64
+ except KeyError:
65
+ context.ensure_initialized()
66
+ h = context.context()._handle # pylint: disable=protected-access
67
+ attr_type = pywrap_tfe.TFE_OpNameGetAttrType(h, op_type, attr_name)
68
+ _op_attr_type_cache[(op_type, attr_name)] = attr_type
69
+ return attr_type
70
+
71
+
72
+ def make_attr(attr_type, value):
73
+ # pybind11 enums do not return the raw value like SWIG enums do. They are
74
+ # useful when comparing amongst each other but not direct integers as we are
75
+ # doing in most tests.
76
+ # https://pybind11.readthedocs.io/en/stable/classes.html#enumerations-and-internal-types
77
+ # TODO(amitpatankar): After all SWIG transitions, convert the enum comparisons
78
+ # from integer value to class.
79
+ if attr_type == int(pywrap_tfe.TF_ATTR_TYPE):
80
+ return dtypes.as_dtype(value)
81
+ if attr_type == [int(pywrap_tfe.TF_ATTR_TYPE)]:
82
+ return [dtypes.as_dtype(v) for v in value]
83
+ if attr_type == int(pywrap_tfe.TF_ATTR_SHAPE):
84
+ return tensor_shape.as_shape(value).as_proto()
85
+ if attr_type == [int(pywrap_tfe.TF_ATTR_SHAPE)]:
86
+ return [tensor_shape.as_shape(v).as_proto() for v in value]
87
+ return nest.map_structure(
88
+ lambda v: v.encode() if isinstance(v, str) else v,
89
+ value)
90
+
91
+
92
+ class _MockOp(object):
93
+ """Pretends to be a tf.Operation for the gradient functions."""
94
+
95
+ def __init__(self, attrs, inputs, outputs, typ, skip_input_indices):
96
+ self.attrs = attrs
97
+ self.inputs = inputs
98
+ self.outputs = outputs
99
+ self.type = typ
100
+ self.skip_input_indices = skip_input_indices
101
+
102
+ def get_attr(self, attr):
103
+ typ = op_attr_type(self.type, attr)
104
+ for i in range(0, len(self.attrs), 2):
105
+ if self.attrs[i] == attr:
106
+ return make_attr(typ, self.attrs[i + 1])
107
+ raise KeyError(attr)
108
+
109
+ def _get_control_flow_context(self):
110
+ raise NotImplementedError(
111
+ "tf.GradientTape.gradients() does not support graph control flow "
112
+ "operations like tf.cond or tf.while at this time. Use tf.gradients() "
113
+ "instead. If you need this feature, please file a feature request at "
114
+ "https://github.com/tensorflow/tensorflow/issues/new"
115
+ )
116
+
117
+
118
+ def _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs,
119
+ out_grads, skip_input_indices, forward_pass_name_scope):
120
+ """Calls the gradient function of the op.
121
+
122
+ Args:
123
+ op_name: the name of the op to be differentiated.
124
+ attr_tuple: the attrs, as a tuple.
125
+ num_inputs: the number of inputs to the op.
126
+ inputs: inputs to the original operation.
127
+ outputs: outputs to the original operation.
128
+ out_grads: gradients of the operation wrt its outputs.
129
+ skip_input_indices: a tuple that is passed to the gradient function,
130
+ indicating which inputs to skip calculating the gradient for
131
+ forward_pass_name_scope: the namescope of the op in the forward pass.
132
+
133
+ Returns:
134
+ The gradients with respect to the inputs of the function, as a list.
135
+ """
136
+ mock_op = _MockOp(attr_tuple, inputs, outputs, op_name, skip_input_indices)
137
+ grad_fn = ops._gradient_registry.lookup(op_name) # pylint: disable=protected-access
138
+ if grad_fn is None:
139
+ return [None] * num_inputs
140
+
141
+ # This does not work with v1 TensorArrays.
142
+ if ops.executing_eagerly_outside_functions(
143
+ ) or control_flow_util.EnableControlFlowV2(ops.get_default_graph()):
144
+ gradient_name_scope = "gradient_tape/"
145
+ if forward_pass_name_scope:
146
+ gradient_name_scope += forward_pass_name_scope + "/"
147
+ with ops.name_scope(gradient_name_scope):
148
+ return grad_fn(mock_op, *out_grads)
149
+ else:
150
+ return grad_fn(mock_op, *out_grads)
151
+
152
+
153
+ pywrap_tfe.TFE_Py_RegisterGradientFunction(_gradient_function)
154
+
155
+
156
+ def _must_record_gradient():
157
+ return not pywrap_tfe.TFE_Py_TapeSetIsEmpty()
158
+
159
+
160
+ @tf_export("__internal__.record_gradient", v1=[])
161
+ def record_gradient(op_name, inputs, attrs, outputs):
162
+ """Explicitly record the gradient for a given op.
163
+
164
+ Args:
165
+ op_name: The op name as listed in the `OpDef` for the op.
166
+ inputs: A list of tensor inputs to the op.
167
+ attrs: The op attributes as a flattened list of alternating attribute names
168
+ and attribute values.
169
+ outputs: A list of tensor outputs from the op.
170
+ """
171
+ pywrap_tfe.TFE_Py_RecordGradient(op_name, inputs, attrs, outputs,
172
+ ops.get_name_scope())
173
+
174
+
175
+ execute.must_record_gradient = _must_record_gradient
176
+ execute.record_gradient = record_gradient
177
+
178
+
179
+ def implicit_val_and_grad(f):
180
+ """Returns a function which differentiates f with respect to variables.
181
+
182
+ The wrapped function returns the value and the gradient of f when called with
183
+ the same arguments. The gradient is with respect to all trainable TFE
184
+ variables accessed by `f`.
185
+
186
+ This function is useful when the exact set of variables to differentiate with
187
+ is not known ahead of time.
188
+
189
+ Example:
190
+
191
+ ```python
192
+ dense_layer = tf.compat.v1.layers.Dense(1)
193
+ def loss(x, y):
194
+ return tf.reduce_sum(tf.square(dense_layer(x) - y))
195
+
196
+ # Obtain the gradient function.
197
+ val_grad_fn = tfe.implicit_value_and_gradients(loss)
198
+
199
+ # Invoke the gradient function with concrete values of x and y.
200
+ x = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
201
+ y = tf.constant([[10.0], [20.0]])
202
+ value, grads_and_vars = val_grad_fn(x, y)
203
+ print('Value of loss: %s' % value)
204
+
205
+ # Apply the gradients to Variables.
206
+ optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
207
+ optimizer.apply_gradients(grads_and_vars)
208
+ ```
209
+
210
+ Args:
211
+ f: function to be differentiated. If `f` returns a scalar, this scalar will
212
+ be differentiated. If `f` returns a tensor or list of tensors, by default
213
+ a scalar will be computed by adding all their values to produce a single
214
+ scalar.
215
+
216
+ Returns:
217
+ A function which, when called, returns a tuple pair.
218
+ Its first element is the value to which the function evaluates.
219
+ Its second element is list of (gradient, variable) pairs.
220
+
221
+ Raises:
222
+ ValueError: if `f` returns None.
223
+ """
224
+ # TODO(cais): Remove calls to tf.constant() once the gradients functions
225
+ # accept lists and np.ndarrays.
226
+
227
+ def grad_fn(*args, **kwds):
228
+ """Computes the gradient of the wrapped function."""
229
+ this_tape = tape.push_new_tape()
230
+ try:
231
+ end_node = f(*args, **kwds)
232
+ if end_node is None:
233
+ raise ValueError("Cannot differentiate a function that returns None; "
234
+ "did you forget to return a value from {}?".format(
235
+ f.__name__))
236
+ finally:
237
+ tape.pop_tape(this_tape)
238
+ # Note: variables are returned in construction order. This ensures unique
239
+ # order across executions.
240
+ variables = this_tape.watched_variables()
241
+ if not variables:
242
+ raise ValueError("No trainable variables were accessed while the "
243
+ "function was being computed.")
244
+
245
+ sources = [v.handle for v in variables]
246
+ for s in sources:
247
+ if getattr(s, "is_packed", False):
248
+ raise ValueError(
249
+ "GradientTape.gradient is not supported on packed EagerTensors yet."
250
+ )
251
+ grad = imperative_grad.imperative_grad(this_tape, nest.flatten(end_node),
252
+ sources)
253
+ return end_node, list(zip(grad, variables))
254
+
255
+ return grad_fn
256
+
257
+
258
+ def implicit_grad(f):
259
+ """Returns a function which differentiates f with respect to variables.
260
+
261
+ The wrapped function returns the gradient of f when called with the same
262
+ arguments. The gradient is with respect to all trainable TFE variables
263
+ accessed by `f`.
264
+
265
+ This function is useful when the exact set of variables to differentiate with
266
+ is not known ahead of time.
267
+
268
+ Example:
269
+
270
+ ```python
271
+ dense_layer = tf.compat.v1.layers.Dense(1)
272
+ def loss(x, y):
273
+ return tf.reduce_sum(tf.square(dense_layer(x) - y))
274
+
275
+ # Obtain the gradient function.
276
+ grad_fn = tfe.implicit_gradients(loss)
277
+
278
+ # Invoke the gradient function with concrete values of x and y.
279
+ x = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
280
+ y = tf.constant([[10.0], [20.0]])
281
+ grads_and_vars = grad_fn(x, y)
282
+
283
+ # Apply the gradients to Variables.
284
+ optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
285
+ optimizer.apply_gradients(grads_and_vars)
286
+ ```
287
+
288
+ Args:
289
+ f: function to be differentiated. If `f` returns a scalar, this scalar will
290
+ be differentiated. If `f` returns a tensor or list of tensors, by default
291
+ a scalar will be computed by adding all their values to produce a single
292
+ scalar.
293
+
294
+ Returns:
295
+ A function which, when called, returns a list of (gradient, variable) pairs.
296
+ """
297
+ # TODO(cais): Remove calls to tf.constant() once the gradients functions
298
+ # accept lists and np.ndarrays.
299
+
300
+ def grad_fn(*args, **kwds):
301
+ """Computes the gradient of the wrapped function."""
302
+ return implicit_val_and_grad(f)(*args, **kwds)[1]
303
+
304
+ return grad_fn
305
+
306
+
307
+ def _get_arg_spec(f, params, param_args):
308
+ """The positions of the parameters of f to be differentiated in param_args."""
309
+ try:
310
+ args = tf_inspect.getfullargspec(f).args
311
+ except TypeError as e:
312
+ # TypeError can happen when f is a callable object.
313
+ if params is None:
314
+ return range(len(param_args))
315
+ elif all(isinstance(x, int) for x in params):
316
+ return params
317
+ raise ValueError("Either callable provided is not a function or could not "
318
+ "inspect its arguments by name: %s. Original error: %s"
319
+ % (f, e))
320
+ if params is None:
321
+ if not args:
322
+ return range(len(param_args))
323
+ if args[0] == "self":
324
+ return range(len(args) - 1)
325
+ else:
326
+ return range(len(args))
327
+ elif all(isinstance(x, str) for x in params):
328
+ return [args.index(n) for n in params]
329
+ elif all(isinstance(x, int) for x in params):
330
+ return params
331
+ else:
332
+ raise ValueError(
333
+ "params must be all strings or all integers; got %s." % params)
334
+
335
+
336
+ def gradients_function(f, params=None):
337
+ """Returns a function which differentiates f with respect to params.
338
+
339
+ Example:
340
+ ```python
341
+ # f(x, y) = (x ^ 3) * y - x * (y ^ 2)
342
+ # Therefore, the 1st order derivatives are:
343
+ # df / dx = 3 * (x ^ 2) * y - y ^ 2
344
+ # df / dy = x ^ 3 - 2 * x * y
345
+ # The 2nd order derivatives with respect to x is:
346
+ # d^2 f / (dx)^2 = 6 * x * y
347
+ def f(x, y):
348
+ return x * x * x * y - x * y * y
349
+
350
+ # Obtain a function that returns 1st order gradients.
351
+ grad_fn = tfe.gradients_function(f)
352
+
353
+ x = 2.0
354
+ y = 3.0
355
+
356
+ # Invoke the 1st order gradient function.
357
+ x_grad, y_grad = grad_fn(x, y)
358
+ assert x_grad.numpy() == 3 * (2 ** 2) * 3 - 3 ** 2
359
+ assert y_grad.numpy() == (2 ** 3) - 2 * 2 * 3
360
+
361
+ # Obtain a function that returns the 2nd order gradient with respect to x.
362
+ gradgrad_fn = tfe.gradients_function(lambda x, y: grad_fn(x, y)[0])
363
+
364
+ # Invoke the 2nd order gradient function.
365
+ x_gradgrad = gradgrad_fn(x, y)[0]
366
+ assert x_gradgrad.numpy() == 6 * 2 * 3
367
+
368
+ # To obtain a callable that returns the gradient(s) of `f` with respect to a
369
+ # subset of its inputs, use the `params` keyword argument with
370
+ # `gradients_function()`.
371
+ ygrad_fn = tfe.gradients_function(f, params=[1])
372
+
373
+ (y_grad,) = ygrad_fn(x, y)
374
+ assert y_grad.numpy() == (2 ** 3) - 2 * 2 * 3
375
+ ```
376
+
377
+ Note that only tensors with real or complex dtypes are differentiable.
378
+
379
+ Args:
380
+ f: function to be differentiated. If `f` returns a scalar, this scalar will
381
+ be differentiated. If `f` returns a tensor or list of tensors, by default
382
+ a scalar will be computed by adding all their values to produce a single
383
+ scalar. If desired, the tensors can be elementwise multiplied by the
384
+ tensors passed as the `dy` keyword argument to the returned gradient
385
+ function.
386
+ params: list of parameter names of f or list of integers indexing the
387
+ parameters with respect to which we'll differentiate. Passing None
388
+ differentiates with respect to all parameters.
389
+
390
+ Returns:
391
+ function which, when called, returns the value of f and the gradient
392
+ of `f` with respect to all of `params`. The function takes an extra optional
393
+ keyword argument `dy`. Setting it allows computation of vector jacobian
394
+ products for vectors other than the vector of ones.
395
+
396
+ Raises:
397
+ ValueError: if the params are not all strings or all integers.
398
+ """
399
+
400
+ def decorated(*args, **kwds):
401
+ """Computes the gradient of the decorated function."""
402
+
403
+ _, grad = val_and_grad_function(f, params=params)(*args, **kwds)
404
+ return grad
405
+
406
+ return decorated
407
+
408
+
409
+ def _ensure_unique_tensor_objects(parameter_positions, args):
410
+ """Make each of the parameter_positions in args a unique tensor_lib.Tensor object.
411
+
412
+ Ensure that each parameter is treated independently.
413
+ For example:
414
+
415
+ def f(x, y): return x * y
416
+ g = gradients_function(f)
417
+ one = tf.constant(1.)
418
+
419
+ g(one, one) should return [1., 1.]
420
+ (even though the two arguments are the same Tensor object).
421
+
422
+ Args:
423
+ parameter_positions: List of indices into args defining the arguments to
424
+ differentiate against.
425
+ args: A list of arguments to the function to be differentiated.
426
+
427
+ Returns:
428
+ args, possibly edited in-place.
429
+ """
430
+ s = set()
431
+ for (i, t) in enumerate(args):
432
+ if i in parameter_positions:
433
+ tid = ops.tensor_id(t)
434
+ if tid in s:
435
+ args[i] = gen_array_ops.identity(args[i])
436
+ else:
437
+ s.add(tid)
438
+ return args
439
+
440
+
441
+ def val_and_grad_function(f, params=None):
442
+ """Returns a function that computes f and its derivative w.r.t. params.
443
+
444
+ Example:
445
+ ```python
446
+ # f(x, y) = (x ^ 3) * y - x * (y ^ 2)
447
+ # Therefore, the 1st order derivatives are:
448
+ # df / dx = 3 * (x ^ 2) * y - y ^ 2
449
+ # df / dy = x ^ 3 - 2 * x * y
450
+ def f(x, y):
451
+ return x * x * x * y - x * y * y
452
+
453
+ # Obtain a function that returns the function value and the 1st order
454
+ # gradients.
455
+ val_grads_fn = tfe.value_and_gradients_function(f)
456
+
457
+ x = 2.0
458
+ y = 3.0
459
+
460
+ # Invoke the value-and-gradients function.
461
+ f_val, (x_grad, y_grad) = val_grads_fn(x, y)
462
+ assert f_val.numpy() == (2 ** 3) * 3 - 2 * (3 ** 2)
463
+ assert x_grad.numpy() == 3 * (2 ** 2) * 3 - 3 ** 2
464
+ assert y_grad.numpy() == (2 ** 3) - 2 * 2 * 3
465
+
466
+ # To obtain a callable that returns the value of `f` and the gradient(s) of
467
+ # `f` with respect to a subset of its inputs, use the `params` keyword
468
+ # argument with `value_and_gradients_function()`.
469
+ val_ygrad_fn = tfe.value_and_gradients_function(f, params=[1])
470
+
471
+ f_val, (y_grad,) = val_ygrad_fn(x, y)
472
+ assert f_val.numpy() == (2 ** 3) * 3 - 2 * (3 ** 2)
473
+ assert y_grad.numpy() == (2 ** 3) - 2 * 2 * 3
474
+ ```
475
+
476
+ Args:
477
+ f: function to be differentiated. If `f` returns a scalar, this scalar will
478
+ be differentiated. If `f` returns a tensor or list of tensors, by default
479
+ a scalar will be computed by adding all their values to produce a single
480
+ scalar. If desired, the tensors can be elementwise multiplied by the
481
+ tensors passed as the `dy` keyword argument to the returned gradient
482
+ function.
483
+ params: list of parameter names of f or list of integers indexing the
484
+ parameters with respect to which we'll differentiate. Passing `None`
485
+ differentiates with respect to all parameters.
486
+
487
+ Returns:
488
+ function which, when called, returns the value of f and the gradient
489
+ of f with respect to all of `params`. The function takes an extra optional
490
+ keyword argument "dy". Setting it allows computation of vector jacobian
491
+ products for vectors other than the vector of ones.
492
+
493
+ Raises:
494
+ ValueError: if the params are not all strings or all integers.
495
+ """
496
+
497
+ def decorated(*args, **kwds):
498
+ """Computes the value and gradient of the decorated function."""
499
+ dy = kwds.pop("dy", None)
500
+ if kwds:
501
+ raise ValueError("Functions to be differentiated cannot "
502
+ "receive keyword arguments.")
503
+ val, vjp = make_vjp(f, params)(*args, **kwds)
504
+ return val, vjp(dy=dy)
505
+
506
+ return decorated
507
+
508
+
509
+ def make_vjp(f, params=None, persistent=True):
+   """Returns a function that computes f and its vjp w.r.t. params.
+
+   The term "vjp" here is an abbreviation for vector-jacobian product.
+
+   Args:
+     f: the function to be differentiated.
+     params: the parameters (numbers or names) to differentiate with respect
+       to. A value of None will differentiate with respect to all parameters.
+     persistent: Boolean controlling whether the VJP function can be re-used.
+       Must be True or False.
+
+   Returns:
+     A function, which when called, returns a tuple (value, vjp), where:
+     - value is the result of calling f.
+     - vjp is a function, which takes a vector as an argument and
+       returns the product of that vector with the Jacobian of f.
+       Providing no argument to vjp is equivalent to providing a
+       vector of ones.
+
+     For example,
+     ```python
+     def f(x):
+       return x * x
+
+     wrapped_fn = tfe.make_vjp(f)
+     result, vjp = wrapped_fn(tf.constant(3.0))
+     # result is 9.0
+     vjp()  # the vjp function returns 6.0
+     ```
+
+   Raises:
+     ValueError: if `f` returns None.
+   """
+
+   def decorated(*args, **kwds):
+     """Computes the value and gradient of the decorated function."""
+     parameter_positions = _get_arg_spec(f, params, args)
+     assert not kwds, "The gradient function can't take keyword arguments."
+     this_tape = tape.push_new_tape(persistent=persistent)
+     try:
+       sources = []
+       args = [
+           ops.convert_to_tensor(arg) if i in parameter_positions else arg
+           for i, arg in enumerate(args)
+       ]
+       args = _ensure_unique_tensor_objects(parameter_positions, args)
+       for i in parameter_positions:
+         if getattr(args[i], "is_packed", False):
+           raise ValueError(
+               "GradientTape.gradient is not supported on packed EagerTensors "
+               "yet.")
+         sources.append(args[i])
+         tape.watch(this_tape, args[i])
+       result = f(*args)
+       if result is None:
+         raise ValueError("Cannot differentiate a function that returns None; "
+                          "did you forget to return a value from {}?".format(
+                              f.__name__))
+       flat_result = nest.flatten(result)
+       flat_result = [gen_array_ops.identity(x) for x in flat_result]
+       result = nest.pack_sequence_as(result, flat_result)
+     finally:
+       tape.pop_tape(this_tape)
+
+     def vjp(dy=None):
+       if dy is not None:
+         dy = [ops.convert_to_tensor(x) for x in nest.flatten(dy)]
+       return imperative_grad.imperative_grad(
+           this_tape, nest.flatten(result), sources, output_gradients=dy)
+
+     return result, vjp
+
+   return decorated
+
+
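The `make_vjp` contract above (return f's value plus a closure mapping a cotangent `dy` to dy·J) can be sketched numerically in plain Python for a scalar function; `make_vjp_fd` is an illustrative finite-difference stand-in, not TF API:

```python
# Numerical sketch of the make_vjp contract: return f's value plus a
# closure mapping a cotangent `dy` to dy * J (f is scalar here, so the
# Jacobian is just df/dx). Illustrative only.
def make_vjp_fd(f, eps=1e-5):
    def wrapped(x):
        value = f(x)
        def vjp(dy=1.0):
            # dy defaulting to 1.0 mirrors "no argument == vector of ones".
            jac = (f(x + eps) - f(x - eps)) / (2 * eps)
            return dy * jac
        return value, vjp
    return wrapped

result, vjp = make_vjp_fd(lambda x: x * x)(3.0)
assert result == 9.0
assert abs(vjp() - 6.0) < 1e-4        # matches the docstring example
assert abs(vjp(dy=2.0) - 12.0) < 1e-4
```

Note how the real implementation gets the same reuse property via `persistent=True` on the underlying tape: the returned `vjp` can be called repeatedly with different `dy`.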
+ def _aggregate_grads(gradients):
+   """Aggregate gradients from multiple sources.
+
+   Args:
+     gradients: A list of 'Tensor' or 'IndexedSlices' gradients.
+
+   Returns:
+     If 'gradients' only has 'Tensor', returns an aggregated 'Tensor'.
+     Otherwise returns an aggregated 'IndexedSlices'.
+   """
+   assert gradients, "No gradients to aggregate"
+
+   if len(gradients) == 1:
+     return gradients[0]
+   if all(isinstance(g, tensor_lib.Tensor) for g in gradients):
+     return gen_math_ops.add_n(gradients)
+   else:
+     assert all(
+         isinstance(g, (tensor_lib.Tensor, indexed_slices.IndexedSlices))
+         for g in gradients)
+     return backprop_util.AggregateIndexedSlicesGradients(gradients)
+
+
+ def _num_elements(grad):
+   """The number of elements in the `grad` tensor."""
+   if isinstance(grad, tensor_lib.Tensor):
+     shape_tuple = grad._shape_tuple()  # pylint: disable=protected-access
+   elif isinstance(grad, indexed_slices.IndexedSlices):
+     shape_tuple = grad.values._shape_tuple()  # pylint: disable=protected-access
+   else:
+     raise ValueError("`grad` not a Tensor or IndexedSlices.")
+   if shape_tuple is None or None in shape_tuple:
+     return 0
+   return functools.reduce(operator.mul, shape_tuple, 1)
+
+
+ def _fast_fill(value, shape, dtype):
+   return array_ops.fill(
+       constant_op.constant(shape, dtype=dtypes.int32),
+       constant_op.constant(value, dtype=dtype))
+
+
+ def _zeros(shape, dtype):
+   """Helper to return (possibly cached) zero tensors in eager mode."""
+   # Note: variants will use _zeros_like
+   if dtype == dtypes.string or dtype == dtypes.resource:
+     return None
+
+   ctx = context.context()
+   if not ctx.executing_eagerly():
+     return array_ops.zeros(shape, dtype)
+
+   device = ctx.device_name
+
+   if tensor_util.is_tf_type(shape):
+     shape_key = shape.ref()
+   else:
+     shape_key = shape
+   cache_key = shape_key, dtype, device
+   cached = ctx.zeros_cache().get(cache_key)
+   if cached is None:
+     if dtypes.as_dtype(dtype).is_bool:
+       value = False
+     else:
+       value = 0
+     cached = _fast_fill(value, shape, dtype)
+     ctx.zeros_cache().put(cache_key, cached)
+   return cached
+
+
+ def _ones(shape, dtype):
+   as_dtype = dtypes.as_dtype(dtype)
+   if as_dtype == dtypes.string:
+     return None
+
+   if not context.executing_eagerly():
+     return array_ops.ones(shape, dtype)
+
+   if as_dtype.is_bool:
+     value = True
+   else:
+     value = 1
+
+   if shape == ():  # pylint: disable=g-explicit-bool-comparison
+     return constant_op.constant(value, dtype=dtype)
+   return _fast_fill(value, shape, dtype)
+
+
+ _default_vspace = imperative_grad.VSpace(
+     num_elements_fn=_num_elements,
+     aggregate_fn=_aggregate_grads,
+     zeros_fn=_zeros,
+     ones_fn=_ones,
+     zeros_like_fn=default_gradient.zeros_like,
+     ones_like_fn=default_gradient.ones_like,
+     graph_shape_fn=gen_array_ops.shape)
+ pywrap_tfe.TFE_Py_RegisterVSpace(_default_vspace)
+
+
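The `_zeros` helper above memoizes zero tensors keyed by `(shape, dtype, device)` so repeated backprop steps reuse one allocation. The caching pattern can be sketched in plain Python; `make_fill` and `cached_zeros` are hypothetical stand-ins for `_fast_fill` and the context's zeros cache, not TF API:

```python
# Illustrative sketch of the (shape, dtype, device)-keyed zeros cache
# used by _zeros above. `make_fill` stands in for _fast_fill.
_cache = {}

def make_fill(value, n):
    return [value] * n  # stand-in for array_ops.fill

def cached_zeros(n, dtype="float32", device="CPU:0"):
    key = (n, dtype, device)
    if key not in _cache:
        _cache[key] = make_fill(0, n)
    return _cache[key]

a = cached_zeros(3)
b = cached_zeros(3)
assert a is b            # second call hits the cache
assert a == [0, 0, 0]
assert cached_zeros(3, device="GPU:0") is not a  # different key, new entry
```

Caching on identity is safe here only because zero tensors are never mutated; the real cache lives on the eager context so it is dropped with it.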
+ def _handle_or_self(x):
+   """Unwrap resource variable/ndarray to return tensors."""
+   if resource_variable_ops.is_resource_variable(x):
+     return x.handle
+   return x
+
+
+ def _extract_tensors_and_variables(tensor):
+   """Extracts tensors and variables from the input object."""
+   for obj in nest.flatten(tensor):
+     if _pywrap_utils.IsTensor(obj) or _pywrap_utils.IsVariable(obj):
+       yield obj
+     elif isinstance(obj, composite_tensor.CompositeTensor):
+       components = type_spec.type_spec_from_value(obj)._to_components(obj)  # pylint: disable=protected-access
+       yield from _extract_tensors_and_variables(components)
+     else:
+       raise ValueError(f"Passed in object {obj} of type {type(obj).__name__!r}"
+                        f", not tf.Tensor or tf.Variable or ExtensionType.")
+
+
+ @tf_export("GradientTape", "autodiff.GradientTape", v1=["GradientTape"])
+ class GradientTape:
+   """Record operations for automatic differentiation.
+
+   Operations are recorded if they are executed within this context manager
+   and at least one of their inputs is being "watched".
+
+   Trainable variables (created by `tf.Variable` or `tf.compat.v1.get_variable`,
+   where `trainable=True` is default in both cases) are automatically watched.
+   Tensors can be manually watched by invoking the `watch` method on this
+   context manager.
+
+   For example, consider the function `y = x * x`. The gradient at `x = 3.0`
+   can be computed as:
+
+   >>> x = tf.constant(3.0)
+   >>> with tf.GradientTape() as g:
+   ...   g.watch(x)
+   ...   y = x * x
+   >>> dy_dx = g.gradient(y, x)
+   >>> print(dy_dx)
+   tf.Tensor(6.0, shape=(), dtype=float32)
+
+   GradientTapes can be nested to compute higher-order derivatives. For
+   example,
+
+   >>> x = tf.constant(5.0)
+   >>> with tf.GradientTape() as g:
+   ...   g.watch(x)
+   ...   with tf.GradientTape() as gg:
+   ...     gg.watch(x)
+   ...     y = x * x
+   ...   dy_dx = gg.gradient(y, x)  # dy_dx = 2 * x
+   >>> d2y_dx2 = g.gradient(dy_dx, x)  # d2y_dx2 = 2
+   >>> print(dy_dx)
+   tf.Tensor(10.0, shape=(), dtype=float32)
+   >>> print(d2y_dx2)
+   tf.Tensor(2.0, shape=(), dtype=float32)
+
+   By default, the resources held by a GradientTape are released as soon as
+   the GradientTape.gradient() method is called. To compute multiple gradients
+   over the same computation, create a persistent gradient tape. This allows
+   multiple calls to the gradient() method, as resources are released when the
+   tape object is garbage collected. For example:
+
+   >>> x = tf.constant(3.0)
+   >>> with tf.GradientTape(persistent=True) as g:
+   ...   g.watch(x)
+   ...   y = x * x
+   ...   z = y * y
+   >>> dz_dx = g.gradient(z, x)  # (4*x^3 at x = 3)
+   >>> print(dz_dx)
+   tf.Tensor(108.0, shape=(), dtype=float32)
+   >>> dy_dx = g.gradient(y, x)
+   >>> print(dy_dx)
+   tf.Tensor(6.0, shape=(), dtype=float32)
+
+   By default GradientTape will automatically watch any trainable variables
+   that are accessed inside the context. If you want fine-grained control over
+   which variables are watched you can disable automatic tracking by passing
+   `watch_accessed_variables=False` to the tape constructor:
+
+   >>> x = tf.Variable(2.0)
+   >>> w = tf.Variable(5.0)
+   >>> with tf.GradientTape(
+   ...     watch_accessed_variables=False, persistent=True) as tape:
+   ...   tape.watch(x)
+   ...   y = x ** 2  # Gradients will be available for `x`.
+   ...   z = w ** 3  # No gradients will be available as `w` isn't being watched.
+   >>> dy_dx = tape.gradient(y, x)
+   >>> print(dy_dx)
+   tf.Tensor(4.0, shape=(), dtype=float32)
+   >>> # No gradients will be available as `w` isn't being watched.
+   >>> dz_dw = tape.gradient(z, w)
+   >>> print(dz_dw)
+   None
+
+   Note that when using models you should ensure that your variables exist
+   when using `watch_accessed_variables=False`. Otherwise it's quite easy to
+   make your first iteration not have any gradients:
+
+   ```python
+   a = tf.keras.layers.Dense(32)
+   b = tf.keras.layers.Dense(32)
+
+   with tf.GradientTape(watch_accessed_variables=False) as tape:
+     tape.watch(a.variables)  # Since `a.build` has not been called at this
+                              # point `a.variables` will return an empty list
+                              # and the tape will not be watching anything.
+     result = b(a(inputs))
+     tape.gradient(result, a.variables)  # The result of this computation will
+                                         # be a list of `None`s since a's
+                                         # variables are not being watched.
+   ```
+
+   Note that only tensors with real or complex dtypes are differentiable.
+   """
+
801
+ def __init__(self, persistent=False, watch_accessed_variables=True):
802
+ """Creates a new GradientTape.
803
+
804
+ Args:
805
+ persistent: Boolean controlling whether a persistent gradient tape
806
+ is created. False by default, which means at most one call can
807
+ be made to the gradient() method on this object.
808
+ watch_accessed_variables: Boolean controlling whether the tape will
809
+ automatically `watch` any (trainable) variables accessed while the tape
810
+ is active. Defaults to True meaning gradients can be requested from any
811
+ result computed in the tape derived from reading a trainable `Variable`.
812
+ If False users must explicitly `watch` any `Variable`s they want to
813
+ request gradients from.
814
+ """
815
+ self._tape = None
816
+ self._persistent = persistent
817
+ self._watch_accessed_variables = watch_accessed_variables
818
+ self._watched_variables = ()
819
+ self._recording = False
820
+
821
+ def __enter__(self):
822
+ """Enters a context inside which operations are recorded on this tape."""
823
+ self._push_tape()
824
+ return self
825
+
826
+ def __exit__(self, typ, value, traceback):
827
+ """Exits the recording context, no further operations are traced."""
828
+ if self._recording:
829
+ self._pop_tape()
830
+
831
+ def _push_tape(self):
832
+ """Pushes a new tape onto the tape stack."""
833
+ if self._recording:
834
+ raise ValueError("Tape is still recording, This can happen if you try to "
835
+ "re-enter an already-active tape.")
836
+ if self._tape is None:
837
+ self._tape = tape.push_new_tape(
838
+ persistent=self._persistent,
839
+ watch_accessed_variables=self._watch_accessed_variables)
840
+ else:
841
+ tape.push_tape(self._tape)
842
+ self._recording = True
843
+
844
+ def _pop_tape(self):
845
+ if not self._recording:
846
+ raise ValueError("Tape is not recording.")
847
+ tape.pop_tape(self._tape)
848
+ self._recording = False
849
+
850
+ @tf_contextlib.contextmanager
851
+ def _ensure_recording(self):
852
+ """Ensures that this tape is recording."""
853
+ if not self._recording:
854
+ try:
855
+ self._push_tape()
856
+ yield
857
+ finally:
858
+ self._pop_tape()
859
+ else:
860
+ yield
861
+
862
+ # TODO(b/209081027): Add a variable in composite tensor test case after
863
+ # variables become composite tensors.
864
+ def watch(self, tensor):
865
+ """Ensures that `tensor` is being traced by this tape.
866
+
867
+ Args:
868
+ tensor: a Tensor/Variable or list of Tensors/Variables.
869
+
870
+ Raises:
871
+ ValueError: if it encounters something that is not a tensor.
872
+ """
873
+ for t in _extract_tensors_and_variables(tensor):
874
+ if not backprop_util.IsTrainable(t):
875
+ logging.log_first_n(
876
+ logging.WARN, "The dtype of the watched tensor must be "
877
+ "floating (e.g. tf.float32), got %r", 5, t.dtype)
878
+ if hasattr(t, "handle"):
879
+ # There are many variable-like objects, all of them currently have
880
+ # `handle` attribute that points to a tensor. If this changes,
881
+ # internals of watch_variable need to change as well.
882
+ tape.watch_variable(self._tape, t)
883
+ else:
884
+ tape.watch(self._tape, t)
885
+
886
+ @tf_contextlib.contextmanager
887
+ def stop_recording(self):
888
+ """Temporarily stops recording operations on this tape.
889
+
890
+ Operations executed while this context manager is active will not be
891
+ recorded on the tape. This is useful for reducing the memory used by tracing
892
+ all computations.
893
+
894
+ For example:
895
+
896
+ >>> x = tf.constant(4.0)
897
+ >>> with tf.GradientTape() as tape:
898
+ ... with tape.stop_recording():
899
+ ... y = x ** 2
900
+ >>> dy_dx = tape.gradient(y, x)
901
+ >>> print(dy_dx)
902
+ None
903
+
904
+ Yields:
905
+ None
906
+ Raises:
907
+ RuntimeError: if the tape is not currently recording.
908
+ """
909
+ if self._tape is None:
910
+ raise RuntimeError(
911
+ "Trying to stop recording a tape which is not recording.")
912
+ self._pop_tape()
913
+ try:
914
+ yield
915
+ finally:
916
+ self._push_tape()
917
+
918
+ def reset(self):
919
+ """Clears all information stored in this tape.
920
+
921
+ Equivalent to exiting and reentering the tape context manager with a new
922
+ tape. For example, the two following code blocks are equivalent:
923
+
924
+ ```
925
+ with tf.GradientTape() as t:
926
+ loss = loss_fn()
927
+ with tf.GradientTape() as t:
928
+ loss += other_loss_fn()
929
+ t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn
930
+
931
+
932
+ # The following is equivalent to the above
933
+ with tf.GradientTape() as t:
934
+ loss = loss_fn()
935
+ t.reset()
936
+ loss += other_loss_fn()
937
+ t.gradient(loss, ...) # Only differentiates other_loss_fn, not loss_fn
938
+ ```
939
+
940
+ This is useful if you don't want to exit the context manager for the tape,
941
+ or can't because the desired reset point is inside a control flow construct:
942
+
943
+ ```
944
+ with tf.GradientTape() as t:
945
+ loss = ...
946
+ if loss > k:
947
+ t.reset()
948
+ ```
949
+ """
950
+ self._pop_tape()
951
+ self._tape = None
952
+ self._push_tape()
953
+
954
+ def watched_variables(self):
955
+ """Returns variables watched by this tape in order of construction."""
956
+ if self._tape is not None:
957
+ self._watched_variables = self._tape.watched_variables()
958
+ return self._watched_variables
959
+
960
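The push/pop discipline behind `_push_tape`, `_pop_tape`, and `stop_recording` above is a small state machine: a tape is either recording or not, and `stop_recording` is pop-yield-push. A toy plain-Python sketch (names illustrative, not TF API):

```python
# Toy sketch of the tape recording state machine above.
from contextlib import contextmanager

class ToyTape:
    def __init__(self):
        self.recording = False
        self.ops = []

    def push(self):
        if self.recording:
            raise ValueError("Tape is still recording.")
        self.recording = True

    def pop(self):
        if not self.recording:
            raise ValueError("Tape is not recording.")
        self.recording = False

    @contextmanager
    def stop_recording(self):
        # Mirror GradientTape.stop_recording: pop, yield, push back.
        self.pop()
        try:
            yield
        finally:
            self.push()

    def record(self, op):
        if self.recording:
            self.ops.append(op)

t = ToyTape()
t.push()
t.record("mul")
with t.stop_recording():
    t.record("square")   # not recorded
t.record("add")
t.pop()
assert t.ops == ["mul", "add"]
```

The `try/finally` mirrors the real code: recording is always resumed even if the enclosed block raises.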
+   def gradient(self,
+                target,
+                sources,
+                output_gradients=None,
+                unconnected_gradients=UnconnectedGradients.NONE):
+     """Computes the gradient using operations recorded in context of this tape.
+
+     Note: Unless you set `persistent=True` a GradientTape can only be used to
+     compute one set of gradients (or jacobians).
+
+     In addition to Tensors, gradient also supports RaggedTensors. For example,
+
+     >>> x = tf.ragged.constant([[1.0, 2.0], [3.0]])
+     >>> with tf.GradientTape() as g:
+     ...   g.watch(x)
+     ...   y = x * x
+     >>> g.gradient(y, x)
+     <tf.RaggedTensor [[2.0, 4.0], [6.0]]>
+
+     Args:
+       target: a list or nested structure of Tensors or Variables or
+         CompositeTensors to be differentiated.
+       sources: a list or nested structure of Tensors or Variables or
+         CompositeTensors. `target` will be differentiated against elements
+         in `sources`.
+       output_gradients: a list of gradients, one for each differentiable
+         element of target. Defaults to None.
+       unconnected_gradients: a value which can either hold 'none' or 'zero'
+         and alters the value which will be returned if the target and
+         sources are unconnected. The possible values and effects are
+         detailed in 'UnconnectedGradients' and it defaults to 'none'.
+
+     Returns:
+       a list or nested structure of Tensors (or IndexedSlices, or None, or
+       CompositeTensor), one for each element in `sources`. Returned structure
+       is the same as the structure of `sources`.
+
+     Raises:
+       RuntimeError: If called on a used, non-persistent tape.
+       RuntimeError: If called inside the context of the tape.
+       TypeError: If the target is a None object.
+       ValueError: If the target is a variable or if unconnected gradients is
+         called with an unknown value.
+     """
+     if self._tape is None:
+       raise RuntimeError("A non-persistent GradientTape can only be used to "
+                          "compute one set of gradients (or jacobians).")
+     if self._recording:
+       if not self._persistent:
+         self._pop_tape()
+       else:
+         logging.log_first_n(
+             logging.WARN, "Calling GradientTape.gradient on a persistent "
+             "tape inside its context is significantly less "
+             "efficient than calling it outside the context (it "
+             "causes the gradient ops to be recorded on the "
+             "tape, leading to increased CPU and memory usage). "
+             "Only call GradientTape.gradient inside the "
+             "context if you actually want to trace the "
+             "gradient in order to compute higher order "
+             "derivatives.", 1)
+
+     if target is None:
+       raise TypeError("Argument `target` should be a list or nested structure"
+                       " of Tensors, Variables or CompositeTensors to be "
+                       "differentiated, but received None.")
+
+     flat_targets = composite_tensor_gradient.get_flat_tensors_for_gradients(
+         nest.flatten(target))
+     # TODO(b/246997907): Remove this once
+     # ResourceVariableGradient.get_gradient_components returns the handle.
+     flat_targets = nest.map_structure(_handle_or_self, flat_targets)
+
+     for t in flat_targets:
+       if not backprop_util.IsTrainable(t):
+         logging.vlog(
+             1, "The dtype of the target tensor must be "
+             "floating (e.g. tf.float32) when calling GradientTape.gradient, "
+             "got %r", t.dtype)
+
+     flat_sources_raw = nest.flatten(sources)
+     flat_sources = []
+     for t in flat_sources_raw:
+       flat_sources.append(_handle_or_self(t))
+     flat_sources = composite_tensor_gradient.get_flat_tensors_for_gradients(
+         flat_sources)
+     for t in flat_sources:
+       if not backprop_util.IsTrainable(t):
+         logging.vlog(
+             1, "The dtype of the source tensor must be "
+             "floating (e.g. tf.float32) when calling GradientTape.gradient, "
+             "got %r", t.dtype)
+       if getattr(t, "is_packed", False):
+         raise ValueError(
+             "GradientTape.gradient is not supported on packed EagerTensors "
+             "yet.")
+
+     if output_gradients is not None:
+       output_gradients = nest.flatten(
+           variable_utils.convert_variables_to_tensors(output_gradients))
+       output_gradients = (
+           composite_tensor_gradient.get_flat_tensors_for_gradients(
+               output_gradients))
+       output_gradients = [None if x is None else ops.convert_to_tensor(x)
+                           for x in output_gradients]
+
+     flat_grad = imperative_grad.imperative_grad(
+         self._tape,
+         flat_targets,
+         flat_sources,
+         output_gradients=output_gradients,
+         sources_raw=flat_sources_raw,
+         unconnected_gradients=unconnected_gradients)
+
+     if not self._persistent:
+       # Keep track of watched variables before setting tape to None
+       self._watched_variables = self._tape.watched_variables()
+       self._tape = None
+
+     flat_sources_raw = nest.map_structure(_handle_or_self, flat_sources_raw)
+     flat_grad = composite_tensor_gradient.replace_flat_tensors_for_gradients(
+         flat_sources_raw, flat_grad)
+     grad = nest.pack_sequence_as(sources, flat_grad)
+     return grad
+
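The `output_gradients` argument of `gradient()` above supplies the cotangent vector `dy`, so the result is dy·J rather than the default sum over rows of the Jacobian (a vector of ones). This can be illustrated numerically in plain Python, no TensorFlow required; `vjp_fd` is an illustrative finite-difference helper, not TF API:

```python
# Numeric illustration of output_gradients: result is dy . J, where J
# is the Jacobian of f at x. For f(v) = v*v elementwise, J = diag(2*v).
def vjp_fd(f, x, dy, eps=1e-5):
    out = []
    for j in range(len(x)):
        xp = list(x); xp[j] += eps
        xm = list(x); xm[j] -= eps
        yp, ym = f(xp), f(xm)
        col = [(yp[i] - ym[i]) / (2 * eps) for i in range(len(yp))]
        out.append(sum(d * c for d, c in zip(dy, col)))
    return out

square = lambda v: [e * e for e in v]
x = [1.0, 2.0]
g = vjp_fd(square, x, [1.0, 1.0])    # default dy of ones: [2*x0, 2*x1]
assert all(abs(a - b) < 1e-4 for a, b in zip(g, [2.0, 4.0]))
gw = vjp_fd(square, x, [3.0, 0.5])   # weighted cotangent: [6.0, 2.0]
assert all(abs(a - b) < 1e-4 for a, b in zip(gw, [6.0, 2.0]))
```

This is the same vector-jacobian-product semantics that `make_vjp` exposes through its `dy` argument.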
+   def jacobian(self,
+                target,
+                sources,
+                unconnected_gradients=UnconnectedGradients.NONE,
+                parallel_iterations=None,
+                experimental_use_pfor=True):
+     """Computes the jacobian using operations recorded in context of this tape.
+
+     Note: Unless you set `persistent=True` a GradientTape can only be used to
+     compute one set of gradients (or jacobians).
+
+     Note: By default the jacobian implementation uses parallel for (pfor),
+     which creates a tf.function under the hood for each jacobian call. For
+     better performance, and to avoid recompilation and vectorization rewrites
+     on each call, enclose GradientTape code in @tf.function.
+
+     See the [Wikipedia
+     article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant)
+     for the definition of a Jacobian.
+
+     Example usage:
+
+     ```python
+     with tf.GradientTape() as g:
+       x = tf.constant([1.0, 2.0])
+       g.watch(x)
+       y = x * x
+     jacobian = g.jacobian(y, x)
+     # jacobian value is [[2., 0.], [0., 4.]]
+     ```
+
+     Args:
+       target: Tensor to be differentiated.
+       sources: a list or nested structure of Tensors or Variables. `target`
+         will be differentiated against elements in `sources`.
+       unconnected_gradients: a value which can either hold 'none' or 'zero'
+         and alters the value which will be returned if the target and
+         sources are unconnected. The possible values and effects are
+         detailed in 'UnconnectedGradients' and it defaults to 'none'.
+       parallel_iterations: A knob to control how many iterations are
+         dispatched in parallel. This knob can be used to control the total
+         memory usage.
+       experimental_use_pfor: If true, vectorizes the jacobian computation.
+         Else falls back to a sequential while_loop. Vectorization can
+         sometimes fail or lead to excessive memory usage. This option can be
+         used to disable vectorization in such cases.
+
+     Returns:
+       A list or nested structure of Tensors (or None), one for each element
+       in `sources`. Returned structure is the same as the structure of
+       `sources`. Note if any gradient is sparse (IndexedSlices), the jacobian
+       function currently makes it dense and returns a Tensor instead. This
+       may change in the future.
+
+     Raises:
+       RuntimeError: If called on a used, non-persistent tape.
+       RuntimeError: If called on a non-persistent tape with eager execution
+         enabled and without enabling experimental_use_pfor.
+       ValueError: If vectorization of jacobian computation fails.
+     """
+     if self._tape is None:
+       raise RuntimeError("A non-persistent GradientTape can only be used to "
+                          "compute one set of gradients (or jacobians).")
+
+     flat_sources = nest.flatten(sources)
+     target_static_shape = target.shape
+     target_shape = array_ops.shape(target)
+     # Note that we push and pop the tape here and below. This is needed since
+     # we need gradients through the enclosed operations.
+     with self._ensure_recording():
+       target = array_ops.reshape(target, [-1])
+
+     def loop_fn(i):
+       with self._ensure_recording():
+         y = array_ops.gather(target, i)
+       return self.gradient(y, flat_sources,
+                            unconnected_gradients=unconnected_gradients)
+
+     try:
+       target_size = int(target.shape[0])
+     except TypeError:
+       target_size = array_ops.shape(target)[0]
+
+     if experimental_use_pfor:
+       try:
+         output = pfor_ops.pfor(loop_fn, target_size,
+                                parallel_iterations=parallel_iterations)
+       except ValueError as err:
+         raise ValueError(
+             "Encountered an exception while vectorizing the "
+             "jacobian computation. Vectorization can be disabled by setting"
+             " experimental_use_pfor to False.") from err
+     else:
+       if context.executing_eagerly() and not self._persistent:
+         raise RuntimeError(
+             "GradientTape must be created with persistent=True"
+             " to compute the jacobian with eager execution enabled and with"
+             " experimental_use_pfor set to False.")
+       output = pfor_ops.for_loop(
+           loop_fn, [target.dtype] * len(flat_sources), target_size,
+           parallel_iterations=parallel_iterations)
+
+     for i, out in enumerate(output):
+       if out is not None:
+         new_shape = array_ops.concat(
+             [target_shape, array_ops.shape(out)[1:]], axis=0)
+         out = array_ops.reshape(out, new_shape)
+         if context.executing_eagerly():
+           out.set_shape(target_static_shape.concatenate(flat_sources[i].shape))
+       output[i] = out
+
+     return nest.pack_sequence_as(sources, output)
+
+ def batch_jacobian(self,
1199
+ target,
1200
+ source,
1201
+ unconnected_gradients=UnconnectedGradients.NONE,
1202
+ parallel_iterations=None,
1203
+ experimental_use_pfor=True):
1204
+ """Computes and stacks per-example jacobians.
1205
+
1206
+ See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant)
1207
+ for the definition of a Jacobian. This function is essentially an efficient
1208
+ implementation of the following:
1209
+
1210
+ `tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])])`.
1211
+
1212
+ Note that compared to `GradientTape.jacobian` which computes gradient of
1213
+ each output value w.r.t each input value, this function is useful when
1214
+ `target[i,...]` is independent of `source[j,...]` for `j != i`. This
1215
+ assumption allows more efficient computation as compared to
1216
+ `GradientTape.jacobian`. The output, as well as intermediate activations,
1217
+ are lower dimensional and avoid a bunch of redundant zeros which would
1218
+ result in the jacobian computation given the independence assumption.
1219
+
1220
+ Note: Unless you set `persistent=True` a GradientTape can only be used to
1221
+ compute one set of gradients (or jacobians).
1222
+
1223
+ Note: By default the batch_jacobian implementation uses parallel for (pfor),
1224
+ which creates a tf.function under the hood for each batch_jacobian call.
1225
+ For better performance, and to avoid recompilation and vectorization
1226
+ rewrites on each call, enclose GradientTape code in @tf.function.
1227
+
1228
+
1229
+ Example usage:
1230
+
1231
+ ```python
1232
+ with tf.GradientTape() as g:
1233
+ x = tf.constant([[1., 2.], [3., 4.]], dtype=tf.float32)
1234
+ g.watch(x)
1235
+ y = x * x
1236
+ batch_jacobian = g.batch_jacobian(y, x)
1237
+ # batch_jacobian is [[[2, 0], [0, 4]], [[6, 0], [0, 8]]]
1238
+ ```
1239
+
1240
+ Args:
1241
+ target: A tensor with rank 2 or higher and with shape [b, y1, ..., y_n].
1242
+ `target[i,...]` should only depend on `source[i,...]`.
1243
+ source: A tensor with rank 2 or higher and with shape [b, x1, ..., x_m].
1244
+ unconnected_gradients: a value which can either hold 'none' or 'zero' and
1245
+ alters the value which will be returned if the target and sources are
1246
+ unconnected. The possible values and effects are detailed in
1247
+ 'UnconnectedGradients' and it defaults to 'none'.
1248
+ parallel_iterations: A knob to control how many iterations are dispatched
1249
+ in parallel. This knob can be used to control the total memory usage.
1250
+ experimental_use_pfor: If true, uses pfor for computing the Jacobian. Else
1251
+ uses a tf.while_loop.
1252
+
1253
+ Returns:
1254
+ A tensor `t` with shape [b, y_1, ..., y_n, x1, ..., x_m] where `t[i, ...]`
1255
+ is the jacobian of `target[i, ...]` w.r.t. `source[i, ...]`, i.e. stacked
1256
+ per-example jacobians.
1257
+
1258
+ Raises:
1259
+ RuntimeError: If called on a used, non-persistent tape.
1260
+ RuntimeError: If called on a non-persistent tape with eager execution
1261
+ enabled and without enabling experimental_use_pfor.
1262
+ ValueError: If vectorization of jacobian computation fails or if first
1263
+ dimension of `target` and `source` do not match.
1264
+ """
1265
+ if self._tape is None:
1266
+ raise RuntimeError("A non-persistent GradientTape can only be used to"
1267
+ "compute one set of gradients (or jacobians)")
1268
+ target_shape = target.shape
1269
+ if target_shape.rank is None:
1270
+ dim = tensor_shape.Dimension(None)
1271
+ else:
1272
+ dim = target_shape.dims[0]
1273
+ if not (target_shape.with_rank_at_least(2) and
1274
+ source.shape.with_rank_at_least(2) and
1275
+ dim.is_compatible_with(source.shape[0])):
1276
+ raise ValueError(
1277
+ "Need first dimension of target shape (%s) and "
1278
+ "source shape (%s) to match." % (target.shape, source.shape))
1279
+ if target_shape.is_fully_defined():
1280
+ batch_size = int(target_shape[0])
1281
+ target_row_size = target_shape.num_elements() // batch_size
1282
+ else:
1283
+ target_shape = array_ops.shape(target)
1284
+ batch_size = target_shape[0]
1285
+ target_row_size = array_ops.size(target) // batch_size
1286
+ source_shape = array_ops.shape(source)
+ # Flatten target to 2-D.
+ # Note that we push and pop the tape here and below. This is needed since we
+ # need gradients through the enclosed operations.
+ with self._ensure_recording():
+ with ops.control_dependencies(
+ [check_ops.assert_equal(batch_size, source_shape[0])]):
+ target = array_ops.reshape(target, [batch_size, target_row_size])
+
+ run_once = False
+
+ def loop_fn(i):
+ nonlocal run_once
+ if run_once and not self._persistent:
+ if parallel_iterations is not None:
+ raise RuntimeError(
+ "GradientTape must be created with persistent=True"
+ " to compute the batch_jacobian with parallel_iterations.")
+ else:
+ raise RuntimeError(
+ "GradientTape must be created with persistent=True"
+ " to compute the batch_jacobian.")
+ run_once = True
+
+ with self._ensure_recording():
+ y = array_ops.gather(target, i, axis=1)
+ return self.gradient(y, source,
+ unconnected_gradients=unconnected_gradients)
+
+ if experimental_use_pfor:
+ try:
+ output = pfor_ops.pfor(loop_fn, target_row_size,
+ parallel_iterations=parallel_iterations)
+ except ValueError as err:
+ raise ValueError(
+ "Encountered an exception while vectorizing the "
+ "batch_jacobian computation. Vectorization can be disabled by "
+ "setting experimental_use_pfor to False.") from err
+ else:
+ if context.executing_eagerly() and not self._persistent:
+ raise RuntimeError(
+ "GradientTape must be created with persistent=True"
+ " to compute the batch_jacobian with eager execution enabled and "
+ " with experimental_use_pfor set to False.")
+ output = pfor_ops.for_loop(loop_fn, target.dtype, target_row_size,
+ parallel_iterations=parallel_iterations)
+ new_shape = array_ops.concat([target_shape, source_shape[1:]], axis=0)
+ if output is None:
+ # Note that this block is returning zeros when it could use `None` to
+ # represent unconnected gradients. This is to maintain compatibility with
+ # the previous behavior, which ignored `unconnected_gradients`.
+ output = array_ops.zeros(new_shape, target.dtype)
+ return output
+ else:
+ output = array_ops.reshape(output,
+ [target_row_size, batch_size, -1])
+ output = array_ops.transpose(output, [1, 0, 2])
+
+ output = array_ops.reshape(output, new_shape)
+ return output
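The loop above stacks one per-row gradient per iteration and then reshapes to `[batch, ...]`; conceptually, `batch_jacobian` computes `d target[b, i] / d source[b, j]` with no cross-batch terms. A minimal plain-Python sketch of that contract via finite differences (no TensorFlow; the helper name is hypothetical):

```python
def batch_jacobian_fd(f, x, eps=1e-6):
    """Finite-difference batch jacobian; x is [batch][n], f maps a row to a row."""
    jac = []
    for row in x:
        y0 = f(row)
        n, m = len(row), len(y0)
        # d y[i] / d x[j] for this batch element only (no cross-batch terms).
        jb = [[0.0] * n for _ in range(m)]
        for j in range(n):
            xp = list(row)
            xp[j] += eps
            yp = f(xp)
            for i in range(m):
                jb[i][j] = (yp[i] - y0[i]) / eps
        jac.append(jb)
    return jac

square = lambda row: [v * v for v in row]
jac = batch_jacobian_fd(square, [[1.0, 2.0], [3.0, 4.0]])
# For y = x**2 each per-batch jacobian is diagonal with entries 2*x.
```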
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/backprop_util.py ADDED
@@ -0,0 +1,105 @@
+ # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Shared utilities related to backprop."""
+
+ from tensorflow.core.config import flags
+ from tensorflow.core.framework import types_pb2
+ from tensorflow.python.framework import dtypes
+ from tensorflow.python.framework import indexed_slices
+ from tensorflow.python.framework import ops
+ from tensorflow.python.framework import tensor as tensor_lib
+ from tensorflow.python.framework import tensor_util
+ from tensorflow.python.ops import array_ops
+ from tensorflow.python.ops import handle_data_util
+ from tensorflow.python.ops import math_ops
+
+
+ def _DTypeFromTensor(tensor):
+ """Extract either `tensor.dtype` or the unanimous sub-type of a variant."""
+ dtype = tensor.dtype
+ if dtype.base_dtype == dtypes.variant:
+ # If we know statically that the data a variant points to is non-trainable
+ # then the variant itself is non-trainable.
+ if isinstance(tensor, ops.EagerTensor):
+ handle_data = tensor._handle_data # pylint: disable=protected-access
+ else:
+ handle_data = handle_data_util.get_resource_handle_data(tensor)
+ if (handle_data is not None
+ and handle_data.is_set
+ and handle_data.shape_and_type):
+ first_type = handle_data.shape_and_type[0].dtype
+ # Some variants have statically unknown dtypes; we can't make inferences
+ # about trainability, so we conservatively assume they're trainable
+ # (which may waste memory passing zeros around, but will be correct).
+ if (first_type != types_pb2.DT_INVALID
+ and all(shape_and_type.dtype == first_type
+ for shape_and_type in handle_data.shape_and_type)):
+ return first_type
+ return dtype
+
+
+ def IsTrainable(tensor_or_dtype):
+ """Determines whether a tensor or dtype supports infinitesimal changes."""
+ if tensor_util.is_tf_type(tensor_or_dtype):
+ dtype = _DTypeFromTensor(tensor_or_dtype)
+ else:
+ dtype = tensor_or_dtype
+ dtype = dtypes.as_dtype(dtype)
+ trainable_dtypes = [dtypes.float16, dtypes.float32, dtypes.float64,
+ dtypes.complex64, dtypes.complex128, dtypes.resource,
+ dtypes.variant, dtypes.bfloat16]
+ if flags.config().enable_quantized_dtypes_training.value():
+ trainable_dtypes.extend([dtypes.qint8, dtypes.qint16, dtypes.qint32,
+ dtypes.quint8, dtypes.quint16])
+ return dtype.base_dtype in trainable_dtypes
+
+
+ def FlattenNestedIndexedSlices(grad):
+ assert isinstance(grad, indexed_slices.IndexedSlices)
+ if isinstance(grad.values, tensor_lib.Tensor):
+ return grad
+ else:
+ assert isinstance(grad.values, indexed_slices.IndexedSlices)
+ g = FlattenNestedIndexedSlices(grad.values)
+ return indexed_slices.IndexedSlices(
+ g.values, array_ops.gather(grad.indices, g.indices), g.dense_shape)
+
+
+ def AggregateIndexedSlicesGradients(grads):
+ """Aggregates gradients containing `IndexedSlices`s."""
+ if len(grads) < 1:
+ return None
+ if len(grads) == 1:
+ return grads[0]
+ grads = [g for g in grads if g is not None]
+ # If any gradient is a `Tensor`, sum them up and return a dense tensor
+ # object.
+ if any(isinstance(g, tensor_lib.Tensor) for g in grads):
+ return math_ops.add_n(grads)
+
+ # The following `_as_indexed_slices_list` casts ids of IndexedSlices into
+ # int64. It is to make sure the inputs of `concat` all have the same data
+ # type.
+ grads = math_ops._as_indexed_slices_list(grads) # pylint: disable=protected-access
+
+ grads = [FlattenNestedIndexedSlices(x) for x in grads]
+ # Form IndexedSlices out of the concatenated values and indices.
+ concat_grad = indexed_slices.IndexedSlices(
+ array_ops.concat([x.values for x in grads], axis=0),
+ array_ops.concat([x.indices for x in grads], axis=0),
+ grads[0].dense_shape)
+
+ return concat_grad
+
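`AggregateIndexedSlicesGradients` sums sparse gradients lazily: it concatenates their indices and values rather than densifying, and duplicate indices only accumulate when the result is later materialized. An illustrative plain-Python sketch of that idea (not the TensorFlow implementation; names are hypothetical):

```python
def aggregate_indexed_slices(slices_list):
    """Each element is (indices, values); concatenation is the cheap 'sum'."""
    indices, values = [], []
    for idx, vals in slices_list:
        indices.extend(idx)
        values.extend(vals)
    return indices, values

def densify(indices, values, dense_len):
    out = [0.0] * dense_len
    for i, v in zip(indices, values):
        out[i] += v  # duplicate indices accumulate only here
    return out

g1 = ([0, 2], [1.0, 3.0])
g2 = ([2], [5.0])
idx, vals = aggregate_indexed_slices([g1, g2])
dense = densify(idx, vals, 4)  # index 2 appears twice: 3.0 + 5.0
```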
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/benchmarks_test_base.py ADDED
@@ -0,0 +1,77 @@
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ r"""Benchmark base to run and report benchmark results."""
+
+ import os
+ import uuid
+
+ from tensorflow.python.eager import test
+ from tensorflow.python.platform import flags
+ from tensorflow.python.profiler import profiler_v2 as profiler
+
+ flags.DEFINE_bool("xprof", False, "Run and report benchmarks with xprof on")
+ flags.DEFINE_string("logdir", "/tmp/xprof/", "Directory to store xprof data")
+
+
+ class MicroBenchmarksBase(test.Benchmark):
+ """Run and report benchmark results.
+
+ The first run is without any profiling. The second run is with xprof and
+ python trace. The third run is with xprof without python trace.
+ Note: xprof runs are with fewer iterations.
+ """
+
+ def run_with_xprof(self, enable_python_trace, run_benchmark, func,
+ num_iters_xprof, execution_mode, suid):
+ if enable_python_trace:
+ options = profiler.ProfilerOptions(python_tracer_level=1)
+ logdir = os.path.join(flags.FLAGS.logdir, suid + "_with_python")
+ else:
+ options = profiler.ProfilerOptions(python_tracer_level=0)
+ logdir = os.path.join(flags.FLAGS.logdir, suid)
+ with profiler.Profile(logdir, options):
+ total_time = run_benchmark(func, num_iters_xprof, execution_mode)
+ us_per_example = float("{0:.3f}".format(total_time * 1e6 / num_iters_xprof))
+ return logdir, us_per_example
+
+ def run_report(self, run_benchmark, func, num_iters, execution_mode=None):
+ """Run and report benchmark results."""
+ total_time = run_benchmark(func, num_iters, execution_mode)
+ mean_us = total_time * 1e6 / num_iters
+ extras = {
+ "examples_per_sec": float("{0:.3f}".format(num_iters / total_time)),
+ "us_per_example": float("{0:.3f}".format(total_time * 1e6 / num_iters))
+ }
+
+ if flags.FLAGS.xprof:
+ suid = str(uuid.uuid4())
+ # Re-run with xprof and python trace.
+ num_iters_xprof = min(100, num_iters)
+ xprof_link, us_per_example = self.run_with_xprof(True, run_benchmark,
+ func, num_iters_xprof,
+ execution_mode, suid)
+ extras["xprof link with python trace"] = xprof_link
+ extras["us_per_example with xprof and python"] = us_per_example
+
+ # Re-run with xprof but no python trace.
+ xprof_link, us_per_example = self.run_with_xprof(False, run_benchmark,
+ func, num_iters_xprof,
+ execution_mode, suid)
+ extras["xprof link"] = xprof_link
+ extras["us_per_example with xprof"] = us_per_example
+
+ benchmark_name = self._get_benchmark_name()
+ self.report_benchmark(
+ iters=num_iters, wall_time=mean_us, extras=extras, name=benchmark_name)
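The metric arithmetic in `run_report` is simple: a total wall time over `num_iters` iterations is reported both as examples per second and as microseconds per example, rounded to three decimals. A standalone sketch of just that computation (plain Python; the helper name is hypothetical):

```python
def benchmark_extras(total_time_s, num_iters):
    # Same rounding style as run_report: format to 3 decimals, then re-parse.
    return {
        "examples_per_sec": float("{0:.3f}".format(num_iters / total_time_s)),
        "us_per_example": float("{0:.3f}".format(total_time_s * 1e6 / num_iters)),
    }

extras = benchmark_extras(0.5, 1000)
# 1000 iterations in 0.5 s -> 2000 examples/sec, 500 us per example.
```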
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/cancellation.py ADDED
@@ -0,0 +1,62 @@
+ # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Cancellation support for eager execution."""
+
+ from tensorflow.python import pywrap_tfe
+
+
+ class CancellationManager(object):
+ """A mechanism for cancelling blocking computation."""
+
+ __slots__ = ["_impl"]
+
+ def __init__(self):
+ self._impl = pywrap_tfe.TFE_NewCancellationManager()
+
+ @property
+ def is_cancelled(self):
+ """Returns `True` if `CancellationManager.start_cancel` has been called."""
+ return pywrap_tfe.TFE_CancellationManagerIsCancelled(self._impl)
+
+ def start_cancel(self):
+ """Cancels blocking operations that have been registered with this object."""
+ pywrap_tfe.TFE_CancellationManagerStartCancel(self._impl)
+
+ def get_cancelable_function(self, concrete_function):
+ def cancellable(*args, **kwargs):
+ with CancellationManagerContext(self):
+ return concrete_function(*args, **kwargs)
+ return cancellable
+
+ _active_context = None
+
+
+ def context():
+ return _active_context
+
+
+ class CancellationManagerContext:
+ """A Python context for wrapping a cancellable ConcreteFunction."""
+
+ def __init__(self, cancellation_manager):
+ self._cancellation_manager = cancellation_manager
+
+ def __enter__(self):
+ global _active_context
+ _active_context = self._cancellation_manager
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ global _active_context
+ _active_context = None
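`CancellationManagerContext` follows a common module-global "active context" pattern: entering the context publishes the manager, exiting clears it, and `context()` reads whichever one is current. A minimal plain-Python sketch of the pattern (names hypothetical, not the TensorFlow classes):

```python
_active = None  # module-global "currently active" object

def current():
    return _active

class Active:
    """Context manager that publishes its payload while the block runs."""
    def __init__(self, manager):
        self._manager = manager
    def __enter__(self):
        global _active
        _active = self._manager
    def __exit__(self, exc_type, exc_value, exc_tb):
        global _active
        _active = None

with Active("mgr"):
    inside = current()   # the manager is visible inside the block
outside = current()      # and cleared again afterwards
```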
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/context.py ADDED
The diff for this file is too large to render. See raw diff
 
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/core.py ADDED
@@ -0,0 +1,78 @@
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Experimental API for TensorFlow's "Eager" mode of execution."""
+
+ from tensorflow.python import pywrap_tfe
+ from tensorflow.python.framework import errors
+ from tensorflow.python.platform import tf_logging as logging
+
+ # Trace of execution and memory usage.
+ _active_trace = None
+
+
+ def _status_to_exception(status):
+ try:
+ error_class = errors.exception_type_from_error_code(status.code)
+ e = error_class(None, None, status.message, status.payloads)
+ logging.error("%s: %s" % (e.__class__.__name__, e))
+ return e
+ except KeyError:
+ e = errors.UnknownError(
+ None, None, status.message, status.code, status.payloads
+ )
+ logging.error("%s: %s" % (e.__class__.__name__, e))
+ return e
+
+
+ class _NotOkStatusException(Exception):
+ """Exception class to handle not ok Status."""
+
+ def __init__(self, message, code, payloads):
+ super(_NotOkStatusException, self).__init__()
+ self.message = message
+ self.code = code
+ self.payloads = payloads
+
+ def __str__(self):
+ e = _status_to_exception(self)
+ return "%s: %s" % (e.__class__.__name__, e)
+
+
+ pywrap_tfe.TFE_Py_RegisterExceptionClass(_NotOkStatusException)
+
+
+ class _FallbackException(Exception):
+ """Exception class to handle fallback from the fastpath.
+
+ The fastpath that we refer to here is the one implemented to reduce per-op
+ overheads (TFE_Py_FastPathExecute_C). If the conditions for executing the op
+ on the fastpath are not met, we fallback to a safer (and more complete)
+ slowpath, and this Exception is raised to signal that transition.
+ """
+ pass
+
+
+ class _SymbolicException(Exception):
+ """Exception class to handle use of symbolic tensors when executing eagerly.
+
+ `keras.Input()` creates symbolic tensors (in a FuncGraph managed by the
+ Keras backend) while in eager execution. This exception is used to
+ identify this case (raised in `convert_to_tensor`, causing generated op
+ functions to construct graphs instead of executing the kernel).
+ """
+ pass
+
+
+ pywrap_tfe.TFE_Py_RegisterFallbackExceptionClass(_FallbackException)
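`_status_to_exception` is a code-to-class dispatch: a status code is looked up in a registry of exception classes, with an "unknown" fallback when the lookup raises `KeyError`. An illustrative plain-Python sketch of the same shape (the registry and class names here are hypothetical, not the TensorFlow ones):

```python
class NotFoundError(Exception):
    pass

class UnknownError(Exception):
    pass

# Hypothetical registry; in TF this lookup is errors.exception_type_from_error_code.
_CODE_TO_EXCEPTION = {5: NotFoundError}

def status_to_exception(code, message):
    try:
        error_class = _CODE_TO_EXCEPTION[code]
        return error_class(message)
    except KeyError:
        # Unregistered codes fall back to a generic "unknown" error.
        return UnknownError(message)

e1 = status_to_exception(5, "missing resource")
e2 = status_to_exception(999, "mystery failure")
```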
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py ADDED
@@ -0,0 +1,28 @@
+ # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Supports old symbols supplied by this file while the code is refactored."""
+
+ # pylint:disable=unused-import,g-bad-import-order
+
+ # Config Options
+ from tensorflow.python.eager.polymorphic_function.eager_function_run import run_functions_eagerly
+ from tensorflow.python.eager.polymorphic_function.eager_function_run import functions_run_eagerly
+
+ # tf.function Classes
+ from tensorflow.python.eager.polymorphic_function.polymorphic_function import Function
+ from tensorflow.python.eager.polymorphic_function.polymorphic_function import function
+
+ # Private attributes
+ from tensorflow.python.eager.polymorphic_function.polymorphic_function import _tf_function_counter
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/execute.py ADDED
@@ -0,0 +1,329 @@
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Functions called by the generated code to execute an eager-mode op."""
+
+ from google.protobuf import text_format
+ from tensorflow.core.framework import tensor_pb2
+ from tensorflow.python import pywrap_tfe
+ from tensorflow.python.eager import core
+ from tensorflow.python.framework import dtypes
+ from tensorflow.python.framework import tensor_conversion_registry
+ from tensorflow.python.framework import tensor_shape
+ from tensorflow.python.types import core as core_types
+ from tensorflow.python.util import compat
+
+
+ def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
+ """Execute a TensorFlow operation.
+
+ Args:
+ op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
+ execute.
+ num_outputs: The number of outputs of the operation to fetch. (Explicitly
+ provided instead of being inferred for performance reasons).
+ inputs: A list of inputs to the operation. Each entry should be a Tensor, or
+ a value which can be passed to the Tensor constructor to create one.
+ attrs: A tuple with alternating string attr names and attr values for this
+ operation.
+ ctx: The value of context.context().
+ name: Customized name for the operation.
+
+ Returns:
+ List of output Tensor objects. The list is empty if there are no outputs
+
+ Raises:
+ An exception on error.
+ """
+ device_name = ctx.device_name
+ # pylint: disable=protected-access
+ try:
+ ctx.ensure_initialized()
+ tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
+ inputs, attrs, num_outputs)
+ except core._NotOkStatusException as e:
+ if name is not None:
+ e.message += " name: " + name
+ raise core._status_to_exception(e) from None
+ except TypeError as e:
+ keras_symbolic_tensors = [x for x in inputs if _is_keras_symbolic_tensor(x)]
+ if keras_symbolic_tensors:
+ raise core._SymbolicException(
+ "Inputs to eager execution function cannot be Keras symbolic "
+ "tensors, but found {}".format(keras_symbolic_tensors))
+ raise e
+ # pylint: enable=protected-access
+ return tensors
+
+
+ def execute_with_cancellation(op_name,
+ num_outputs,
+ inputs,
+ attrs,
+ ctx,
+ cancellation_manager,
+ name=None):
+ """Execute a TensorFlow operation.
+
+ Args:
+ op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
+ execute.
+ num_outputs: The number of outputs of the operation to fetch. (Explicitly
+ provided instead of being inferred for performance reasons).
+ inputs: A list of inputs to the operation. Each entry should be a Tensor, or
+ a value which can be passed to the Tensor constructor to create one.
+ attrs: A tuple with alternating string attr names and attr values for this
+ operation.
+ ctx: The value of context.context().
+ cancellation_manager: a `CancellationManager` object that can be used to
+ cancel the operation.
+ name: Customized name for the operation.
+
+ Returns:
+ List of output Tensor objects. The list is empty if there are no outputs
+
+ Raises:
+ An exception on error.
+ """
+ device_name = ctx.device_name
+ # pylint: disable=protected-access
+ try:
+ ctx.ensure_initialized()
+ tensors = pywrap_tfe.TFE_Py_ExecuteCancelable(ctx._handle, device_name,
+ op_name, inputs, attrs,
+ cancellation_manager._impl,
+ num_outputs)
+ except core._NotOkStatusException as e:
+ if name is not None:
+ e.message += " name: " + name
+ raise core._status_to_exception(e) from None
+ except TypeError as e:
+ keras_symbolic_tensors = [x for x in inputs if _is_keras_symbolic_tensor(x)]
+ if keras_symbolic_tensors:
+ raise core._SymbolicException(
+ "Inputs to eager execution function cannot be Keras symbolic "
+ "tensors, but found {}".format(keras_symbolic_tensors))
+ raise e
+ # pylint: enable=protected-access
+ return tensors
+
+
+ def execute_with_callbacks(op_name, num_outputs, inputs, attrs, ctx, name=None):
+ """Monkey-patch to execute to enable execution callbacks."""
+ tensors = quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
+ for callback in ctx.op_callbacks:
+ callback(op_name, tuple(inputs), attrs, tensors, name)
+
+ return tensors
+
+
+ execute = quick_execute
+
+
+ def must_record_gradient():
+ """Import backprop if you want gradients recorded."""
+ return False
+
+
+ def record_gradient(unused_op_name, unused_inputs, unused_attrs,
+ unused_outputs):
+ """Import backprop if you want gradients recorded."""
+ pass
+
+
+ def make_float(v, arg_name):
+ if not isinstance(v, compat.real_types):
+ raise TypeError("Expected float for argument '%s' not %s." %
+ (arg_name, repr(v)))
+ return float(v)
+
+
+ def make_int(v, arg_name):
+ if isinstance(v, str):
+ raise TypeError("Expected int for argument '%s' not %s." %
+ (arg_name, repr(v)))
+ try:
+ return int(v)
+ except (ValueError, TypeError):
+ raise TypeError("Expected int for argument '%s' not %s." %
+ (arg_name, repr(v)))
+
+
+ def make_str(v, arg_name):
+ if not isinstance(v, compat.bytes_or_text_types):
+ raise TypeError("Expected string for argument '%s' not %s." %
+ (arg_name, repr(v)))
+ return compat.as_bytes(v) # Convert unicode strings to bytes.
+
+
+ def make_bool(v, arg_name):
+ if not isinstance(v, bool):
+ raise TypeError("Expected bool for argument '%s' not %s." %
+ (arg_name, repr(v)))
+ return v
+
+
+ def make_type(v, arg_name):
+ try:
+ v = dtypes.as_dtype(v).base_dtype
+ except TypeError:
+ raise TypeError("Expected DataType for argument '%s' not %s." %
+ (arg_name, repr(v)))
+ i = v.as_datatype_enum
+ return i
+
+
+ def make_shape(v, arg_name):
+ """Convert v into a list."""
+ # Args:
+ # v: A TensorShapeProto, a list of ints, or a tensor_shape.TensorShape.
+ # arg_name: String, for error messages.
+
+ # Returns:
+ # None if the rank is unknown, otherwise a list of ints (or Nones in the
+ # position where the dimension is unknown).
+ try:
+ shape = tensor_shape.as_shape(v)
+ except TypeError as e:
+ raise TypeError("Error converting %s to a TensorShape: %s." % (arg_name, e))
+ except ValueError as e:
+ raise ValueError("Error converting %s to a TensorShape: %s." %
+ (arg_name, e))
+ if shape.ndims is None:
+ return None
+ else:
+ return shape.as_list()
+
+
+ def make_tensor(v, arg_name):
+ """Ensure v is a TensorProto."""
+ if isinstance(v, tensor_pb2.TensorProto):
+ return v
+ elif isinstance(v, str):
+ pb = tensor_pb2.TensorProto()
+ text_format.Merge(v, pb)
+ return pb
+ raise TypeError(
+ "Don't know how to convert %s to a TensorProto for argument '%s'." %
+ (repr(v), arg_name))
+
+
+ def args_to_matching_eager(l, ctx, allowed_dtypes, default_dtype=None):
+ """Convert sequence `l` to eager same-type Tensors."""
+ del ctx # Unused
+ if (not l) and (default_dtype is not None):
+ return default_dtype, [] # List is empty; assume default dtype.
+ for x in l:
+ if not isinstance(x, core_types.Value):
+ break
+ else: # note: intentional for-else
+ return l[0]._datatype_enum(), l # pylint: disable=protected-access
+
+ # Is some input already a Tensor with a dtype?
+ dtype = None
+ for t in l:
+ if isinstance(t, core_types.Value):
+ dtype = t.dtype
+ break
+
+ if dtype is None:
+ # Infer a dtype based on the first value, and use that dtype for the
+ # remaining values.
+
+ ret = []
+ for t in l:
+ tensor = None
+ # First see if we can get a valid dtype with the default conversion
+ # and see if it matches an allowed dtypes. Some ops like ConcatV2 may
+ # not list allowed dtypes, in which case we should skip this.
+ if dtype is None and allowed_dtypes:
+ tensor = tensor_conversion_registry.convert(t)
+ # If we did not match an allowed dtype, try again with the default
+ # dtype. This could be because we have an empty tensor and thus we
+ # picked the wrong type.
+ if tensor.dtype not in allowed_dtypes:
+ tensor = None
+
+ if tensor is None:
+ tensor = tensor_conversion_registry.convert(
+ t, dtype, preferred_dtype=default_dtype
+ )
+
+ ret.append(tensor)
+ if dtype is None:
+ dtype = tensor.dtype
+ else:
+ ret = [tensor_conversion_registry.convert(t, dtype) for t in l]
+
+ # TODO(slebedev): consider removing this as it leaks a Keras concept.
+ # pylint: disable=protected-access
+ keras_symbolic_tensors = [x for x in ret if _is_keras_symbolic_tensor(x)]
+ if keras_symbolic_tensors:
+ raise core._SymbolicException(
+ "Using symbolic output of a Keras layer during eager execution "
+ "{}".format(keras_symbolic_tensors))
+ # pylint: enable=protected-access
+ return dtype.as_datatype_enum, ret
+
+
+ def convert_to_mixed_eager_tensors(values, ctx):
+ del ctx # Unused
+ v = [tensor_conversion_registry.convert(t) for t in values]
+ types = [t._datatype_enum() for t in v] # pylint: disable=protected-access
+ return types, v
+
+
+ def args_to_mixed_eager_tensors(lists, ctx):
+ """Converts a list of same-length lists of values to eager tensors."""
+ del ctx # Unused
+ assert len(lists) > 1
+
+ # Generate an error if len(lists[i]) is not the same for all i.
+ lists_ret = [[]]
+ for l in lists[1:]:
+ if len(l) != len(lists[0]):
+ raise ValueError(
+ "Expected list arguments to be the same length: %d != %d (%r vs. %r)."
+ % (len(lists[0]), len(l), lists[0], l))
+ lists_ret.append([])
+
+ # Convert the first element of each list first, then the second element, etc.
+ types = []
+ for i in range(len(lists[0])):
+ dtype = None
+ # If any list has a Tensor, use that dtype
+ for l in lists:
+ if isinstance(l[i], core_types.Value):
+ dtype = l[i].dtype
+ break
+ if dtype is None:
+ # Convert the first one and use its dtype.
+ lists_ret[0].append(tensor_conversion_registry.convert(lists[0][i]))
+ dtype = lists_ret[0][i].dtype
+ for j in range(1, len(lists)):
+ lists_ret[j].append(
+ tensor_conversion_registry.convert(lists[j][i], dtype=dtype)
+ )
+ else:
+ # Convert everything to the found dtype.
+ for j in range(len(lists)):
+ lists_ret[j].append(
+ tensor_conversion_registry.convert(lists[j][i], dtype=dtype)
+ )
+ types.append(dtype.as_datatype_enum)
+ return types, lists_ret
+
+
+ def _is_keras_symbolic_tensor(x):
+ return hasattr(x, "graph") and getattr(x.graph, "name", None) == "keras_graph"
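The dtype-matching rule in `args_to_matching_eager` is: if any input already carries a dtype, every other input is converted to it; otherwise the dtype inferred for the first input (or the default) wins. A simplified plain-Python sketch of that rule, with type names standing in for real dtypes (names are hypothetical):

```python
def match_dtypes(values, default_dtype=None):
    """values: list of (dtype_or_None, payload); returns (dtype, converted list)."""
    if not values and default_dtype is not None:
        return default_dtype, []  # empty list: assume the default dtype
    # An input that already has a dtype pins the choice for everyone.
    dtype = next((d for d, _ in values if d is not None), None)
    if dtype is None:
        # Otherwise infer from the first value (Python type name as a stand-in).
        dtype = default_dtype or type(values[0][1]).__name__
    # "Convert" every payload by tagging it with the chosen dtype.
    return dtype, [(dtype, payload) for _, payload in values]

dtype, out = match_dtypes([(None, 1), ("float32", 2.0), (None, 3)])
# The explicitly typed element pins the dtype for the whole sequence.
```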
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/executor.py ADDED
@@ -0,0 +1,77 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Executor for eager execution."""
+
+ from tensorflow.python import pywrap_tfe
+
+
+ class Executor(object):
+   """A class for handling eager execution.
+
+   The default behavior for asynchronous execution is to serialize all ops on
+   a single thread. Having different `Executor` objects in different threads
+   enables executing ops asynchronously in parallel:
+
+   ```python
+   def thread_function():
+     executor = executor.Executor(enable_async=True)
+     context.set_executor(executor)
+
+   a = threading.Thread(target=thread_function)
+   a.start()
+   b = threading.Thread(target=thread_function)
+   b.start()
+   ```
+   """
+
+   __slots__ = ["_handle"]
+
+   def __init__(self, handle):
+     self._handle = handle
+
+   def __del__(self):
+     try:
+       self.wait()
+       pywrap_tfe.TFE_DeleteExecutor(self._handle)
+     except TypeError:
+       # Suppress some exceptions, mainly for the case when we're running on
+       # module deletion. Things that can go wrong include the pywrap module
+       # already being unloaded, self._handle no longer being
+       # valid, and so on. Printing warnings in these cases is silly
+       # (exceptions raised from __del__ are printed as warnings to stderr).
+       pass  # 'NoneType' object is not callable when the handle has been
+       # partially unloaded.
+
+   def is_async(self):
+     return pywrap_tfe.TFE_ExecutorIsAsync(self._handle)
+
+   def handle(self):
+     return self._handle
+
+   def wait(self):
+     """Waits for ops dispatched in this executor to finish."""
+     pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
+
+   def clear_error(self):
+     """Clears errors raised in this executor during execution."""
+     pywrap_tfe.TFE_ExecutorClearError(self._handle)
+
+
+ def new_executor(enable_async,
+                  enable_streaming_enqueue=True,
+                  in_flight_nodes_limit=0):
+   handle = pywrap_tfe.TFE_NewExecutor(enable_async, enable_streaming_enqueue,
+                                       in_flight_nodes_limit)
+   return Executor(handle)
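The one-executor-per-thread pattern in the class docstring can be sketched without TensorFlow. In this minimal, purely illustrative sketch, `Executor` and `set_executor` are stand-ins for the real `executor.Executor` and `context.set_executor`, and a `threading.local` models the per-thread eager context state:

```python
import threading

_thread_local = threading.local()  # models per-thread eager context state


class Executor:
    """Stand-in for the eager Executor: collects ops dispatched to it."""

    def __init__(self, enable_async=True):
        self.enable_async = enable_async
        self.dispatched = []


def set_executor(executor):
    # Stand-in for context.set_executor: bind an executor to this thread.
    _thread_local.executor = executor


results = {}


def thread_function(idx):
    set_executor(Executor(enable_async=True))
    _thread_local.executor.dispatched.append("op-%d" % idx)
    results[idx] = _thread_local.executor.dispatched


threads = [threading.Thread(target=thread_function, args=(i,))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each thread dispatched to its own executor, so the per-executor op
# queues never mix, which is what permits parallel async execution.
```

Because each thread binds its own executor, ops queued by one thread are serialized only against that thread's own work.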
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/forwardprop.py ADDED
@@ -0,0 +1,487 @@
+ # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Utilities for forward-mode automatic differentiation."""
+
+ import functools
+ import threading
+
+ from tensorflow.core.function.polymorphism import function_cache
+ from tensorflow.python import pywrap_tfe
+ from tensorflow.python.eager import backprop
+ from tensorflow.python.eager import backprop_util
+ from tensorflow.python.eager import execute
+ from tensorflow.python.eager import forwardprop_util
+ from tensorflow.python.eager.polymorphic_function import tracing_compilation
+ from tensorflow.python.framework import ops
+ from tensorflow.python.framework import tensor_shape
+ from tensorflow.python.ops import array_ops
+ from tensorflow.python.ops.parallel_for import control_flow_ops
+ from tensorflow.python.ops.unconnected_gradients import UnconnectedGradients
+ from tensorflow.python.platform import tf_logging as logging
+ from tensorflow.python.util import nest
+ from tensorflow.python.util.tf_export import tf_export
+
+
+ # Dictionary mapping from op names to special-cased jvp functions. Otherwise
+ # backward functions are transposed on the tape.
+ _SPECIAL_CASES = {}
+
+
+ def _identity_jvp(attr_tuple, inputs, outputs, tangents):
+   # Special-cased mostly for resource handles, where creating ones Tensors from
+   # handle data for transposing the backward function on the tape is error-prone
+   # (even if we get good handle data, partially defined shapes are an issue).
+   del attr_tuple, inputs, outputs
+   return [array_ops.identity(t) for t in tangents]
+
+
+ _SPECIAL_CASES["Identity"] = _identity_jvp
+
+
+ def _read_variable_jvp(attr_tuple, inputs, outputs, tangents):
+   # Like for Identity, this special case means we don't need to create
+   # variable-shaped Tensors from resource handles.
+   del attr_tuple, inputs, outputs
+   return [array_ops.identity(t) for t in tangents]
+
+
+ _SPECIAL_CASES["ReadVariableOp"] = _read_variable_jvp
+
+
+ _TRACE_COUNT_CONSISTENCY_LOCK = threading.Lock()
+ # Map from op names to number of traces of _jvp_helper. Used to cap the number
+ # of traces due to shape differences while still specializing where possible.
+ _TRACE_COUNT = {}
+
+
+ def _jvp_helper(op_name, attr_tuple, inputs, outputs, tangents):
+   """Computes a Jacobian-vector product for an op.
+
+   Note that this function would be wasteful if executed eagerly. It runs the
+   backward gradient function and throws away the result just to record its
+   operations on a GradientTape. These unused ops are pruned away when this
+   function is traced.
+
+   Args:
+     op_name: A string, the type of operation being executed.
+     attr_tuple: Attributes of the operation.
+     inputs: A flat list of input Tensors to the operation.
+     outputs: A flat list of output Tensors from the operation.
+     tangents: A flat list of Tensors, same shape as `inputs`.
+
+   Returns:
+     A flat list of tangents corresponding to `outputs`.
+   """
+   with _TRACE_COUNT_CONSISTENCY_LOCK:
+     # Just make sure writes don't clobber each other's increments; reads in
+     # _jvp_dispatch do not lock.
+     _TRACE_COUNT[op_name] = _TRACE_COUNT.get(op_name, 0) + 1
+
+   special_case = _SPECIAL_CASES.get(op_name, None)
+   if special_case is not None:
+     return special_case(attr_tuple, inputs, outputs, tangents)
+   if not outputs:
+     # tape.gradients([], inputs) doesn't make much sense
+     return []
+   # Generally inner GradientTapes won't function while outer accumulators are
+   # recording. We temporarily reset forwardprop state to allow GradientTapes to
+   # function here.
+   with forwardprop_util.push_forwardprop_state():
+     trainable_inputs = []
+     trainable_indices = []
+     nontrivial_tangents = []
+     for input_index, tensor in enumerate(inputs):
+       if backprop_util.IsTrainable(tensor):
+         trainable_inputs.append(tensor)
+         trainable_indices.append(input_index)
+         nontrivial_tangents.append(tangents[input_index])
+
+     with backprop.GradientTape() as transpose_tape:
+       with backprop.GradientTape() as backfunc_tape:
+         backfunc_tape.watch(trainable_inputs)
+         execute.record_gradient(op_name, inputs, attr_tuple, outputs)
+
+       forwardprop_aids = []
+       trainable_outputs = []
+       nontrivial_output_indices = []
+       for output_index, output in enumerate(outputs):
+         if backprop_util.IsTrainable(output):
+           forwardprop_aids.append(
+               array_ops.ones_like(output, name="unused_forwardprop_aid"))
+           trainable_outputs.append(output)
+           nontrivial_output_indices.append(output_index)
+
+       transpose_tape.watch(forwardprop_aids)
+       grads = backfunc_tape.gradient(
+           trainable_outputs,
+           trainable_inputs,
+           forwardprop_aids,
+           unconnected_gradients=UnconnectedGradients.ZERO)
+     nontrivial_output_tangents = transpose_tape.gradient(
+         grads, forwardprop_aids, output_gradients=nontrivial_tangents)
+     output_tangents = [None] * len(outputs)
+     for index, tangent in zip(nontrivial_output_indices,
+                               nontrivial_output_tangents):
+       output_tangents[index] = tangent
+     return output_tangents
+
+
+ def _jvp_helper_wrapper(op_name, attr_tuple, inputs, outputs, tangents,
+                         use_batch):
+   """Computes a batch of Jacobian-vector products for an op.
+
+   Args:
+     op_name: A string, the type of operation being executed.
+     attr_tuple: Attributes of the operation.
+     inputs: A flat list of input Tensors to the operation.
+     outputs: A flat list of output Tensors from the operation.
+     tangents: A flat list of Tensors, compatible with shape `[None] +
+       input_shape`.
+     use_batch: A bool, True to vectorize over a batch of tangents of shape
+       `[None] + input_shape`.
+
+   Returns:
+     A flat list of tangents compatible with `outputs`
+     or `[None] + output_shape`.
+
+   Raises:
+     ValueError: if tangent shapes are not compatible with input shapes.
+   """
+   if use_batch:
+     for primal, tangent in zip(inputs, tangents):
+       if not tangent.shape.is_compatible_with([None] + primal.shape):
+         raise ValueError("Tangent {} was expected to be of shape "
+                          "{} but is instead of shape {}".format(
+                              tangent, [None] + primal.shape, tangent.shape))
+
+     return control_flow_ops.vectorized_map(
+         functools.partial(_jvp_helper, op_name, attr_tuple, inputs, outputs),
+         tangents,
+     )
+   return _jvp_helper(op_name, attr_tuple, inputs, outputs, tangents)
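The double-tape construction in `_jvp_helper` exploits the fact that a backward (VJP) function is linear in its cotangent argument, so differentiating it once more transposes it back into a forward (JVP) map. A minimal pure-Python sketch of that identity for a linear op `f(x) = J @ x`, using plain lists instead of Tensors and recovering the transpose numerically rather than with a second tape:

```python
# f(x) = J @ x for a fixed 3x2 matrix J; the true JVP of tangent v is J @ v.
J = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]


def matvec(m, x):
    # Plain matrix-vector product.
    return [sum(row[i] * x[i] for i in range(len(x))) for row in m]


def vjp(g):
    # The backward function a gradient tape would record: g -> J^T @ g.
    return [sum(J[k][i] * g[k] for k in range(len(J))) for i in range(len(J[0]))]


v = [0.5, -1.0]  # input tangent
# Transpose trick: vjp is linear, so (vjp^T v)_k = <vjp(e_k), v>. The basis
# probes e_k play roughly the role of the "unused_forwardprop_aid" tensors,
# which the tape differentiates symbolically instead of numerically.
aids = [[1.0 if i == k else 0.0 for i in range(len(J))] for k in range(len(J))]
jvp = [sum(vjp(e)[i] * v[i] for i in range(len(v))) for e in aids]
# jvp now equals matvec(J, v), the forward-mode Jacobian-vector product.
```

This is only a sketch of the linear-algebra identity; the real code never materializes the Jacobian and applies the same trick op by op on the tape.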
+
+
+ # TODO(allenl): With reduce_retracing, gradients which rely on static
+ # shape information are underspecialized. We may want hand-written forward
+ # implementations, or a more satisfying story about how we re-specialize
+ # gradients which were traced with relaxed shapes (e.g. use conds instead of
+ # trace-time Python logic).
+ #
+ # Using function.defun rather than def_function.function avoids
+ # tf.config.run_functions_eagerly(True). `_jvp_helper` doesn't successfully run
+ # eagerly (infinite recursion), and even if it did it would use extra memory and
+ # run unnecessary computation. The function does not create variables, so the
+ # two symbols are otherwise equivalent.
+ _jvp_function_cache = function_cache.FunctionCache()
+ _jvp_relaxed_config = tracing_compilation.TracingOptions(
+     _jvp_helper_wrapper,
+     name="_jvp_relaxed_shapes",
+     reduce_retracing=True,
+     function_cache=_jvp_function_cache,
+ )
+
+ _jvp_exact_config = tracing_compilation.TracingOptions(
+     _jvp_helper_wrapper,
+     name="_jvp_exact_shapes",
+     reduce_retracing=False,
+     function_cache=_jvp_function_cache,
+ )
+
+ # The maximum number of exact-shape traces to perform for a single op before
+ # switching to shape relaxation.
+ _TRACE_COUNT_LIMIT = 32
+
+
+ def _jvp_dispatch(op_name,
+                   attr_tuple,
+                   inputs,
+                   outputs,
+                   tangents,
+                   use_batch=False):
+   """Determine which forwardprop function to call."""
+   # Note that this _TRACE_COUNT read races with writes. That's fine, it just
+   # means we may trace a few more exact shapes before moving on to relaxation.
+   if _TRACE_COUNT.get(op_name, 0) < _TRACE_COUNT_LIMIT:
+     config = _jvp_exact_config
+   else:
+     config = _jvp_relaxed_config
+   return tracing_compilation.call_function(
+       (op_name, attr_tuple, inputs, outputs, tangents, use_batch),
+       tracing_options=config,
+   )
+
+
+ pywrap_tfe.TFE_Py_RegisterJVPFunction(_jvp_dispatch)
+
+ @tf_export("autodiff.ForwardAccumulator", v1=[])
+ class ForwardAccumulator():
+   """Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff.
+
+   Compare to `tf.GradientTape` which computes vector-Jacobian products ("VJP"s)
+   using reverse-mode autodiff (backprop). Reverse mode is more attractive when
+   computing gradients of a scalar-valued function with respect to many inputs
+   (e.g. a neural network with many parameters and a scalar loss). Forward mode
+   works best on functions with many outputs and few inputs. Since it does not
+   hold on to intermediate activations, it is much more memory efficient than
+   backprop where it is applicable.
+
+   Consider a simple linear regression:
+
+   >>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
+   >>> targets = tf.constant([[1.], [-1.]])
+   >>> dense = tf.keras.layers.Dense(1)
+   >>> dense.build([None, 2])
+   >>> with tf.autodiff.ForwardAccumulator(
+   ...    primals=dense.kernel,
+   ...    tangents=tf.constant([[1.], [0.]])) as acc:
+   ...   loss = tf.reduce_sum((dense(x) - targets) ** 2.)
+   >>> acc.jvp(loss)
+   <tf.Tensor: shape=(), dtype=float32, numpy=...>
+
+   The example has two variables containing parameters, `dense.kernel` (2
+   parameters) and `dense.bias` (1 parameter). Considering the training data `x`
+   as a constant, this means the Jacobian matrix for the function mapping from
+   parameters to loss has one row and three columns.
+
+   With forwardprop, we specify a length-three vector in advance which multiplies
+   the Jacobian. The `primals` constructor argument is the parameter (a
+   `tf.Tensor` or `tf.Variable`) we're specifying a vector for, and the
+   `tangents` argument is the "vector" in Jacobian-vector product. If our goal is
+   to compute the entire Jacobian matrix, forwardprop computes one column at a
+   time while backprop computes one row at a time. Since the Jacobian in the
+   linear regression example has only one row, backprop requires fewer
+   invocations:
+
+   >>> x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
+   >>> targets = tf.constant([[1.], [-1.]])
+   >>> dense = tf.keras.layers.Dense(1)
+   >>> dense.build([None, 2])
+   >>> loss_fn = lambda: tf.reduce_sum((dense(x) - targets) ** 2.)
+   >>> kernel_fprop = []
+   >>> with tf.autodiff.ForwardAccumulator(
+   ...     dense.kernel, tf.constant([[1.], [0.]])) as acc:
+   ...   kernel_fprop.append(acc.jvp(loss_fn()))
+   >>> with tf.autodiff.ForwardAccumulator(
+   ...     dense.kernel, tf.constant([[0.], [1.]])) as acc:
+   ...   kernel_fprop.append(acc.jvp(loss_fn()))
+   >>> with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc:
+   ...   bias_fprop = acc.jvp(loss_fn())
+   >>> with tf.GradientTape() as tape:
+   ...   loss = loss_fn()
+   >>> kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias))
+   >>> np.testing.assert_allclose(
+   ...     kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis])
+   >>> np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis])
+
+   Implicit in the `tape.gradient` call is a length-one vector which
+   left-multiplies the Jacobian, a vector-Jacobian product.
+
+   `ForwardAccumulator` maintains JVPs corresponding to the primal tensors it is
+   watching, derived from the original `primals` specified in the constructor. As
+   soon as a primal tensor is deleted, `ForwardAccumulator` deletes the
+   corresponding JVP.
+
+   `acc.jvp(x)` retrieves `acc`'s JVP corresponding to the primal tensor `x`. It
+   does not perform any computation. `acc.jvp` calls can be repeated as long as
+   `acc` is accessible, whether the context manager is active or not. New JVPs
+   are only computed while the context manager is active.
+
+   Note that `ForwardAccumulator`s are always applied in the order their context
+   managers were entered, so inner accumulators will not see JVP computation from
+   outer accumulators. Take higher-order JVPs from outer accumulators:
+
+   >>> primal = tf.constant(1.1)
+   >>> with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer:
+   ...   with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner:
+   ...     primal_out = primal ** tf.constant(3.5)
+   >>> inner_jvp = inner.jvp(primal_out)
+   >>> inner_jvp  # 3.5 * 1.1 ** 2.5
+   <tf.Tensor: shape=(), dtype=float32, numpy=4.4417057>
+   >>> outer.jvp(inner_jvp)  # 3.5 * 2.5 * 1.1 ** 1.5
+   <tf.Tensor: shape=(), dtype=float32, numpy=10.094786>
+
+   Reversing the collection in the last line to instead retrieve
+   `inner.jvp(outer.jvp(primal_out))` will not work.
+
+   Strict nesting also applies to combinations of `ForwardAccumulator` and
+   `tf.GradientTape`. More deeply nested `GradientTape` objects will ignore the
+   products of outer `ForwardAccumulator` objects. This allows (for example)
+   memory-efficient forward-over-backward computation of Hessian-vector products,
+   where the inner `GradientTape` would otherwise hold on to all intermediate
+   JVPs:
+
+   >>> v = tf.Variable([1., 2.])
+   >>> with tf.autodiff.ForwardAccumulator(
+   ...     v,
+   ...     # The "vector" in Hessian-vector product.
+   ...     tf.constant([1., 0.])) as acc:
+   ...   with tf.GradientTape() as tape:
+   ...     y = tf.reduce_sum(v ** 3.)
+   ...   backward = tape.gradient(y, v)
+   >>> backward  # gradient from backprop
+   <tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 3., 12.], dtype=float32)>
+   >>> acc.jvp(backward)  # forward-over-backward Hessian-vector product
+   <tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 0.], dtype=float32)>
+   """
+
+   def __init__(self, primals, tangents):
+     """Specify tensors to watch and their Jacobian-vector products.
+
+     Mathematically, `tangents` is a vector right-multiplying the Jacobian matrix
+     (a Jacobian-vector product) for the function computed while this accumulator
+     is active. Since JVPs are computed in forward mode as the computation
+     happens, this vector must be supplied in advance.
+
+     Listing a single tensor multiple times in `primals` raises an
+     exception. Excluding a tensor from `primals` is equivalent to watching it
+     with a tangent tensor of zeros.
+
+     Args:
+       primals: A tensor or nested structure of tensors to watch.
+       tangents: A tensor or nested structure of tensors, with the same nesting
+         structure as `primals`, with each element being a vector with the same
+         size as the corresponding primal element.
+
+     Raises:
+       ValueError: If the same tensor or variable is specified multiple times in
+         `primals`.
+     """
+     self._accumulator = pywrap_tfe.TFE_Py_ForwardAccumulatorNew(False)
+     self._recording = False
+     primal_ids = set()
+     for primal in nest.flatten(primals):
+       if id(primal) in primal_ids:
+         raise ValueError(
+             "Tensor {} was specified as a primal multiple times. This may "
+             "indicate an error. If it was intended, please sum the "
+             "corresponding tangents.".format(primal))
+       primal_ids.add(id(primal))
+     self._watch(primals, tangents)
+
+   def __enter__(self):
+     self._push_accumulator()
+     return self
+
+   def __exit__(self, typ, value, traceback):
+     if self._recording:
+       self._pop_accumulator()
+
+   def _push_accumulator(self):
+     if self._recording:
+       raise ValueError("Accumulator is already recording.")
+     pywrap_tfe.TFE_Py_ForwardAccumulatorSetAdd(self._accumulator)
+     self._recording = True
+
+   def _pop_accumulator(self):
+     if not self._recording:
+       raise ValueError("Accumulator is not recording.")
+     pywrap_tfe.TFE_Py_ForwardAccumulatorSetRemove(self._accumulator)
+     self._recording = False
+
+   def _watch(self, primals, tangents):
+     """Ensures that `primals` are being traced by this accumulator.
+
+     Mathematically, `tangents` is a vector right-multiplying the Jacobian matrix
+     (a Jacobian-vector product) for the function computed while this accumulator
+     is active. Since JVPs are computed in forward mode as the computation
+     happens, this vector must be supplied in advance.
+
+     Watching a single tensor multiple times sums each of its `tangents`. Any
+     un-watched tensor has zeros for its tangent vector.
+
+     Args:
+       primals: A Tensor or list of Tensors.
+       tangents: A Tensor or list of Tensors matching `primals`.
+     """
+
+     def _watch(primal, tangent):
+       if not primal.dtype.is_floating:
+         logging.log_first_n(
+             logging.WARN, "The dtype of the watched primal must be "
+             "floating (e.g. tf.float32), got %r", 5, primal.dtype)
+       tangent = ops.convert_to_tensor(tangent, dtype=primal.dtype)
+       if hasattr(primal, "handle"):
+         # Run convert_to_tensor to get the captured handle from whichever
+         # function we're running if necessary.
+         primal = ops.convert_to_tensor(primal.handle)
+       pywrap_tfe.TFE_Py_ForwardAccumulatorWatch(self._accumulator, primal,
+                                                 tangent)
+
+     nest.map_structure(_watch, primals, tangents)
+
+   def jvp(self, primals, unconnected_gradients=UnconnectedGradients.NONE):
+     """Fetches the Jacobian-vector product computed for `primals`.
+
+     Note that this method performs no computation, and simply looks up a JVP
+     that was already computed (unlike backprop using a `tf.GradientTape`, where
+     the computation happens on the call to `tape.gradient`).
+
+     Args:
+       primals: A watched Tensor or structure of Tensors to fetch the JVPs for.
+       unconnected_gradients: A value which can either hold 'none' or 'zero' and
+         alters the value which will be returned if no JVP was computed for
+         `primals`. The possible values and effects are detailed in
+         'tf.UnconnectedGradients' and it defaults to 'none'.
+
+     Returns:
+       Tensors with the same shapes and dtypes as `primals`, or None if no JVP
+       is available.
+     """
+     unconnected_gradients = UnconnectedGradients(unconnected_gradients)
+     if self._accumulator is None:
+       raise ValueError("Called jvp() without first tracing anything.")
+
+     def _fetch_jvp(tensor):
+       if hasattr(tensor, "handle"):
+         unwrapped_tensor = ops.convert_to_tensor(tensor.handle)
+       else:
+         unwrapped_tensor = tensor
+       result = pywrap_tfe.TFE_Py_ForwardAccumulatorJVP(self._accumulator,
+                                                        unwrapped_tensor)
+       if result is None and unconnected_gradients == UnconnectedGradients.ZERO:
+         result = array_ops.zeros_like(tensor)
+       return result
+
+     return nest.map_structure(_fetch_jvp, primals)
+
+   @classmethod
+   def _batch_accumulator(cls, primals, tangents):
+     """Factory constructor to test accumulator on batches of tangents.
+
+     Args:
+       primals: A tensor or nested structure of tensors to watch.
+       tangents: A tensor or nested structure of tensors, with the same nesting
+         structure as `primals`, with each element being a vector with compatible
+         shape `[None] + primal.shape` of the corresponding primal element.
+
+     Returns:
+       A batch accumulator object.
+     """
+     acc = super(ForwardAccumulator, cls).__new__(cls, primals, tangents)
+     acc._recording = False
+     acc._accumulator = pywrap_tfe.TFE_Py_ForwardAccumulatorNew(True)
+     primal_ids = set()
+     for primal, tangent in zip(nest.flatten(primals), nest.flatten(tangents)):
+       tangent.shape.assert_is_compatible_with(
+           tensor_shape.TensorShape([None]) + primal.shape)
+       if id(primal) in primal_ids:
+         raise ValueError(
+             "Tensor {} was specified as a primal multiple times. This may "
+             "indicate an error. If it was intended, please sum the "
+             "corresponding tangents.".format(primal))
+       primal_ids.add(id(primal))
+     acc._watch(primals, tangents)
+     return acc
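Conceptually, a `ForwardAccumulator` carries a tangent alongside each watched primal, much like a dual number. A toy scalar sketch (unrelated to the real accumulator internals) that mirrors the docstring's cubing example with repeated multiplication:

```python
class Dual:
    """A primal value paired with its tangent (a toy forward-mode scalar)."""

    def __init__(self, primal, tangent=0.0):
        self.primal = primal
        self.tangent = tangent

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal + other.primal, self.tangent + other.tangent)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: the tangent is carried forward with the computation,
        # so no intermediate activations need to be stored.
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)

    __rmul__ = __mul__


x = Dual(1.1, 1.0)  # "watch" x with tangent 1.0
y = x * x * x       # dy/dx = 3 * x**2, computed alongside the primal
```

The JVP is available as soon as `y` is, which is why `acc.jvp` is a lookup rather than a computation.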
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/forwardprop_util.py ADDED
@@ -0,0 +1,74 @@
+ # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Utilities for managing forward accumulators.
+
+ A separate file from forwardprop.py so that functions can use these utilities.
+ """
+
+ import collections
+ import contextlib
+
+ from tensorflow.python import pywrap_tfe
+
+
+ class TangentInfo(
+     collections.namedtuple("TangentInfo", ["indices", "tangents"])):
+   """Packed forward accumulator state. The return value of `pack_tangents`."""
+
+   def __new__(cls, indices=None, tangents=None):
+     if indices is None:
+       indices = ()
+     if tangents is None:
+       tangents = []
+     return super(TangentInfo, cls).__new__(cls, indices, tangents)
+
+
+ def pack_tangents(tensors):
+   """Packs forward accumulator state into a TangentInfo tuple.
+
+   Args:
+     tensors: A flat list of Tensors to pack forward accumulator state for.
+
+   Returns:
+     A tuple of (indices, tangents):
+       indices: A sequence of sequences of two-element tuples. Each forward
+         accumulator is represented as a sequence of tuples with (primal_index,
+         jvp_index). Both integers index into the concatenated `tensors + jvps`
+         array.
+       tangents: A flat list of Tensors. Best interpreted as a sequence to be
+         appended to `tensors`.
+   """
+   return TangentInfo(*pywrap_tfe.TFE_Py_PackJVPs(tensors))
+
+
+ @contextlib.contextmanager
+ def push_forwardprop_state():
+   """Temporarily push or pop transient state for accumulators in the active set.
+
+   Allows an accumulator which is currently processing an operation to
+   temporarily reset its state. This is useful when building forwardprop versions
+   of functions, where an accumulator will trigger function building and then
+   must process captured symbolic tensors while building it. Without pushing and
+   popping, accumulators ignore operations executed as a direct result of their
+   own jvp computations.
+
+   Yields:
+     None (used for its side effect).
+   """
+   try:
+     pywrap_tfe.TFE_Py_ForwardAccumulatorPushState()
+     yield
+   finally:
+     pywrap_tfe.TFE_Py_ForwardAccumulatorPopState()
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/function.py ADDED
@@ -0,0 +1,37 @@
+ # Copyright 2022 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Supports old symbols supplied by this file while the code is refactored."""
+
+ # pylint:disable=unused-import,g-bad-import-order
+
+ # TODO(b/243822285): Reduce this list as much as possible.
+ # Constants
+ from tensorflow.python.eager.polymorphic_function.concrete_function import _BACKWARD_PREFIX
+ from tensorflow.python.eager.polymorphic_function.concrete_function import _FORWARD_PREFIX
+ from tensorflow.python.eager.polymorphic_function.concrete_function import _INFERENCE_PREFIX
+
+ # Function Classes
+ from tensorflow.python.eager.polymorphic_function.concrete_function import ConcreteFunction
+ from tensorflow.python.eager.polymorphic_function.atomic_function import from_func_graph
+ from tensorflow.python.eager.polymorphic_function.atomic_function import AtomicFunction
+
+ # Utilities
+ from tensorflow.python.eager.polymorphic_function.tf_method_target import TfMethodTarget
+ from tensorflow.python.eager.polymorphic_function.concrete_function import _inference_name
+
+ # TODO(b/244360504): Remove in favor of graph transformation API.
+ # QUARANTINED - Function Callback Modification API
+ from tensorflow.python.eager.polymorphic_function.transform import FUNC_GRAPH_TRANSFORMS
+ from tensorflow.python.eager.polymorphic_function.transform import CONCRETE_FUNCTION_CALLBACKS
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/graph_only_ops.py ADDED
@@ -0,0 +1,46 @@
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # ==============================================================================
+ """Graph-only versions of a few op functions, for internal use only."""
+
+ # Must be separate from array_ops to avoid a cyclic dependency.
+
+ from tensorflow.core.framework import attr_value_pb2
+ from tensorflow.python.framework import op_callbacks
+ from tensorflow.python.framework import ops
+ from tensorflow.python.framework import tensor_shape
+
+
+ def graph_placeholder(dtype, shape, name=None):
+   """Graph-only version of tf.compat.v1.placeholder(), for internal use only."""
+   dtype = dtype.base_dtype
+   dtype_value = attr_value_pb2.AttrValue(type=dtype.as_datatype_enum)
+   if isinstance(shape, (list, tuple)):
+     shape = tensor_shape.TensorShape(shape)
+   shape = attr_value_pb2.AttrValue(shape=shape.as_proto())
+   g = ops.get_default_graph()
+   attrs = {"dtype": dtype_value, "shape": shape}
+   op = g._create_op_internal(  # pylint: disable=protected-access
+       "Placeholder", [], [dtype], input_types=[],
+       attrs=attrs, name=name)
+   result, = op.outputs
+   if op_callbacks.should_invoke_op_callbacks():
+     # TODO(b/147670703): Remove this `if` block once the special-op creation
+     # code paths are unified.
+     callback_outputs = op_callbacks.invoke_op_callbacks(
+         "Placeholder", tuple(), attrs, tuple(op.outputs),
+         op_name=name, graph=g)
+     if callback_outputs is not None:
+       result, = callback_outputs
+   return result
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/imperative_grad.py ADDED
@@ -0,0 +1,73 @@
1
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ # ==============================================================================
15
+ """Code for backpropagation using the tape utilities."""
16
+
17
+ import collections
18
+
19
+ from tensorflow.python import pywrap_tfe
20
+ from tensorflow.python.ops.unconnected_gradients import UnconnectedGradients
21
+ from tensorflow.python.util import compat
22
+
23
+ VSpace = collections.namedtuple("VSpace", [
24
+ "aggregate_fn", "num_elements_fn", "zeros_fn", "ones_fn",
25
+ "zeros_like_fn", "ones_like_fn", "graph_shape_fn"
26
+ ])
27
+
28
+
29
+ def imperative_grad(tape,
30
+ target,
31
+ sources,
32
+ output_gradients=None,
33
+ sources_raw=None,
34
+ unconnected_gradients=UnconnectedGradients.NONE):
35
+ """Computes gradients from the imperatively defined tape on top of the stack.
36
+
37
+ Works by filtering the tape, computing how many downstream usages are of each
38
+ tensor and entry, and repeatedly applying backward functions until we have
39
+ gradients for all sources.
40
+
41
+ Args:
42
+ tape: the gradient tape which stores the trace.
43
+ target: either a Tensor or list of Tensors to be differentiated.
44
+ sources: list of Tensors for which we want gradients
45
+ output_gradients: if not None, a list of gradient provided for each Target,
46
+ or None if we are to use the target's computed downstream gradient.
47
+ sources_raw: if not None, a list of the source python objects from which the
48
+ sources were generated. Should have the same length as sources. Only needs
49
+ to be populated if unconnected_gradients is 'zero'.
50
+ unconnected_gradients: determines the value returned if the target and
51
+ sources are unconnected. When 'none' the value returned is None wheras when
52
+ 'zero' a zero tensor in the same shape as the sources is returned.
53
+
54
+ Returns:
55
+ the gradient wrt each of the sources.
56
+
57
+ Raises:
58
+ ValueError: if the arguments are invalid.
59
+ RuntimeError: if something goes wrong.
60
+ """
61
+ try:
62
+ unconnected_gradients = UnconnectedGradients(unconnected_gradients)
63
+ except ValueError:
64
+ raise ValueError(
65
+ "Unknown value for unconnected_gradients: %r" % unconnected_gradients)
66
+
67
+ return pywrap_tfe.TFE_Py_TapeGradient(
68
+ tape._tape, # pylint: disable=protected-access
69
+ target,
70
+ sources,
71
+ output_gradients,
72
+ sources_raw,
73
+ compat.as_str(unconnected_gradients.value))
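The docstring above describes the scheme without showing it: the tape records one entry per op, and gradients flow backwards by replaying the trace in reverse and applying each entry's backward function. A toy pure-Python sketch of that scheme, under the assumption that each entry is an `(output, inputs, backward_fn)` triple; this is illustrative only, not the real C++ `TFE_Py_TapeGradient`:

```python
class ToyTape:
    """Records (output_id, input_ids, backward_fn) entries in forward order."""

    def __init__(self):
        self.entries = []

    def record(self, out, inputs, backward_fn):
        self.entries.append((out, inputs, backward_fn))


def toy_imperative_grad(tape, target, sources, output_gradient=1.0):
    """Walks the trace backwards, accumulating gradients per tensor id."""
    grads = {target: output_gradient}
    for out, inputs, backward_fn in reversed(tape.entries):
        if out not in grads:
            continue  # this entry is not connected to the target
        # Push the output's gradient back to each input, summing when an
        # input feeds multiple downstream ops.
        for inp, g in zip(inputs, backward_fn(grads[out])):
            grads[inp] = grads.get(inp, 0.0) + g
    # Unconnected sources yield None, matching unconnected_gradients='none'.
    return [grads.get(s) for s in sources]
```

For y = x * x recorded at x = 3, the backward function multiplies the incoming gradient by 2x = 6, so the gradient of y with respect to x comes out as 6.0, while an unrecorded source yields None.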
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/lift_to_graph.py ADDED
@@ -0,0 +1,365 @@
+# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+# pylint: disable=unidiomatic-typecheck
+"""Utility to lift subgraphs."""
+
+import collections
+
+from tensorflow.python.framework import func_graph
+from tensorflow.python.framework import ops
+from tensorflow.python.framework import tensor as tensor_lib
+from tensorflow.python.ops import array_ops
+from tensorflow.python.ops import op_selector
+from tensorflow.python.ops import resource_variable_ops
+from tensorflow.python.util import compat
+from tensorflow.python.util import object_identity
+from tensorflow.python.util.tf_export import tf_export
+
+
+UnliftableError = op_selector.UnliftableError
+
+
+def _as_operation(op_or_tensor):
+  if isinstance(op_or_tensor, tensor_lib.Tensor):
+    return op_or_tensor.op
+  return op_or_tensor
+
+
+def _constant_inputs(op_or_tensor):
+  return all(_as_operation(i).type == u"Const"
+             and not _as_operation(i).control_inputs
+             for i in op_selector.graph_inputs(_as_operation(op_or_tensor)))
+
+
+# Represents an input to `copied_op` which must be updated once
+# `old_graph_tensor` has been copied.
+_InputMutation = collections.namedtuple(
+    "_InputMutation",
+    ["copied_op", "input_index", "old_graph_tensor"])
+
+
+# Represents a control input to `copied_op` which must be added once
+# `old_graph_op` has been copied.
+_ControlMutation = collections.namedtuple(
+    "_ControlMutation",
+    ["copied_op", "old_graph_op"])
+
+
+def _copy_non_source(op, graph, op_map, base_graph):
+  """Copy an op directly to a given graph.
+
+  Generally `op`'s inputs should already have been copied. If this is not the
+  case, for example with v1 while_loops, then `_copy_non_source` inserts
+  placeholders for the unavailable Tensors and returns a list of required
+  mutations.
+
+  Args:
+    op: The op to be copied.
+    graph: The destination graph.
+    op_map: A dict mapping ops and tensors in the old graph to the new one.
+    base_graph: The graph we're copying from, for any necessary functions.
+  Returns:
+    A tuple of (required_inputs, required_control_inputs):
+      required_inputs:
+        A list of `_InputMutation` tuples containing inputs to `copied_op`
+        which must be updated once `old_graph_tensor` has been copied.
+      required_control_inputs:
+        A list of `_ControlMutation` tuples containing control inputs to
+        `copied_op` which must be added once `old_graph_op` has been copied.
+  """
+  input_mutations = []
+  control_mutations = []
+  copied_inputs = []
+  for input_index, original_input in enumerate(op.inputs):
+    copied_input = op_map.get(original_input, None)
+    if copied_input is None:
+      # An input for this op is missing due to a loop in the graph. We'll
+      # insert a placeholder for now and return information about the required
+      # post-hoc mutation.
+      copied_input = array_ops.placeholder(
+          name="unused_control_flow_input",
+          shape=original_input.shape,
+          dtype=original_input.dtype)
+      input_mutations.append(
+          # `copied_op` is filled in below, after we've created it.
+          _InputMutation(copied_op=None,
+                         input_index=input_index,
+                         old_graph_tensor=original_input))
+    copied_inputs.append(copied_input)
+
+  copied_control_inputs = []
+  for original_control_input in op.control_inputs:
+    copied_control_input = op_map.get(original_control_input, None)
+    if copied_control_input is None:
+      control_mutations.append(
+          _ControlMutation(copied_op=None,
+                           old_graph_op=original_control_input))
+    else:
+      copied_control_inputs.append(copied_control_input)
+
+  # Don't copy over nodes with the _tpu_replicate attribute. This attribute is
+  # used to signal that the op was built inside a tpu_replicate context; if
+  # we're lifting it to another graph we're similarly lifting it into another
+  # context.
+  with ops.control_dependencies(copied_control_inputs), ops.device(op.device):
+    # pylint: disable=protected-access
+    f = base_graph._functions.get(op.type, None)
+    if f is not None and compat.as_str(f.name) not in graph._functions:
+      f.add_to_graph(graph)
+    # pylint: enable=protected-access
+
+    # Create a new op in the destination graph if it doesn't exist before.
+    copied_op = graph.create_op(
+        op_type=op.type,
+        inputs=copied_inputs,
+        dtypes=[x.dtype for x in op.outputs],
+        attrs={
+            key: value for key, value in op.node_def.attr.items()
+            if not key.startswith("_class") and
+            not key.startswith("_tpu_replicate")
+        },  # b/128981532.
+        name=op.name)
+  op_map[op] = copied_op
+  for i, o in enumerate(op.outputs):
+    op_map[o] = copied_op.outputs[i]
+
+  return ([mutation._replace(copied_op=copied_op)
+           for mutation in input_mutations],
+          [mutation._replace(copied_op=copied_op)
+           for mutation in control_mutations])
+
+
+def _copy_source(s, graph, op_map, handle_captures, inverse_captures,
+                 base_graph):
+  """Create a source in a graph based on a Tensor from a different graph.
+
+  This function creates a placeholder analog of `s` in a graph with the
+  following behavior:
+
+  1) If s is a captured Tensor or Variable and handle_captures is set to True,
+     simply capture it in the new graph as well.
+
+  2) If s is a PlaceholderWithDefault whose default is a constant, preserve
+     said default in the new graph.
+
+  3) When applicable, copy resource variable metadata from `s` to the newly
+     created placeholder.
+
+  Args:
+    s: The source of interest.
+    graph: The destination graph.
+    op_map: A dict mapping ops and tensors in the old graph to the new one.
+    handle_captures: A boolean indicating whether to re-capture s in the new
+      graph or simply create a vanilla placeholder.
+    inverse_captures: A dict mapping s back to the Tensor or Variable that it
+      captures.
+    base_graph: The graph being copied from.
+  """
+  if handle_captures and s in inverse_captures:
+    copied_placeholder = graph.capture(inverse_captures[s], name=s.op.name)
+  elif s.op.type == "PlaceholderWithDefault" and _constant_inputs(s):
+    # Copy the default value to the graph.
+    default_value = s.op.inputs[0]
+    unavailable_inputs, unavailable_control_inputs = _copy_non_source(
+        op=default_value.op, graph=graph, op_map=op_map,
+        base_graph=base_graph)
+    if unavailable_inputs or unavailable_control_inputs:
+      raise AssertionError(
+          "Could not copy source node {} because it has inputs."
+          .format(default_value))
+
+    with ops.device(s.op.device):
+      copied_placeholder = array_ops.placeholder_with_default(
+          input=op_map[default_value], shape=s.shape, name=s.op.name)
+  else:
+    with ops.device(s.op.device):
+      copied_placeholder = array_ops.placeholder(
+          dtype=s.dtype, shape=s.shape, name=s.op.name)
+
+  base_handle = resource_variable_ops.get_resource_handle_data(s)
+  if base_handle.shape_and_type:
+    resource_variable_ops._set_handle_shapes_and_types(  # pylint: disable=protected-access
+        copied_placeholder,
+        base_handle,
+        graph_mode=True)
+
+  op_map[s] = copied_placeholder
+  # Add an entry for the op of the source tensor so that if there are any
+  # nodes depending on that op via control dependencies it can work correctly.
+  op_map[s.op] = copied_placeholder.op
+
+
+@tf_export("__internal__.lift_to_graph", v1=[])
+def lift_to_graph(tensors,
+                  graph,
+                  sources=None,
+                  disallowed_placeholders=None,
+                  add_sources=False,
+                  handle_captures=False,
+                  base_graph=None,
+                  op_map=None):
+  """Copies the tensor and all its inputs recursively to the outer graph.
+
+  Args:
+    tensors: The Tensors to lift.
+    graph: The graph to lift to.
+    sources: Optional sequence of nodes to start from. If omitted the whole
+      subgraph which feeds into `init_tensor` is lifted.
+    disallowed_placeholders: An optional set of ops which may not appear in the
+      lifted graph. Defaults to all placeholders.
+    add_sources: A boolean indicating whether placeholders which are not in
+      sources should be allowed.
+    handle_captures: A boolean indicating whether to re-capture s in the new
+      graph or simply create a vanilla placeholder.
+    base_graph: The graph from which to lift ops. This will be inferred if not
+      specified.
+    op_map: A map containing all the existing nodes that have been lifted to
+      the destination graph, so they won't be lifted and copied again.
+
+  Returns:
+    A mapping from ops in the current default graph to ops in `graph`.
+
+  Raises:
+    UnliftableError: If a placeholder blocks lifting.
+  """
+  variable_init_tensors = []
+  init_tensors = []
+  for tensor in tensors:
+    if isinstance(tensor, resource_variable_ops.ResourceVariable):
+      variable_init_tensors.append(tensor)
+    else:
+      init_tensors.append(tensor)
+  base_graph = base_graph or init_tensors[0].graph
+  op_map = op_map or object_identity.ObjectIdentityDictionary()
+
+  # Check that the initializer does not depend on any placeholders.
+  sources = object_identity.ObjectIdentitySet(sources or [])
+  visited_ops = set(x.op for x in sources)
+  op_outputs = collections.defaultdict(set)
+
+  # First we extract the subgraph between init_tensors and sources.
+  for init_tensor in init_tensors:
+    sources.update(op_selector.map_subgraph(
+        init_tensor=init_tensor,
+        sources=sources,
+        disallowed_placeholders=disallowed_placeholders,
+        visited_ops=visited_ops,
+        op_outputs=op_outputs,
+        add_sources=add_sources))
+
+  # Try to topologically sort the nodes we've extracted. Now we know how many
+  # of their outputs are part of this subgraph.
+  ops_to_copy = []
+  marked_ops = set([])
+  ops_to_visit = [_as_operation(t) for t in init_tensors
+                  if not op_outputs[_as_operation(t)]]
+  unvisited_ops = set(ops_to_visit)
+  while unvisited_ops:
+    while ops_to_visit:
+      op = ops_to_visit.pop()
+      if op in marked_ops:
+        continue
+      marked_ops.add(op)
+      ops_to_copy.append(op)
+      for inp in op_selector.graph_inputs(op):
+        # Don't lift the TPUReplicateMetadata nodes out of the function,
+        # because they have no registered kernels.
+        if inp.type == "TPUReplicateMetadata":
+          continue
+        unvisited_ops.add(inp)
+        if (all(x in marked_ops for x in op_outputs[inp]) and
+            inp not in sources):
+          ops_to_visit.append(inp)
+    unvisited_ops.difference_update(marked_ops)
+    if unvisited_ops:
+      # `unvisited_ops` should only have elements if the graph has a loop. In
+      # this case we want to keep copying and there's no topological ordering;
+      # we'll do ugly post-hoc mutations instead.
+      ops_to_visit.append(next(iter(unvisited_ops)))
+
+  # When the topological sort fails due to loops, it can result in exceptions
+  # later when copying a node whose inputs haven't been copied yet. We can
+  # improve that pseudo-topological order slightly by putting the ops without
+  # inputs, such as constants, at the start of the topological order (i.e. at
+  # the end of ops_to_copy).
+  ops_to_copy.sort(key=(lambda op: len(op_selector.graph_inputs(op)) == 0))
+
+  # When lifting from one FuncGraph to another, we will need to capture the
+  # relevant tensors as well.
+  captures = []
+  inverse_captures = object_identity.ObjectIdentityDictionary()
+  internal_captures = []
+  if (isinstance(base_graph, func_graph.FuncGraph) and
+      isinstance(graph, func_graph.FuncGraph)):
+    captures = base_graph.captures
+    for external_capture, internal_capture in captures:
+      inverse_captures[internal_capture] = external_capture
+    internal_captures = base_graph.internal_captures
+
+  # ops_to_copy now holds a reverse topologically sorted list of ops which
+  # ends in the initializer. We copy those to the outermost graph and
+  # build the initialization op there.
+  with graph.as_default():
+    for i in variable_init_tensors:
+      op_map[i] = i
+    source_ops = set()
+    # Add the sources in the same order as the original graph.
+    for s in internal_captures:
+      if s in sources:
+        sources.remove(s)
+        source_ops.add(s.op)
+        _copy_source(
+            s=s,
+            graph=graph,
+            op_map=op_map,
+            handle_captures=handle_captures,
+            inverse_captures=inverse_captures,
+            base_graph=base_graph)
+    for s in sources:
+      source_ops.add(s.op)
+      _copy_source(
+          s=s,
+          graph=graph,
+          op_map=op_map,
+          handle_captures=handle_captures,
+          inverse_captures=inverse_captures,
+          base_graph=base_graph)
+
+    input_mutations = []
+    control_mutations = []
+    for op in reversed(ops_to_copy):
+      if op in source_ops or op in op_map:
+        continue
+      new_input_mutations, new_control_mutations = _copy_non_source(
+          op=op, graph=graph, op_map=op_map, base_graph=base_graph)
+      input_mutations.extend(new_input_mutations)
+      control_mutations.extend(new_control_mutations)
+
+    # Mutate the new graph to insert any loops which existed in the source
+    # graph due to v1 while_loops.
+    #
+    # pylint: disable=protected-access
+    with graph._mutation_lock():
+      for mutation in input_mutations:
+        mutation.copied_op._update_input(
+            mutation.input_index, op_map[mutation.old_graph_tensor])
+      for mutation in control_mutations:
+        # Don't lift the TPUReplicateMetadata nodes out of the function,
+        # because they have no registered kernels.
+        if mutation.old_graph_op.type == "TPUReplicateMetadata":
+          continue
+        mutation.copied_op._add_control_input(op_map[mutation.old_graph_op])
+    # pylint: enable=protected-access
+
+  return op_map
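The core of `lift_to_graph` is the reverse-topological walk: an op is visited only once all of its tracked consumers (`op_outputs`) have been marked, so `ops_to_copy` ends up in reverse topological order. A toy version of just that walk on a plain dict graph, with hypothetical names and no loop handling, to make the traversal concrete:

```python
import collections


def reverse_topo_order(graph_inputs, targets):
    """Toy sketch of the ops_to_copy extraction.

    graph_inputs: dict mapping each op name to the list of ops feeding it.
    targets: the sink ops we want to lift.
    """
    # Invert the edges: op_outputs[x] is the set of tracked consumers of x.
    op_outputs = collections.defaultdict(set)
    for op, inputs in graph_inputs.items():
        for inp in inputs:
            op_outputs[inp].add(op)

    ops_to_copy, marked = [], set()
    # Start from targets with no tracked consumers, as the real code does.
    to_visit = [t for t in targets if not op_outputs[t]]
    while to_visit:
        op = to_visit.pop()
        if op in marked:
            continue
        marked.add(op)
        ops_to_copy.append(op)
        for inp in graph_inputs.get(op, []):
            # Only descend once every consumer of `inp` has been marked;
            # this is what makes the result reverse-topological.
            if all(consumer in marked for consumer in op_outputs[inp]):
                to_visit.append(inp)
    return ops_to_copy
```

On a diamond graph a -> {b, c} -> d, the walk emits d first and a last, so iterating `reversed(ops_to_copy)` (as the function above does when copying) always copies an op after its inputs.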
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__init__.py ADDED
File without changes
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (192 Bytes). View file
 
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/__pycache__/memory_test_util.cpython-310.pyc ADDED
Binary file (1.55 kB). View file
 
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/memory_tests/memory_test_util.py ADDED
@@ -0,0 +1,73 @@
+# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Utils for memory tests."""
+
+import collections
+import gc
+import time
+
+from tensorflow.python.eager import context
+
+# memory_profiler might not be available in the OSS version of TensorFlow.
+try:
+  import memory_profiler  # pylint:disable=g-import-not-at-top
+except ImportError:
+  memory_profiler = None
+
+
+def _instance_count_by_class():
+  counter = collections.Counter()
+
+  for obj in gc.get_objects():
+    try:
+      counter[obj.__class__.__name__] += 1
+    except Exception:  # pylint:disable=broad-except
+      pass
+
+  return counter
+
+
+def assert_no_leak(f, num_iters=100000, increase_threshold_absolute_mb=25):
+  """Assert memory usage doesn't increase beyond given threshold for f."""
+
+  with context.eager_mode():
+    # Warm up.
+    f()
+
+    # Wait for background threads to start up and take over memory.
+    # FIXME: The nature of this test leaves few other options. Maybe there
+    # is a better way to do this.
+    time.sleep(4)
+
+    gc.collect()
+    initial = memory_profiler.memory_usage(-1)[0]
+    instance_count_by_class_before = _instance_count_by_class()
+
+    for _ in range(num_iters):
+      f()
+
+    gc.collect()
+    increase = memory_profiler.memory_usage(-1)[0] - initial
+
+    assert increase < increase_threshold_absolute_mb, (
+        "Increase is too high. Initial memory usage: %f MB. Increase: %f MB. "
+        "Maximum allowed increase: %f MB. "
+        "Instance count diff before/after: %s") % (
+            initial, increase, increase_threshold_absolute_mb,
+            _instance_count_by_class() - instance_count_by_class_before)
+
+
+def memory_profiler_is_available():
+  return memory_profiler is not None
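The instance-counting half of `assert_no_leak` needs nothing beyond the standard library: snapshot per-class instance counts via `gc.get_objects()`, run the function under test many times, and diff the counts. A self-contained sketch of that idea (the `Widget` class and `leaked_instances` helper are illustrative, not part of the file above):

```python
import collections
import gc


class Widget:
    """A hypothetical class whose instances we count."""


def instance_count_by_class():
    # Same idea as _instance_count_by_class above: tally every gc-tracked
    # object by its class name.
    counter = collections.Counter()
    for obj in gc.get_objects():
        counter[obj.__class__.__name__] += 1
    return counter


def leaked_instances(f, cls_name, num_iters=100):
    """Returns how many extra `cls_name` instances survive `num_iters` calls."""
    gc.collect()
    before = instance_count_by_class()[cls_name]
    for _ in range(num_iters):
        f()
    gc.collect()
    return instance_count_by_class()[cls_name] - before
```

A function that drops its `Widget` reports a diff of zero; one that appends each `Widget` to a long-lived list reports one retained instance per iteration, which is exactly the kind of signal the error message above embeds via the before/after Counter diff.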
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/monitoring.py ADDED
@@ -0,0 +1,542 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ # ==============================================================================
15
+ """TensorFlow monitoring APIs."""
16
+
17
+ import collections
18
+ import functools
19
+ import time
20
+
21
+ from tensorflow.core.framework import summary_pb2
22
+ from tensorflow.python import pywrap_tfe
23
+ from tensorflow.python.client import pywrap_tf_session
24
+ from tensorflow.python.framework import c_api_util
25
+ from tensorflow.python.util import compat
26
+ from tensorflow.python.util.tf_export import tf_export
27
+
28
+ _MetricMethod = collections.namedtuple('MetricMethod', 'create delete get_cell')
29
+ _counter_methods = [
30
+ _MetricMethod(
31
+ create=pywrap_tfe.TFE_MonitoringNewCounter0,
32
+ delete=pywrap_tfe.TFE_MonitoringDeleteCounter0,
33
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellCounter0),
34
+ _MetricMethod(
35
+ create=pywrap_tfe.TFE_MonitoringNewCounter1,
36
+ delete=pywrap_tfe.TFE_MonitoringDeleteCounter1,
37
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellCounter1),
38
+ _MetricMethod(
39
+ create=pywrap_tfe.TFE_MonitoringNewCounter2,
40
+ delete=pywrap_tfe.TFE_MonitoringDeleteCounter2,
41
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellCounter2),
42
+ ]
43
+ _int_gauge_methods = [
44
+ _MetricMethod(
45
+ create=pywrap_tfe.TFE_MonitoringNewIntGauge0,
46
+ delete=pywrap_tfe.TFE_MonitoringDeleteIntGauge0,
47
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellIntGauge0),
48
+ _MetricMethod(
49
+ create=pywrap_tfe.TFE_MonitoringNewIntGauge1,
50
+ delete=pywrap_tfe.TFE_MonitoringDeleteIntGauge1,
51
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellIntGauge1),
52
+ _MetricMethod(
53
+ create=pywrap_tfe.TFE_MonitoringNewIntGauge2,
54
+ delete=pywrap_tfe.TFE_MonitoringDeleteIntGauge2,
55
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellIntGauge2),
56
+ ]
57
+ _string_gauge_methods = [
58
+ _MetricMethod(
59
+ create=pywrap_tfe.TFE_MonitoringNewStringGauge0,
60
+ delete=pywrap_tfe.TFE_MonitoringDeleteStringGauge0,
61
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellStringGauge0),
62
+ _MetricMethod(
63
+ create=pywrap_tfe.TFE_MonitoringNewStringGauge1,
64
+ delete=pywrap_tfe.TFE_MonitoringDeleteStringGauge1,
65
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellStringGauge1),
66
+ _MetricMethod(
67
+ create=pywrap_tfe.TFE_MonitoringNewStringGauge2,
68
+ delete=pywrap_tfe.TFE_MonitoringDeleteStringGauge2,
69
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellStringGauge2),
70
+ _MetricMethod(
71
+ create=pywrap_tfe.TFE_MonitoringNewStringGauge3,
72
+ delete=pywrap_tfe.TFE_MonitoringDeleteStringGauge3,
73
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellStringGauge3),
74
+ _MetricMethod(
75
+ create=pywrap_tfe.TFE_MonitoringNewStringGauge4,
76
+ delete=pywrap_tfe.TFE_MonitoringDeleteStringGauge4,
77
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellStringGauge4),
78
+ ]
79
+ _bool_gauge_methods = [
80
+ _MetricMethod(
81
+ create=pywrap_tfe.TFE_MonitoringNewBoolGauge0,
82
+ delete=pywrap_tfe.TFE_MonitoringDeleteBoolGauge0,
83
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellBoolGauge0),
84
+ _MetricMethod(
85
+ create=pywrap_tfe.TFE_MonitoringNewBoolGauge1,
86
+ delete=pywrap_tfe.TFE_MonitoringDeleteBoolGauge1,
87
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellBoolGauge1),
88
+ _MetricMethod(
89
+ create=pywrap_tfe.TFE_MonitoringNewBoolGauge2,
90
+ delete=pywrap_tfe.TFE_MonitoringDeleteBoolGauge2,
91
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellBoolGauge2),
92
+ ]
93
+ _sampler_methods = [
94
+ _MetricMethod(
95
+ create=pywrap_tfe.TFE_MonitoringNewSampler0,
96
+ delete=pywrap_tfe.TFE_MonitoringDeleteSampler0,
97
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellSampler0),
98
+ _MetricMethod(
99
+ create=pywrap_tfe.TFE_MonitoringNewSampler1,
100
+ delete=pywrap_tfe.TFE_MonitoringDeleteSampler1,
101
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellSampler1),
102
+ _MetricMethod(
103
+ create=pywrap_tfe.TFE_MonitoringNewSampler2,
104
+ delete=pywrap_tfe.TFE_MonitoringDeleteSampler2,
105
+ get_cell=pywrap_tfe.TFE_MonitoringGetCellSampler2),
106
+ ]
107
+
108
+
109
+ class Metric(object):
110
+ """The base class of metric."""
111
+
112
+ __slots__ = ["_metric", "_metric_name", "_metric_methods", "_label_length"]
113
+
114
+ def __init__(self, metric_name, metric_methods, label_length, *args):
115
+ """Creates a new metric.
116
+
117
+ Args:
118
+ metric_name: name of the metric class.
119
+ metric_methods: list of swig metric methods.
120
+ label_length: length of label args.
121
+ *args: the arguments to call create method.
122
+ """
123
+ self._metric_name = metric_name
124
+ self._metric_methods = metric_methods
125
+ self._label_length = label_length
126
+
127
+ if label_length >= len(self._metric_methods):
128
+ raise ValueError('Cannot create {} metric with label >= {}'.format(
129
+ self._metric_name, len(self._metric_methods)))
130
+
131
+ self._metric = self._metric_methods[self._label_length].create(*args)
132
+
133
+ def __del__(self):
134
+ try:
135
+ deleter = self._metric_methods[self._label_length].delete
136
+ metric = self._metric
137
+ except AttributeError:
138
+ return
139
+
140
+ if deleter is not None:
141
+ deleter(metric)
142
+
143
+ def get_cell(self, *labels):
144
+ """Retrieves the cell."""
145
+ if len(labels) != self._label_length:
146
+ raise ValueError('The {} expects taking {} labels'.format(
147
+ self._metric_name, self._label_length))
148
+ return self._metric_methods[self._label_length].get_cell(
149
+ self._metric, *labels)
150
+
151
+
152
+ class CounterCell(object):
153
+ """CounterCell stores each value of a Counter."""
154
+
155
+ __slots__ = ["_cell"]
156
+
157
+ def __init__(self, cell):
158
+ """Creates a new CounterCell.
159
+
160
+ Args:
161
+ cell: A c pointer of TFE_MonitoringCounterCell.
162
+ """
163
+ self._cell = cell
164
+
165
+ def increase_by(self, value):
166
+ """Atomically increments the value.
167
+
168
+ Args:
169
+ value: non-negative value.
170
+ """
171
+ pywrap_tfe.TFE_MonitoringCounterCellIncrementBy(self._cell, value)
172
+
173
+ def value(self):
174
+ """Retrieves the current value."""
175
+ return pywrap_tfe.TFE_MonitoringCounterCellValue(self._cell)
176
+
177
+
178
+ class Counter(Metric):
179
+ """A stateful class for updating a cumulative integer metric.
180
+
181
+ This class encapsulates a set of values (or a single value for a label-less
182
+ metric). Each value is identified by a tuple of labels. The class allows the
183
+ user to increment each value.
184
+ """
185
+
186
+ __slots__ = []
187
+
188
+ def __init__(self, name, description, *labels):
189
+ """Creates a new Counter.
190
+
191
+ Args:
192
+ name: name of the new metric.
193
+ description: description of the new metric.
194
+ *labels: The label list of the new metric.
195
+ """
196
+ super(Counter, self).__init__('Counter', _counter_methods, len(labels),
197
+ name, description, *labels)
198
+
199
+ def get_cell(self, *labels):
200
+ """Retrieves the cell."""
201
+ return CounterCell(super(Counter, self).get_cell(*labels))
202
+
203
+
204
+ class IntGaugeCell(object):
205
+ """A single integer value stored in an `IntGauge`."""
206
+
207
+ __slots__ = ["_cell"]
208
+
209
+ def __init__(self, cell):
210
+ """Creates a new IntGaugeCell.
211
+
212
+ Args:
213
+ cell: A c pointer of TFE_MonitoringIntGaugeCell.
214
+ """
215
+ self._cell = cell
216
+
217
+ def set(self, value):
218
+ """Atomically set the value.
219
+
220
+ Args:
221
+ value: integer value.
222
+ """
223
+ pywrap_tfe.TFE_MonitoringIntGaugeCellSet(self._cell, value)
224
+
225
+ def value(self):
226
+ """Retrieves the current value."""
227
+ return pywrap_tfe.TFE_MonitoringIntGaugeCellValue(self._cell)
228
+
229
+
230
+ class IntGauge(Metric):
231
+ """A stateful class for updating a gauge-like integer metric.
232
+
233
+ This class encapsulates a set of integer values (or a single value for a
234
+ label-less metric). Each value is identified by a tuple of labels. The class
235
+ allows the user to set each value.
236
+ """
237
+
238
+ __slots__ = []
239
+
240
+ def __init__(self, name, description, *labels):
241
+ """Creates a new IntGauge.
242
+
243
+ Args:
244
+ name: name of the new metric.
245
+ description: description of the new metric.
246
+ *labels: The label list of the new metric.
247
+ """
248
+ super(IntGauge, self).__init__('IntGauge', _int_gauge_methods, len(labels),
249
+ name, description, *labels)
250
+
251
+ def get_cell(self, *labels):
252
+ """Retrieves the cell."""
253
+ return IntGaugeCell(super(IntGauge, self).get_cell(*labels))
254
+
255
+
+class StringGaugeCell(object):
+  """A single string value stored in a `StringGauge`."""
+
+  __slots__ = ["_cell"]
+
+  def __init__(self, cell):
+    """Creates a new StringGaugeCell.
+
+    Args:
+      cell: A C pointer of TFE_MonitoringStringGaugeCell.
+    """
+    self._cell = cell
+
+  def set(self, value):
+    """Atomically set the value.
+
+    Args:
+      value: string value.
+    """
+    pywrap_tfe.TFE_MonitoringStringGaugeCellSet(self._cell, value)
+
+  def value(self):
+    """Retrieves the current value."""
+    with c_api_util.tf_buffer() as buffer_:
+      pywrap_tfe.TFE_MonitoringStringGaugeCellValue(self._cell, buffer_)
+      value = pywrap_tf_session.TF_GetBuffer(buffer_).decode('utf-8')
+    return value
+
+
+class StringGauge(Metric):
+  """A stateful class for updating a gauge-like string metric.
+
+  This class encapsulates a set of string values (or a single value for a
+  label-less metric). Each value is identified by a tuple of labels. The class
+  allows the user to set each value.
+  """
+
+  __slots__ = []
+
+  def __init__(self, name, description, *labels):
+    """Creates a new StringGauge.
+
+    Args:
+      name: name of the new metric.
+      description: description of the new metric.
+      *labels: The label list of the new metric.
+    """
+    super(StringGauge, self).__init__('StringGauge', _string_gauge_methods,
+                                      len(labels), name, description, *labels)
+
+  def get_cell(self, *labels):
+    """Retrieves the cell."""
+    return StringGaugeCell(super(StringGauge, self).get_cell(*labels))
+
+
+class BoolGaugeCell(object):
+  """A single boolean value stored in a `BoolGauge`."""
+
+  __slots__ = ["_cell"]
+
+  def __init__(self, cell):
+    """Creates a new BoolGaugeCell.
+
+    Args:
+      cell: A C pointer of TFE_MonitoringBoolGaugeCell.
+    """
+    self._cell = cell
+
+  def set(self, value):
+    """Atomically set the value.
+
+    Args:
+      value: bool value.
+    """
+    pywrap_tfe.TFE_MonitoringBoolGaugeCellSet(self._cell, value)
+
+  def value(self):
+    """Retrieves the current value."""
+    return pywrap_tfe.TFE_MonitoringBoolGaugeCellValue(self._cell)
+
+
+@tf_export("__internal__.monitoring.BoolGauge", v1=[])
+class BoolGauge(Metric):
+  """A stateful class for updating a gauge-like bool metric.
+
+  This class encapsulates a set of boolean values (or a single value for a
+  label-less metric). Each value is identified by a tuple of labels. The class
+  allows the user to set each value.
+  """
+
+  __slots__ = []
+
+  def __init__(self, name, description, *labels):
+    """Creates a new BoolGauge.
+
+    Args:
+      name: name of the new metric.
+      description: description of the new metric.
+      *labels: The label list of the new metric.
+    """
+    super(BoolGauge, self).__init__('BoolGauge', _bool_gauge_methods,
+                                    len(labels), name, description, *labels)
+
+  def get_cell(self, *labels):
+    """Retrieves the cell."""
+    return BoolGaugeCell(super(BoolGauge, self).get_cell(*labels))
+
+
+class SamplerCell(object):
+  """SamplerCell stores each value of a Sampler."""
+
+  __slots__ = ["_cell"]
+
+  def __init__(self, cell):
+    """Creates a new SamplerCell.
+
+    Args:
+      cell: A C pointer of TFE_MonitoringSamplerCell.
+    """
+    self._cell = cell
+
+  def add(self, value):
+    """Atomically add a sample.
+
+    Args:
+      value: float value.
+    """
+    pywrap_tfe.TFE_MonitoringSamplerCellAdd(self._cell, value)
+
+  def value(self):
+    """Retrieves the current distribution of samples.
+
+    Returns:
+      A HistogramProto describing the distribution of samples.
+    """
+    with c_api_util.tf_buffer() as buffer_:
+      pywrap_tfe.TFE_MonitoringSamplerCellValue(self._cell, buffer_)
+      proto_data = pywrap_tf_session.TF_GetBuffer(buffer_)
+    histogram_proto = summary_pb2.HistogramProto()
+    histogram_proto.ParseFromString(compat.as_bytes(proto_data))
+    return histogram_proto
+
+
+class Buckets(object):
+  """Bucketing strategies for the samplers."""
+
+  __slots__ = ["buckets"]
+
+  def __init__(self, buckets):
+    """Creates a new Buckets.
+
+    Args:
+      buckets: A C pointer of TFE_MonitoringBuckets.
+    """
+    self.buckets = buckets
+
+  def __del__(self):
+    pywrap_tfe.TFE_MonitoringDeleteBuckets(self.buckets)
+
+
+class ExponentialBuckets(Buckets):
+  """Exponential bucketing strategy.
+
+  Sets up buckets of the form:
+  [-DBL_MAX, ..., scale * growth_factor^i,
+  scale * growth_factor^(i + 1), ..., DBL_MAX].
+  """
+
+  __slots__ = []
+
+  def __init__(self, scale, growth_factor, bucket_count):
+    """Creates a new exponential Buckets.
+
+    Args:
+      scale: float
+      growth_factor: float
+      bucket_count: integer
+    """
+    super(ExponentialBuckets, self).__init__(
+        pywrap_tfe.TFE_MonitoringNewExponentialBuckets(scale, growth_factor,
+                                                       bucket_count))
+
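The bucket layout described in the `ExponentialBuckets` docstring can be sketched in plain Python. The real boundaries are computed inside `TFE_MonitoringNewExponentialBuckets`; the helper name `exponential_boundaries` below is hypothetical and only illustrates the formula:

```python
import math

def exponential_boundaries(scale, growth_factor, bucket_count):
    # Finite bucket edges are scale * growth_factor**i for i in
    # 0..bucket_count-1; the first bucket implicitly begins at -DBL_MAX
    # and the last extends to DBL_MAX, as in the docstring above.
    edges = [scale * growth_factor**i for i in range(bucket_count)]
    return [-math.inf] + edges + [math.inf]

print(exponential_boundaries(1.0, 2.0, 4))
# [-inf, 1.0, 2.0, 4.0, 8.0, inf]
```

With `scale=1.0` and `growth_factor=2.0`, each bucket upper bound doubles the previous one, which keeps relative (rather than absolute) error bounded across a wide value range.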
+class Sampler(Metric):
+  """A stateful class for updating a cumulative histogram metric.
+
+  This class encapsulates a set of histograms (or a single histogram for a
+  label-less metric) configured with a list of increasing bucket boundaries.
+  Each histogram is identified by a tuple of labels. The class allows the
+  user to add a sample to each histogram value.
+  """
+
+  __slots__ = []
+
+  def __init__(self, name, buckets, description, *labels):
+    """Creates a new Sampler.
+
+    Args:
+      name: name of the new metric.
+      buckets: bucketing strategy of the new metric.
+      description: description of the new metric.
+      *labels: The label list of the new metric.
+    """
+    super(Sampler, self).__init__('Sampler', _sampler_methods, len(labels),
+                                  name, buckets.buckets, description, *labels)
+
+  def get_cell(self, *labels):
+    """Retrieves the cell."""
+    return SamplerCell(super(Sampler, self).get_cell(*labels))
+
+
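What a sampler cell accumulates — a count per bucket plus running totals — can be sketched without the `HistogramProto` machinery. The `_FakeSamplerCell` class below is a hypothetical stand-in that returns a plain dict instead of a proto, using `bisect` to place each sample in the first bucket whose upper bound exceeds it:

```python
import bisect

class _FakeSamplerCell:
    """Hypothetical stand-in for SamplerCell backed by plain Python."""

    def __init__(self, boundaries):
        # `boundaries` are increasing finite upper bounds; one extra
        # bucket catches everything above the last boundary.
        self._boundaries = list(boundaries)
        self._counts = [0] * (len(self._boundaries) + 1)
        self._num = 0
        self._sum = 0.0

    def add(self, value):
        # bisect_right finds the index of the first boundary > value,
        # which is exactly the bucket the sample falls into.
        i = bisect.bisect_right(self._boundaries, value)
        self._counts[i] += 1
        self._num += 1
        self._sum += value

    def value(self):
        return {"num": self._num, "sum": self._sum, "counts": self._counts}


cell = _FakeSamplerCell([1.0, 2.0, 4.0])
for v in (0.5, 1.5, 3.0, 10.0):
    cell.add(v)
print(cell.value())
# {'num': 4, 'sum': 15.0, 'counts': [1, 1, 1, 1]}
```

The real cell serializes the same kind of summary into a `HistogramProto` via the C API instead of keeping Python-side state.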
467
+ # Keeping track of current MonitoredTimer sections to prevent repetitive
468
+ # counting.
469
+ MonitoredTimerSections = []
470
+
471
+
472
+ class MonitoredTimer(object):
473
+ """A context manager to measure the walltime and increment a Counter cell."""
474
+
475
+ __slots__ = [
476
+ "cell",
477
+ "t",
478
+ "monitored_section_name",
479
+ "_counting",
480
+ "_avoid_repetitive_counting",
481
+ ]
482
+
483
+ def __init__(
484
+ self, cell, monitored_section_name=None, avoid_repetitive_counting=False
485
+ ):
486
+ """Creates a new MonitoredTimer.
487
+
488
+ Args:
489
+ cell: the cell associated with the time metric that will be inremented.
490
+ monitored_section_name: name of action being monitored here.
491
+ avoid_repetitive_counting: when set to True, if already in a monitored
492
+ timer section with the same monitored_section_name, skip counting.
493
+ """
494
+ self.cell = cell
495
+ self.monitored_section_name = monitored_section_name
496
+ self._avoid_repetitive_counting = avoid_repetitive_counting
497
+ self._counting = True
498
+
499
+ def __enter__(self):
500
+ if (
501
+ self._avoid_repetitive_counting
502
+ and self.monitored_section_name
503
+ and self.monitored_section_name in MonitoredTimerSections
504
+ ):
505
+ self._counting = False
506
+ return self
507
+
508
+ self.t = time.time()
509
+ if self.monitored_section_name:
510
+ MonitoredTimerSections.append(self.monitored_section_name)
511
+
512
+ return self
513
+
514
+ def __exit__(self, exception_type, exception_value, traceback):
515
+ del exception_type, exception_value, traceback
516
+ if self._counting:
517
+ micro_seconds = (time.time() - self.t) * 1000000
518
+ self.cell.increase_by(int(micro_seconds))
519
+ if self.monitored_section_name:
520
+ MonitoredTimerSections.remove(self.monitored_section_name)
521
+
522
+
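The context-manager pattern above (record a start time in `__enter__`, report elapsed microseconds to a counter cell in `__exit__`) can be sketched with a stub cell in place of the real counter. Both class names below are hypothetical stand-ins:

```python
import time

class _FakeCounterCell:
    """Hypothetical stand-in for a counter cell with increase_by()/value()."""

    def __init__(self):
        self._value = 0

    def increase_by(self, n):
        self._value += n

    def value(self):
        return self._value


class _FakeMonitoredTimer:
    """Sketch of the timer: measures walltime and reports microseconds."""

    def __init__(self, cell):
        self._cell = cell

    def __enter__(self):
        self._t = time.time()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Report elapsed walltime in whole microseconds, even if the body
        # raised (the real timer likewise ignores the exception arguments).
        del exc_type, exc_value, traceback
        self._cell.increase_by(int((time.time() - self._t) * 1000000))


cell = _FakeCounterCell()
with _FakeMonitoredTimer(cell):
    time.sleep(0.01)
print(cell.value() > 0)  # True: roughly 10000 microseconds were recorded
```

The repetitive-counting guard in the real class exists so that nested sections with the same name (e.g. a monitored function calling itself) are counted only once per outermost entry.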
523
+ def monitored_timer(cell):
524
+ """A function decorator for adding MonitoredTimer support.
525
+
526
+ Args:
527
+ cell: the cell associated with the time metric that will be inremented.
528
+ Returns:
529
+ A decorator that measure the function runtime and increment the specified
530
+ counter cell.
531
+ """
532
+
533
+ def actual_decorator(func):
534
+
535
+ @functools.wraps(func)
536
+ def wrapper(*args, **kwargs):
537
+ with MonitoredTimer(cell):
538
+ return func(*args, **kwargs)
539
+
540
+ return wrapper
541
+
542
+ return actual_decorator
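The decorator above is a thin wrapper that runs every call of the decorated function inside a timer. A self-contained sketch with a stub cell and a `try/finally` standing in for the `with MonitoredTimer(cell)` block (`fake_monitored_timer` and `_FakeCounterCell` are hypothetical names, not TensorFlow API):

```python
import functools
import time

class _FakeCounterCell:
    """Hypothetical stand-in for a counter cell with increase_by()/value()."""

    def __init__(self):
        self._value = 0

    def increase_by(self, n):
        self._value += n

    def value(self):
        return self._value


def fake_monitored_timer(cell):
    """Sketch of the decorator: time each call, add microseconds to cell."""

    def actual_decorator(func):

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                # Recorded even when func raises, mirroring the context
                # manager's __exit__ behavior.
                cell.increase_by(int((time.time() - start) * 1000000))

        return wrapper

    return actual_decorator


calls_cell = _FakeCounterCell()

@fake_monitored_timer(calls_cell)
def work():
    time.sleep(0.005)
    return "done"

print(work())  # done
print(calls_cell.value() > 0)  # True
```

`functools.wraps` preserves the wrapped function's name and docstring, so decorated functions remain introspectable — the same reason the real decorator uses it.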
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__init__.py ADDED
File without changes
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (200 Bytes).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/atomic_function.cpython-310.pyc ADDED
Binary file (20.3 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/attributes.cpython-310.pyc ADDED
Binary file (4.18 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/autograph_util.cpython-310.pyc ADDED
Binary file (1.55 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/compiler_ir.cpython-310.pyc ADDED
Binary file (4.25 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/composite_tensor_utils.cpython-310.pyc ADDED
Binary file (865 Bytes).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/concrete_function.cpython-310.pyc ADDED
Binary file (56.7 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/eager_function_run.cpython-310.pyc ADDED
Binary file (3.65 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/function_context.cpython-310.pyc ADDED
Binary file (2.75 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/function_type_utils.cpython-310.pyc ADDED
Binary file (14.6 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/polymorphic_function.cpython-310.pyc ADDED
Binary file (55.4 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/saved_model_exported_concrete.cpython-310.pyc ADDED
Binary file (4.11 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/saved_model_utils.cpython-310.pyc ADDED
Binary file (2.58 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/tf_method_target.cpython-310.pyc ADDED
Binary file (1.36 kB).
videochat2/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/__pycache__/tracing_compilation.cpython-310.pyc ADDED
Binary file (9.05 kB).