ZTWHHH committed
Commit 680b9b8 · verified · 1 parent: 6e92363

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. evalkit_tf446/lib/python3.10/site-packages/deepspeed/__init__.py +366 -0
  2. evalkit_tf446/lib/python3.10/site-packages/deepspeed/constants.py +21 -0
  3. evalkit_tf446/lib/python3.10/site-packages/deepspeed/env_report.py +195 -0
  4. evalkit_tf446/lib/python3.10/site-packages/deepspeed/git_version_info.py +31 -0
  5. evalkit_tf446/lib/python3.10/site-packages/deepspeed/git_version_info_installed.py +6 -0
  6. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/__init__.py +7 -0
  7. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__init__.py +5 -0
  8. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/clip_encoder.cpython-310.pyc +0 -0
  9. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_bert.cpython-310.pyc +0 -0
  10. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_bloom.cpython-310.pyc +0 -0
  11. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_gpt.cpython-310.pyc +0 -0
  12. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_llama2.cpython-310.pyc +0 -0
  13. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_megatron_gpt.cpython-310.pyc +0 -0
  14. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_opt.cpython-310.pyc +0 -0
  15. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_transformer.cpython-310.pyc +0 -0
  16. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/clip_encoder.py +77 -0
  17. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_base.py +15 -0
  18. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_bert.py +20 -0
  19. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_bloom.py +20 -0
  20. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_gpt.py +20 -0
  21. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_llama2.py +58 -0
  22. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_megatron_gpt.py +20 -0
  23. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_opt.py +20 -0
  24. evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py +191 -0
  25. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/__init__.cpython-310.pyc +0 -0
  26. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/coreviews.cpython-310.pyc +0 -0
  27. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/digraph.cpython-310.pyc +0 -0
  28. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/filters.cpython-310.pyc +0 -0
  29. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/function.cpython-310.pyc +0 -0
  30. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/graph.cpython-310.pyc +0 -0
  31. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/graphviews.cpython-310.pyc +0 -0
  32. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/multidigraph.cpython-310.pyc +0 -0
  33. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/multigraph.cpython-310.pyc +0 -0
  34. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/reportviews.cpython-310.pyc +0 -0
  35. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__init__.py +0 -0
  36. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/__init__.cpython-310.pyc +0 -0
  37. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/dispatch_interface.cpython-310.pyc +0 -0
  38. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/historical_tests.cpython-310.pyc +0 -0
  39. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_coreviews.cpython-310.pyc +0 -0
  40. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_digraph.cpython-310.pyc +0 -0
  41. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_digraph_historical.cpython-310.pyc +0 -0
  42. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_filters.cpython-310.pyc +0 -0
  43. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_function.cpython-310.pyc +0 -0
  44. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graph.cpython-310.pyc +0 -0
  45. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graph_historical.cpython-310.pyc +0 -0
  46. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graphviews.cpython-310.pyc +0 -0
  47. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_multidigraph.cpython-310.pyc +0 -0
  48. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_multigraph.cpython-310.pyc +0 -0
  49. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_reportviews.cpython-310.pyc +0 -0
  50. evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_special.cpython-310.pyc +0 -0
evalkit_tf446/lib/python3.10/site-packages/deepspeed/__init__.py ADDED
@@ -0,0 +1,366 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ import sys
+ import types
+ import json
+ from typing import Optional, Union
+ import torch
+ from torch.optim import Optimizer
+ from torch.optim.lr_scheduler import _LRScheduler
+ from packaging import version as pkg_version
+
+ # Skip Triton import for AMD due to pytorch-triton-rocm module breaking device API in DeepSpeed
+ if not (hasattr(torch.version, 'hip') and torch.version.hip is not None):
+     try:
+         import triton  # noqa: F401 # type: ignore
+         HAS_TRITON = True
+     except ImportError:
+         HAS_TRITON = False
+ else:
+     HAS_TRITON = False
+
+ from . import ops
+ from . import module_inject
+
+ from .accelerator import get_accelerator
+ from .constants import TORCH_DISTRIBUTED_DEFAULT_PORT
+ from .runtime.engine import DeepSpeedEngine, DeepSpeedOptimizerCallable, DeepSpeedSchedulerCallable
+ from .runtime.engine import ADAM_OPTIMIZER, LAMB_OPTIMIZER
+ from .runtime.hybrid_engine import DeepSpeedHybridEngine
+ from .runtime.pipe.engine import PipelineEngine
+ from .inference.engine import InferenceEngine
+ from .inference.config import DeepSpeedInferenceConfig
+ from .runtime.lr_schedules import add_tuning_arguments
+ from .runtime.config import DeepSpeedConfig, DeepSpeedConfigError
+ from .runtime.activation_checkpointing import checkpointing
+ from .ops.transformer import DeepSpeedTransformerLayer, DeepSpeedTransformerConfig
+ from .module_inject import replace_transformer_layer, revert_transformer_layer
+
+ from .utils import log_dist, OnDevice, logger
+ from .comm.comm import init_distributed
+
+ from .runtime import zero
+ from .runtime.compiler import is_compile_supported
+
+ from .pipe import PipelineModule
+
+ from .git_version_info import version, git_hash, git_branch
+
+
+ def _parse_version(version_str):
+     '''Parse a version string and extract the major, minor, and patch versions.'''
+     ver = pkg_version.parse(version_str)
+     return ver.major, ver.minor, ver.micro
+
+
+ # Export version information
+ __version__ = version
+ __version_major__, __version_minor__, __version_patch__ = _parse_version(__version__)
+ __git_hash__ = git_hash
+ __git_branch__ = git_branch
+
+ # Set to torch's distributed package or deepspeed.comm based inside DeepSpeedEngine init
+ dist = None
+
+
+ def initialize(args=None,
+                model: torch.nn.Module = None,
+                optimizer: Optional[Union[Optimizer, DeepSpeedOptimizerCallable]] = None,
+                model_parameters: Optional[torch.nn.Module] = None,
+                training_data: Optional[torch.utils.data.Dataset] = None,
+                lr_scheduler: Optional[Union[_LRScheduler, DeepSpeedSchedulerCallable]] = None,
+                distributed_port: int = TORCH_DISTRIBUTED_DEFAULT_PORT,
+                mpu=None,
+                dist_init_required: Optional[bool] = None,
+                collate_fn=None,
+                config=None,
+                mesh_param=None,
+                config_params=None):
+     """Initialize the DeepSpeed Engine.
+
+     Arguments:
+         args: an object containing local_rank and deepspeed_config fields.
+             This is optional if `config` is passed.
+
+         model: Required: nn.module class before apply any wrappers
+
+         optimizer: Optional: a user defined Optimizer or Callable that returns an Optimizer object.
+             This overrides any optimizer definition in the DeepSpeed json config.
+
+         model_parameters: Optional: An iterable of torch.Tensors or dicts.
+             Specifies what Tensors should be optimized.
+
+         training_data: Optional: Dataset of type torch.utils.data.Dataset
+
+         lr_scheduler: Optional: Learning Rate Scheduler Object or a Callable that takes an Optimizer and returns a Scheduler object.
+             The scheduler object should define a get_lr(), step(), state_dict(), and load_state_dict() methods
+
+         distributed_port: Optional: Master node (rank 0)'s free port that needs to be used for communication during distributed training
+
+         mpu: Optional: A model parallelism unit object that implements
+             get_{model,data}_parallel_{rank,group,world_size}()
+
+         dist_init_required: Optional: None will auto-initialize torch distributed if needed,
+             otherwise the user can force it to be initialized or not via boolean.
+
+         collate_fn: Optional: Merges a list of samples to form a
+             mini-batch of Tensor(s). Used when using batched loading from a
+             map-style dataset.
+
+         config: Optional: Instead of requiring args.deepspeed_config you can pass your deepspeed config
+             as an argument instead, as a path or a dictionary.
+
+         config_params: Optional: Same as `config`, kept for backwards compatibility.
+
+     Returns:
+         A tuple of ``engine``, ``optimizer``, ``training_dataloader``, ``lr_scheduler``
+
+         * ``engine``: DeepSpeed runtime engine which wraps the client model for distributed training.
+
+         * ``optimizer``: Wrapped optimizer if a user defined ``optimizer`` is supplied, or if
+           optimizer is specified in json config else ``None``.
+
+         * ``training_dataloader``: DeepSpeed dataloader if ``training_data`` was supplied,
+           otherwise ``None``.
+
+         * ``lr_scheduler``: Wrapped lr scheduler if user ``lr_scheduler`` is passed, or
+           if ``lr_scheduler`` specified in JSON configuration. Otherwise ``None``.
+     """
+     log_dist("DeepSpeed info: version={}, git-hash={}, git-branch={}".format(__version__, __git_hash__,
+                                                                             __git_branch__),
+              ranks=[0])
+
+     # Disable zero.Init context if it's currently enabled
+     zero.partition_parameters.shutdown_init_context()
+
+     assert model is not None, "deepspeed.initialize requires a model"
+
+     global dist
+     from deepspeed import comm as dist
+     dist_backend = get_accelerator().communication_backend_name()
+     dist.init_distributed(dist_backend=dist_backend,
+                           distributed_port=distributed_port,
+                           dist_init_required=dist_init_required)
+
+     ##TODO: combine reuse mpu as mesh device and vice versa
+     # Set config using config_params for backwards compat
+     if config is None and config_params is not None:
+         config = config_params
+
+     mesh_device = None
+     if mesh_param:
+         logger.info(f"mesh_param to Initialize mesh device: {mesh_param}")
+         mesh_device = dist.initialize_mesh_device(mesh_param, ("data_parallel", "sequence_parallel"))
+     #if config file has sequence parallelize and data parallelize, then use them to initialize mesh device
+     elif config is not None:
+         if "sequence_parallel_size" in config and "data_parallel_size" in config:
+             logger.info(f"config to Initialize mesh device: {config}")
+             mesh_device = dist.initialize_mesh_device((config["data_parallel_size"], config["sequence_parallel_size"]), \
+                                                       ("data_parallel", "sequence_parallel"))
+
+     # Check for deepscale_config for backwards compat
+     if hasattr(args, "deepscale_config") and args.deepscale_config is not None:
+         logger.warning("************ --deepscale_config is deprecated, please use --deepspeed_config ************")
+         if hasattr(args, "deepspeed_config"):
+             assert (args.deepspeed_config
+                     is None), "Not sure how to proceed, we were given both a deepscale_config and deepspeed_config"
+         args.deepspeed_config = args.deepscale_config
+         args.deepscale_config = None
+
+     # Check that we have only one config passed
+     if hasattr(args, "deepspeed_config") and args.deepspeed_config is not None:
+         assert config is None, "Not sure how to proceed, we were given deepspeed configs in the deepspeed arguments and deepspeed.initialize() function call"
+         config = args.deepspeed_config
+     assert config is not None, "DeepSpeed requires --deepspeed_config to specify configuration file"
+     if not isinstance(model, PipelineModule):
+         config_class = DeepSpeedConfig(config, mpu, mesh_device=mesh_device)
+         if config_class.hybrid_engine.enabled:
+             engine = DeepSpeedHybridEngine(args=args,
+                                            model=model,
+                                            optimizer=optimizer,
+                                            model_parameters=model_parameters,
+                                            training_data=training_data,
+                                            lr_scheduler=lr_scheduler,
+                                            mpu=mpu,
+                                            dist_init_required=dist_init_required,
+                                            collate_fn=collate_fn,
+                                            config=config,
+                                            config_class=config_class)
+         else:
+             engine = DeepSpeedEngine(args=args,
+                                      model=model,
+                                      optimizer=optimizer,
+                                      model_parameters=model_parameters,
+                                      training_data=training_data,
+                                      lr_scheduler=lr_scheduler,
+                                      mpu=mpu,
+                                      dist_init_required=dist_init_required,
+                                      collate_fn=collate_fn,
+                                      config=config,
+                                      mesh_device=mesh_device,
+                                      config_class=config_class)
+     else:
+         assert mpu is None, "mpu must be None with pipeline parallelism"
+         mpu = model.mpu()
+         config_class = DeepSpeedConfig(config, mpu)
+         engine = PipelineEngine(args=args,
+                                 model=model,
+                                 optimizer=optimizer,
+                                 model_parameters=model_parameters,
+                                 training_data=training_data,
+                                 lr_scheduler=lr_scheduler,
+                                 mpu=mpu,
+                                 dist_init_required=dist_init_required,
+                                 collate_fn=collate_fn,
+                                 config=config,
+                                 config_class=config_class)
+
+     # Restore zero.Init context if necessary
+     zero.partition_parameters.restore_init_context()
+
+     return_items = [
+         engine,
+         engine.optimizer,
+         engine.training_dataloader,
+         engine.lr_scheduler,
+     ]
+     return tuple(return_items)
+
+
+ def _add_core_arguments(parser):
+     r"""Helper (internal) function to update an argument parser with an argument group of the core DeepSpeed arguments.
+         The core set of DeepSpeed arguments include the following:
+         1) --deepspeed: boolean flag to enable DeepSpeed
+         2) --deepspeed_config <json file path>: path of a json configuration file to configure DeepSpeed runtime.
+
+         This is a helper function to the public add_config_arguments()
+
+     Arguments:
+         parser: argument parser
+     Return:
+         parser: Updated Parser
+     """
+     group = parser.add_argument_group('DeepSpeed', 'DeepSpeed configurations')
+
+     group.add_argument('--deepspeed',
+                        default=False,
+                        action='store_true',
+                        help='Enable DeepSpeed (helper flag for user code, no impact on DeepSpeed backend)')
+
+     group.add_argument('--deepspeed_config', default=None, type=str, help='DeepSpeed json configuration file.')
+
+     group.add_argument('--deepscale',
+                        default=False,
+                        action='store_true',
+                        help='Deprecated enable DeepSpeed (helper flag for user code, no impact on DeepSpeed backend)')
+
+     group.add_argument('--deepscale_config',
+                        default=None,
+                        type=str,
+                        help='Deprecated DeepSpeed json configuration file.')
+
+     return parser
+
+
+ def add_config_arguments(parser):
+     r"""Update the argument parser to enabling parsing of DeepSpeed command line arguments.
+         The set of DeepSpeed arguments include the following:
+         1) --deepspeed: boolean flag to enable DeepSpeed
+         2) --deepspeed_config <json file path>: path of a json configuration file to configure DeepSpeed runtime.
+
+     Arguments:
+         parser: argument parser
+     Return:
+         parser: Updated Parser
+     """
+     parser = _add_core_arguments(parser)
+
+     return parser
+
+
+ def default_inference_config():
+     """
+     Return a default DeepSpeed inference configuration dictionary.
+     """
+     return DeepSpeedInferenceConfig().dict()
+
+
+ def init_inference(model, config=None, **kwargs):
+     """Initialize the DeepSpeed InferenceEngine.
+
+     Description: all four cases are valid and supported in DS init_inference() API.
+
+     # Case 1: user provides no config and no kwargs. Default config will be used.
+
+         .. code-block:: python
+
+             generator.model = deepspeed.init_inference(generator.model)
+             string = generator("DeepSpeed is")
+             print(string)
+
+     # Case 2: user provides a config and no kwargs. User supplied config will be used.
+
+         .. code-block:: python
+
+             generator.model = deepspeed.init_inference(generator.model, config=config)
+             string = generator("DeepSpeed is")
+             print(string)
+
+     # Case 3: user provides no config and uses keyword arguments (kwargs) only.
+
+         .. code-block:: python
+
+             generator.model = deepspeed.init_inference(generator.model,
+                                                        tensor_parallel={"tp_size": world_size},
+                                                        dtype=torch.half,
+                                                        replace_with_kernel_inject=True)
+             string = generator("DeepSpeed is")
+             print(string)
+
+     # Case 4: user provides config and keyword arguments (kwargs). Both config and kwargs are merged and kwargs take precedence.
+
+         .. code-block:: python
+
+             generator.model = deepspeed.init_inference(generator.model, config={"dtype": torch.half}, replace_with_kernel_inject=True)
+             string = generator("DeepSpeed is")
+             print(string)
+
+     Arguments:
+         model: Required: original nn.module object without any wrappers
+
+         config: Optional: instead of arguments, you can pass in a DS inference config dict or path to JSON file
+
+     Returns:
+         A deepspeed.InferenceEngine wrapped model.
+     """
+     log_dist("DeepSpeed info: version={}, git-hash={}, git-branch={}".format(__version__, __git_hash__,
+                                                                             __git_branch__),
+              ranks=[0])
+
+     # Load config_dict from config first
+     if config is None:
+         config = {}
+     if isinstance(config, str):
+         with open(config, "r") as f:
+             config_dict = json.load(f)
+     elif isinstance(config, dict):
+         config_dict = config
+     else:
+         raise ValueError(f"'config' argument expected string or dictionary, got {type(config)}")
+
+     # Update with values from kwargs, ensuring no conflicting overlap between config and kwargs
+     overlap_keys = set(config_dict.keys()).intersection(kwargs.keys())
+     # If there is overlap, error out if values are different
+     for key in overlap_keys:
+         if config_dict[key] != kwargs[key]:
+             raise ValueError(f"Conflicting argument '{key}' in 'config':{config_dict[key]} and kwargs:{kwargs[key]}")
+     config_dict.update(kwargs)
+
+     ds_inference_config = DeepSpeedInferenceConfig(**config_dict)
+
+     engine = InferenceEngine(model, config=ds_inference_config)
+
+     return engine
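The config/kwargs merge rule in `init_inference` above (overlapping keys must agree, otherwise an error; then kwargs are merged in) can be exercised standalone. A minimal sketch, where `merge_inference_config` is a hypothetical helper name used only for illustration:

```python
# Standalone sketch of the config/kwargs merge logic from init_inference above.
# `merge_inference_config` is a hypothetical name, not a DeepSpeed API.
def merge_inference_config(config_dict, **kwargs):
    # Keys present in both the config dict and kwargs must carry the same value.
    overlap_keys = set(config_dict).intersection(kwargs)
    for key in overlap_keys:
        if config_dict[key] != kwargs[key]:
            raise ValueError(f"Conflicting argument '{key}': {config_dict[key]} vs {kwargs[key]}")
    # Non-conflicting kwargs are merged on top of the config dict.
    merged = dict(config_dict)
    merged.update(kwargs)
    return merged

print(merge_inference_config({"dtype": "fp16"}, replace_with_kernel_inject=True))
```

Note that the real code raises only on *conflicting* values, so passing the same setting in both places is allowed.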
evalkit_tf446/lib/python3.10/site-packages/deepspeed/constants.py ADDED
@@ -0,0 +1,21 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ import os
+ from datetime import timedelta
+
+ #############################################
+ # Torch distributed constants
+ #############################################
+ TORCH_DISTRIBUTED_DEFAULT_PORT = 29500
+
+ # Default process group wide timeout, if applicable.
+ # This only applies to the gloo and nccl backends
+ # (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).
+ # To make an attempt at backwards compatibility with THD, we use an
+ # extraordinarily high default timeout, given that THD did not have timeouts.
+ default_pg_timeout = timedelta(minutes=int(os.getenv("DEEPSPEED_TIMEOUT", default=30)))
+ INFERENCE_GENERIC_MODE = 'generic'
+ INFERENCE_SPECIALIZED_MODE = 'specialized'
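As the constants above show, the process-group timeout is derived from the `DEEPSPEED_TIMEOUT` environment variable, read as a number of minutes with a default of 30. A self-contained sketch of that computation (the `45` here is just an example value):

```python
import os
from datetime import timedelta

# Mirrors how constants.py above derives default_pg_timeout:
# DEEPSPEED_TIMEOUT is interpreted as minutes, defaulting to 30.
os.environ["DEEPSPEED_TIMEOUT"] = "45"  # example: raise the timeout to 45 minutes
default_pg_timeout = timedelta(minutes=int(os.getenv("DEEPSPEED_TIMEOUT", default=30)))
print(default_pg_timeout)  # 0:45:00
```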
evalkit_tf446/lib/python3.10/site-packages/deepspeed/env_report.py ADDED
@@ -0,0 +1,195 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ import os
+ import torch
+ import deepspeed
+ import subprocess
+ import argparse
+ from .ops.op_builder.all_ops import ALL_OPS
+ from .git_version_info import installed_ops, torch_info, accelerator_name
+ from deepspeed.accelerator import get_accelerator
+
+ GREEN = '\033[92m'
+ RED = '\033[91m'
+ YELLOW = '\033[93m'
+ END = '\033[0m'
+ SUCCESS = f"{GREEN} [SUCCESS] {END}"
+ OKAY = f"{GREEN}[OKAY]{END}"
+ WARNING = f"{YELLOW}[WARNING]{END}"
+ FAIL = f'{RED}[FAIL]{END}'
+ INFO = '[INFO]'
+
+ color_len = len(GREEN) + len(END)
+ okay = f"{GREEN}[OKAY]{END}"
+ warning = f"{YELLOW}[WARNING]{END}"
+
+
+ def op_report(verbose=True):
+     max_dots = 23
+     max_dots2 = 11
+     h = ["op name", "installed", "compatible"]
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+     print("DeepSpeed C++/CUDA extension op report")
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+
+     print("NOTE: Ops not installed will be just-in-time (JIT) compiled at\n"
+           "      runtime if needed. Op compatibility means that your system\n"
+           "      meet the required dependencies to JIT install the op.")
+
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+     print("JIT compiled ops requires ninja")
+     ninja_status = OKAY if ninja_installed() else FAIL
+     print('ninja', "." * (max_dots - 5), ninja_status)
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+     print(h[0], "." * (max_dots - len(h[0])), h[1], "." * (max_dots2 - len(h[1])), h[2])
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+     installed = f"{GREEN}[YES]{END}"
+     no = f"{YELLOW}[NO]{END}"
+     for op_name, builder in ALL_OPS.items():
+         dots = "." * (max_dots - len(op_name))
+         is_compatible = OKAY if builder.is_compatible(verbose) else no
+         is_installed = installed if installed_ops.get(op_name,
+                                                       False) and accelerator_name == get_accelerator()._name else no
+         dots2 = '.' * ((len(h[1]) + (max_dots2 - len(h[1]))) - (len(is_installed) - color_len))
+         print(op_name, dots, is_installed, dots2, is_compatible)
+     print("-" * (max_dots + max_dots2 + len(h[0]) + len(h[1])))
+
+
+ def ninja_installed():
+     try:
+         import ninja  # noqa: F401 # type: ignore
+     except ImportError:
+         return False
+     return True
+
+
+ def nvcc_version():
+     import torch.utils.cpp_extension
+     cuda_home = torch.utils.cpp_extension.CUDA_HOME
+     if cuda_home is None:
+         return f"{RED} [FAIL] cannot find CUDA_HOME via torch.utils.cpp_extension.CUDA_HOME={torch.utils.cpp_extension.CUDA_HOME} {END}"
+     try:
+         output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"], universal_newlines=True)
+     except FileNotFoundError:
+         return f"{RED} [FAIL] nvcc missing {END}"
+     output_split = output.split()
+     release_idx = output_split.index("release")
+     release = output_split[release_idx + 1].replace(',', '').split(".")
+     return ".".join(release)
+
+
+ def installed_cann_path():
+     if "ASCEND_HOME_PATH" in os.environ or os.path.exists(os.environ["ASCEND_HOME_PATH"]):
+         return os.environ["ASCEND_HOME_PATH"]
+     return None
+
+
+ def installed_cann_version():
+     import re
+     ascend_path = installed_cann_path()
+     if ascend_path is None:
+         return f"CANN_HOME does not exist, unable to compile NPU op(s)"
+     cann_version = ""
+     for dirpath, _, filenames in os.walk(os.path.realpath(ascend_path)):
+         if cann_version:
+             break
+         install_files = [file for file in filenames if re.match(r"ascend_.*_install\.info", file)]
+         if install_files:
+             filepath = os.path.join(dirpath, install_files[0])
+             with open(filepath, "r") as f:
+                 for line in f:
+                     if line.find("version") != -1:
+                         cann_version = line.strip().split("=")[-1]
+                         break
+     return cann_version
+
+
+ def get_shm_size():
+     try:
+         shm_stats = os.statvfs('/dev/shm')
+     except (OSError, FileNotFoundError, ValueError, AttributeError):
+         return "UNKNOWN", None
+
+     shm_size = shm_stats.f_frsize * shm_stats.f_blocks
+     shm_hbytes = human_readable_size(shm_size)
+     warn = []
+     if shm_size < 512 * 1024**2:
+         warn.append(
+             f" {YELLOW} [WARNING] /dev/shm size might be too small, if running in docker increase to at least --shm-size='1gb' {END}"
+         )
+         if get_accelerator().communication_backend_name() == "nccl":
+             warn.append(
+                 f" {YELLOW} [WARNING] see more details about NCCL requirements: https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#sharing-data {END}"
+             )
+     return shm_hbytes, warn
+
+
+ def human_readable_size(size):
+     units = ['B', 'KB', 'MB', 'GB', 'TB']
+     i = 0
+     while size >= 1024 and i < len(units) - 1:
+         size /= 1024
+         i += 1
+     return f'{size:.2f} {units[i]}'
+
+
+ def debug_report():
+     max_dots = 33
+
+     report = [("torch install path", torch.__path__), ("torch version", torch.__version__),
+               ("deepspeed install path", deepspeed.__path__),
+               ("deepspeed info", f"{deepspeed.__version__}, {deepspeed.__git_hash__}, {deepspeed.__git_branch__}")]
+     if get_accelerator().device_name() == 'cuda':
+         hip_version = getattr(torch.version, "hip", None)
+         report.extend([("torch cuda version", torch.version.cuda), ("torch hip version", hip_version),
+                        ("nvcc version", (None if hip_version else nvcc_version())),
+                        ("deepspeed wheel compiled w.", f"torch {torch_info['version']}, " +
+                         (f"hip {torch_info['hip_version']}" if hip_version else f"cuda {torch_info['cuda_version']}"))
+                        ])
+     elif get_accelerator().device_name() == 'npu':
+         import torch_npu
+         report.extend([("deepspeed wheel compiled w.", f"torch {torch_info['version']}"),
+                        ("torch_npu install path", torch_npu.__path__), ("torch_npu version", torch_npu.__version__),
+                        ("ascend_cann version", installed_cann_version())])
+     else:
+         report.extend([("deepspeed wheel compiled w.", f"torch {torch_info['version']} ")])
+
+     report.append(("shared memory (/dev/shm) size", get_shm_size()))
+
+     print("DeepSpeed general environment info:")
+     for name, value in report:
+         warns = []
+         if isinstance(value, tuple):
+             value, warns = value
+         print(name, "." * (max_dots - len(name)), value)
+         if warns:
+             for warn in warns:
+                 print(warn)
+
+
+ def parse_arguments():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--hide_operator_status',
+                         action='store_true',
+                         help='Suppress display of installation and compatibility statuses of DeepSpeed operators. ')
+     parser.add_argument('--hide_errors_and_warnings', action='store_true', help='Suppress warning and error messages.')
+     args = parser.parse_args()
+     return args
+
+
+ def main(hide_operator_status=False, hide_errors_and_warnings=False):
+     if not hide_operator_status:
+         op_report(verbose=not hide_errors_and_warnings)
+     debug_report()
+
+
+ def cli_main():
+     args = parse_arguments()
+     main(hide_operator_status=args.hide_operator_status, hide_errors_and_warnings=args.hide_errors_and_warnings)
+
+
+ if __name__ == "__main__":
+     main()
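The `human_readable_size` helper above formats a byte count by repeatedly dividing by 1024 until the value fits the next unit. A self-contained copy, shown with a sample conversion:

```python
# Self-contained copy of the human_readable_size helper from env_report.py above.
def human_readable_size(size):
    units = ['B', 'KB', 'MB', 'GB', 'TB']
    i = 0
    # Divide by 1024 until the value drops below 1024 or we run out of units.
    while size >= 1024 and i < len(units) - 1:
        size /= 1024
        i += 1
    return f'{size:.2f} {units[i]}'

print(human_readable_size(512 * 1024**2))  # 512.00 MB
```

This is the same threshold (512 MB of `/dev/shm`) that `get_shm_size` above uses to decide whether to emit the docker `--shm-size` warning.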
evalkit_tf446/lib/python3.10/site-packages/deepspeed/git_version_info.py ADDED
@@ -0,0 +1,31 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0
+
+ # DeepSpeed Team
+
+ try:
+     # This is populated by setup.py
+     from .git_version_info_installed import *  # noqa: F401 # type: ignore
+ except ModuleNotFoundError:
+     import os
+     if os.path.isfile('version.txt'):
+         # Will be missing from checkouts that haven't been installed (e.g., readthedocs)
+         version = open('version.txt', 'r').read().strip()
+     else:
+         version = "0.0.0"
+     git_hash = '[none]'
+     git_branch = '[none]'
+
+     from .ops.op_builder.all_ops import ALL_OPS
+     installed_ops = dict.fromkeys(ALL_OPS.keys(), False)
+     accelerator_name = ""
+     torch_info = {'version': "0.0", "cuda_version": "0.0", "hip_version": "0.0"}
+
+ # compatible_ops list is recreated for each launch
+ from .ops.op_builder.all_ops import ALL_OPS
+
+ compatible_ops = dict.fromkeys(ALL_OPS.keys(), False)
+ for op_name, builder in ALL_OPS.items():
+     op_compatible = builder.is_compatible()
+     compatible_ops[op_name] = op_compatible
+ compatible_ops["deepspeed_not_implemented"] = False
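The fallback branch above reads `version.txt` when the installed metadata module is missing, defaulting to `"0.0.0"` otherwise. That file-or-default pattern can be sketched standalone (`read_version` is a hypothetical name; the temp-file path is illustrative):

```python
import os
import tempfile

# Sketch of the version fallback in git_version_info.py above: read the version
# file when it exists, otherwise report "0.0.0". Hypothetical helper name.
def read_version(path):
    if os.path.isfile(path):
        with open(path, 'r') as f:
            return f.read().strip()
    return "0.0.0"

with tempfile.TemporaryDirectory() as d:
    vfile = os.path.join(d, "version.txt")
    with open(vfile, "w") as f:
        f.write("0.15.4\n")
    print(read_version(vfile))                          # 0.15.4
    print(read_version(os.path.join(d, "missing.txt")))  # 0.0.0
```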
evalkit_tf446/lib/python3.10/site-packages/deepspeed/git_version_info_installed.py ADDED
@@ -0,0 +1,6 @@
+ version='0.15.4'
+ git_hash='unknown'
+ git_branch='unknown'
+ installed_ops={'async_io': False, 'fused_adam': False, 'cpu_adam': False, 'cpu_adagrad': False, 'cpu_lion': False, 'evoformer_attn': False, 'fp_quantizer': False, 'fused_lamb': False, 'fused_lion': False, 'gds': False, 'transformer_inference': False, 'inference_core_ops': False, 'cutlass_ops': False, 'quantizer': False, 'ragged_device_ops': False, 'ragged_ops': False, 'random_ltd': False, 'sparse_attn': False, 'spatial_inference': False, 'transformer': False, 'stochastic_transformer': False}
+ accelerator_name='cuda'
+ torch_info={'version': '0.0', 'bf16_support': False, 'cuda_version': '0.0', 'nccl_version': '0.0', 'hip_version': '0.0'}
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/__init__.py ADDED
@@ -0,0 +1,7 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from .transformers.ds_transformer import DeepSpeedTransformerInference
+ from .transformers.clip_encoder import DSClipEncoder
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__init__.py ADDED
@@ -0,0 +1,5 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team
+ '''Copyright The Microsoft DeepSpeed Team'''
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/clip_encoder.cpython-310.pyc ADDED
Binary file (2.8 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_bert.cpython-310.pyc ADDED
Binary file (880 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_bloom.cpython-310.pyc ADDED
Binary file (884 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_gpt.cpython-310.pyc ADDED
Binary file (876 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_llama2.cpython-310.pyc ADDED
Binary file (1.66 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_megatron_gpt.cpython-310.pyc ADDED
Binary file (910 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_opt.cpython-310.pyc ADDED
Binary file (876 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/__pycache__/ds_transformer.cpython-310.pyc ADDED
Binary file (5.59 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/clip_encoder.py ADDED
@@ -0,0 +1,77 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ import torch
+ from deepspeed.accelerator import get_accelerator
+ from ..features.cuda_graph import CUDAGraph


+ class DSClipEncoder(CUDAGraph, torch.nn.Module):

+     def __init__(self, enc, enable_cuda_graph=False):
+         super().__init__(enable_cuda_graph=enable_cuda_graph)
+         enc.text_model._build_causal_attention_mask = self._build_causal_attention_mask
+         self.enc = enc
+         self.device = self.enc.device
+         self.dtype = self.enc.dtype
+         self.cuda_graph_created = [False, False]
+         self.static_inputs = [None, None]
+         self.static_kwargs = [None, None]
+         self.static_output = [None, None]
+         self._cuda_graphs = [None, None]
+         self.iter = 0
+         self.config = self.enc.config

+     def _build_causal_attention_mask(self, bsz, seq_len, dtype):
+         mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype, device=get_accelerator().current_device_name())
+         mask.fill_(torch.tensor(torch.finfo(dtype).min))
+         mask.triu_(1)
+         mask = mask.unsqueeze(1)
+         return mask

+     def _graph_replay(self, *inputs, **kwargs):
+         for i in range(len(inputs)):
+             if torch.is_tensor(inputs[i]):
+                 self.static_inputs[self.iter][i].copy_(inputs[i])
+         for k in kwargs:
+             if torch.is_tensor(kwargs[k]):
+                 self.static_kwargs[self.iter][k].copy_(kwargs[k])
+         get_accelerator().replay_graph(self._cuda_graphs[self.iter])
+         return self.static_output[self.iter]

+     def forward(self, *inputs, **kwargs):
+         if self.enable_cuda_graph:
+             if self.cuda_graph_created[self.iter]:
+                 outputs = self._graph_replay(*inputs, **kwargs)
+             else:
+                 self._create_cuda_graph(*inputs, **kwargs)
+                 outputs = self._graph_replay(*inputs, **kwargs)
+             self.iter = (self.iter + 1) % 2
+             return outputs
+         else:
+             return self.enc(*inputs, **kwargs)

+     def _create_cuda_graph(self, *inputs, **kwargs):
+         # warmup to create the workspace and cublas handle
+         cuda_stream = torch.cuda.Stream()
+         cuda_stream.wait_stream(torch.cuda.current_stream())
+         with torch.cuda.stream(cuda_stream):
+             for i in range(3):
+                 ret = self._forward(*inputs, **kwargs)
+         torch.cuda.current_stream().wait_stream(cuda_stream)

+         # create cuda_graph and assign static_inputs and static_outputs
+         self._cuda_graphs[self.iter] = get_accelerator().create_graph()
+         self.static_inputs[self.iter] = inputs
+         self.static_kwargs[self.iter] = kwargs

+         with get_accelerator().capture_to_graph(self._cuda_graphs[self.iter]):
+             self.static_output[self.iter] = self._forward(*self.static_inputs[self.iter],
+                                                           **self.static_kwargs[self.iter])

+         self.cuda_graph_created[self.iter] = True

+     def _forward(self, *inputs, **kwargs):
+         return self.enc(*inputs, **kwargs)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_base.py ADDED
@@ -0,0 +1,15 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ import torch.nn as nn


+ class DeepSpeedTransformerBase(nn.Module):

+     def __init__(self):
+         pass

+ # this would be the new clean base class that will replace DeepSpeedTransformerInference.
+ # we currently don't know what this will look like but keep it here as a placeholder.
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_bert.py ADDED
@@ -0,0 +1,20 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedBERTInference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed BERT Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_bloom.py ADDED
@@ -0,0 +1,20 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedBloomInference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed Bloom Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_gpt.py ADDED
@@ -0,0 +1,20 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedGPTInference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed GPT Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_llama2.py ADDED
@@ -0,0 +1,58 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ import torch
+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedLlama2Inference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed Llama2 Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)

+     def forward(self, *args, **kwargs):

+         input = args[0]
+         input_mask = None
+         get_present = True

+         self.allocate_workspace(input.size())

+         # We set the prev key/value to None when there is a prompt
+         if input.shape[1] > 1:
+             self.layer_past = None
+         layer_past = self.layer_past

+         input_type = input.dtype

+         if (self.config.dtype in [torch.float16, torch.bfloat16, torch.int8]) \
+             and input.dtype == torch.float:
+             target_dtype = torch.half if self.config.dtype == torch.int8 else self.config.dtype
+             input = input.to(target_dtype)

+         with torch.no_grad():
+             attention_output, key, value, context_outputtn_ctx, inp_norm = \
+                 self.attention(input,
+                                input_mask,
+                                None,
+                                layer_past,
+                                get_present,
+                                None, None, None,
+                                self.norm_w,
+                                self.norm_b,
+                                None)
+             self.layer_past = (key, value)
+             output = self.mlp(attention_output, input, inp_norm, self.attention.attn_ob)

+         output = output.to(input_type)
+         return output
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_megatron_gpt.py ADDED
@@ -0,0 +1,20 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedMegatronGPTInference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed Megatron GPT Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_opt.py ADDED
@@ -0,0 +1,20 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ from deepspeed.model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference


+ class DeepSpeedOPTInference(DeepSpeedTransformerInference):
+     """Initialize the DeepSpeed OPT Transformer Layer.
+     """

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
evalkit_tf446/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py ADDED
@@ -0,0 +1,191 @@
+ # Copyright (c) Microsoft Corporation.
+ # SPDX-License-Identifier: Apache-2.0

+ # DeepSpeed Team

+ import torch
+ import torch.nn as nn
+ from deepspeed import comm as dist
+ from deepspeed.ops.transformer.inference.op_binding.layer_norm import LayerNormOp
+ from deepspeed.utils.logging import log_dist

+ from deepspeed.ops.transformer.inference.ds_mlp import DeepSpeedMLP
+ from deepspeed.ops.transformer.inference.ds_attention import DeepSpeedSelfAttention, BloomSelfAttention
+ from deepspeed.ops.transformer.inference.op_binding.workspace import WorkspaceOp
+ from deepspeed.accelerator import get_accelerator
+ import deepspeed
+ if deepspeed.HAS_TRITON:
+     from deepspeed.ops.transformer.inference.triton.mlp import TritonMLP
+     from deepspeed.ops.transformer.inference.triton.attention import TritonSelfAttention


+ class DeepSpeedTransformerInference(nn.Module):
+     """Initialize the DeepSpeed Transformer Layer.
+         Arguments:
+             layer_id: The layer index starting from 0, e.g. if model has 24 transformer layers,
+                 layer_id will be 0,1,2...23 when each layer object is instantiated
+             config: An object of DeepSpeedInferenceConfig
+             mp_group: Model parallelism group initialized on the modeling side.
+             quantize_scales: This argument groups all the layers' scales used for quantization
+             quantize_groups: Number of groups used for quantizing the model
+             merge_count: Shows the number of model-parallel checkpoints merged before running inference.
+                 We use this argument to control the quantization scale for the model parameters if a bigger
+                 quantize-grouping than 1 is used.
+             mlp_extra_grouping: This flag is used to show a 2x higher number of groups used for the MLP part
+                 of a Transformer layer. We use this feature for quantization to reduce the convergence impact
+                 for specific downstream tasks.
+     """
+     layer_id = 0
+     workspace = None

+     def __init__(self,
+                  config,
+                  mp_group=None,
+                  quantize_scales=None,
+                  quantize_groups=1,
+                  merge_count=1,
+                  mlp_extra_grouping=False):
+         super(DeepSpeedTransformerInference, self).__init__()

+         self.config = config
+         self.config.layer_id = DeepSpeedTransformerInference.layer_id
+         DeepSpeedTransformerInference.layer_id += 1

+         data_type = torch.half if self.config.dtype == torch.int8 else self.config.dtype

+         if DeepSpeedTransformerInference.layer_id == 1:
+             log_dist(f"DeepSpeed-Inference config: {self.config.__dict__}", [0])
+             if deepspeed.HAS_TRITON and self.config.use_triton:
+                 log_dist(f"Injecting Triton kernels ...", [0])

+         if self.config.bigscience_bloom:
+             self.attention = BloomSelfAttention(self.config, mp_group, quantize_scales, quantize_groups, merge_count)
+             assert not self.config.use_triton
+         else:
+             if deepspeed.HAS_TRITON and self.config.use_triton:
+                 self.attention = TritonSelfAttention(self.config)
+             else:
+                 self.attention = DeepSpeedSelfAttention(self.config, mp_group, quantize_scales, quantize_groups,
+                                                         merge_count)

+         if deepspeed.HAS_TRITON and self.config.use_triton:
+             self.mlp = TritonMLP(self.config)
+         else:
+             self.mlp = DeepSpeedMLP(self.config, mp_group, quantize_scales, quantize_groups, merge_count,
+                                     mlp_extra_grouping)

+         device = get_accelerator().current_device_name()  # if config.bigscience_bloom else 'cpu'
+         if self.config.set_empty_params:
+             self.norm_w = None
+             self.norm_b = None
+         else:
+             self.norm_w = nn.Parameter(torch.empty(self.config.hidden_size, dtype=data_type, device=device),
+                                        requires_grad=False)
+             self.norm_b = nn.Parameter(torch.empty(self.config.hidden_size, dtype=data_type, device=device),
+                                        requires_grad=False)
+         self.layer_past = None
+         self.layer_norm = LayerNormOp()
+         if DeepSpeedTransformerInference.workspace is None:
+             DeepSpeedTransformerInference.workspace = WorkspaceOp(self.config)
+         self._should_allocate_workspace = True

+     def allocate_workspace(self, size):
+         # Allocate memory only on first layer forward
+         if self.config.layer_id == 0 and self._should_allocate_workspace:
+             DeepSpeedTransformerInference.workspace.allocate_workspace(
+                 self.config.hidden_size, self.config.heads, size[1], size[0], DeepSpeedTransformerInference.layer_id,
+                 self.config.mp_size, self.config.bigscience_bloom,
+                 dist.get_rank() if dist.is_initialized() else 0, self.config.max_out_tokens,
+                 self.config.min_out_tokens)
+             self._should_allocate_workspace = False

+     @classmethod
+     def reset_cache(cls):
+         if cls.workspace is not None:
+             cls.workspace.reset_cache()

+     def forward(
+             self,
+             input=None,
+             input_mask=None,
+             attention_mask=None,
+             attn_mask=None,
+             head_mask=None,
+             layer_past=None,
+             get_key_value=False,
+             get_present=False,
+             encoder_output=None,
+             enc_dec_attn_mask=None,
+             x=None,
+             encoder_hidden_states=None,
+             encoder_attention_mask=None,
+             use_cache=False,
+             alibi=None,
+             output_attentions=False,
+             # TODO(arashb): 'layer_head_mask' and 'past_key_value' are only added to satisfy the OPT models API.
+             # This needs to be redesigned later!
+             layer_head_mask=None,
+             past_key_value=None,
+             **kwargs):

+         if x is not None:
+             input = x
+         if "hidden_states" in kwargs:
+             input = kwargs["hidden_states"]

+         input_mask = (input_mask if attn_mask is None else attn_mask) if attention_mask is None else attention_mask

+         self.allocate_workspace(input.size())

+         get_present = (get_present or get_key_value or use_cache)
+         input_mask = input_mask if attention_mask is None else attention_mask

+         # We set the prev key/value to None when there is a prompt
+         if input.shape[1] > 1:
+             self.layer_past = None
+         layer_past = layer_past if layer_past is not None else self.layer_past
+         head_mask = layer_head_mask if layer_head_mask is not None else head_mask

+         attn_mask = None
+         if isinstance(input, tuple):
+             attn_mask = input[1]
+             input = input[0]
+         input_type = input.dtype

+         if (self.config.dtype in [torch.float16, torch.bfloat16, torch.int8]) \
+             and input.dtype == torch.float:
+             target_dtype = torch.half if self.config.dtype == torch.int8 else self.config.dtype
+             input = input.to(target_dtype)

+         with torch.no_grad():
+             attention_output, key, value, context_outputtn_ctx, inp_norm = \
+                 self.attention(input,
+                                input_mask,
+                                head_mask,
+                                layer_past,
+                                get_present,
+                                encoder_hidden_states,
+                                encoder_attention_mask,
+                                output_attentions,
+                                self.norm_w,
+                                self.norm_b,
+                                alibi,
+                                **kwargs)

+             presents = (key, value)
+             self.layer_past = presents if layer_past is None else None
+             output = self.mlp(attention_output, input, inp_norm, self.attention.attn_ob)

+             if not self.config.pre_layer_norm:
+                 output = self.layer_norm(output, self.norm_w, self.norm_b, self.config.epsilon)

+             output = output.to(input_type)
+         if get_present:
+             output = (output, presents)

+         if self.config.return_single_tuple:
+             return (output, )
+         elif self.config.return_tuple:
+             return output if type(output) is tuple else (output, attn_mask)
+         else:
+             return output
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (579 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/coreviews.cpython-310.pyc ADDED
Binary file (16.4 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/digraph.cpython-310.pyc ADDED
Binary file (46.6 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/filters.cpython-310.pyc ADDED
Binary file (5.02 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/function.cpython-310.pyc ADDED
Binary file (39.4 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/graph.cpython-310.pyc ADDED
Binary file (69.8 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/graphviews.cpython-310.pyc ADDED
Binary file (8.13 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/multidigraph.cpython-310.pyc ADDED
Binary file (36 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/multigraph.cpython-310.pyc ADDED
Binary file (46.3 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/__pycache__/reportviews.cpython-310.pyc ADDED
Binary file (49.2 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__init__.py ADDED
File without changes
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (181 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/dispatch_interface.cpython-310.pyc ADDED
Binary file (5.5 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/historical_tests.cpython-310.pyc ADDED
Binary file (14 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_coreviews.cpython-310.pyc ADDED
Binary file (13.4 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_digraph.cpython-310.pyc ADDED
Binary file (13.2 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_digraph_historical.cpython-310.pyc ADDED
Binary file (4.88 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_filters.cpython-310.pyc ADDED
Binary file (5.03 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_function.cpython-310.pyc ADDED
Binary file (27.5 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graph.cpython-310.pyc ADDED
Binary file (31.7 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graph_historical.cpython-310.pyc ADDED
Binary file (698 Bytes). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_graphviews.cpython-310.pyc ADDED
Binary file (13.5 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_multidigraph.cpython-310.pyc ADDED
Binary file (14.7 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_multigraph.cpython-310.pyc ADDED
Binary file (17.6 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_reportviews.cpython-310.pyc ADDED
Binary file (41.6 kB). View file
 
evalkit_tf446/lib/python3.10/site-packages/networkx/classes/tests/__pycache__/test_special.cpython-310.pyc ADDED
Binary file (5.17 kB). View file