Gradient calculation error during _backward.

#31
by PerAsperaAd

Hi Phi team,
While trying to compute training gradients for Phi-3-small-8k-instruct, I hit the following error:
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/u/ypan5/LESS/less/data_selection/get_info.py", line 221, in <module>
    collect_grads(dataloader,
  File "/u/ypan5/LESS/less/data_selection/collect_grad_reps.py", line 309, in collect_grads
    vectorized_grads = obtain_gradients(model, batch)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/u/ypan5/LESS/less/data_selection/collect_grad_reps.py", line 113, in obtain_gradients
    loss.backward()
  File "/u/ypan5/miniconda3/envs/mPhi3/lib/python3.11/site-packages/torch/_tensor.py", line 521, in backward
    torch.autograd.backward(
  File "/u/ypan5/miniconda3/envs/mPhi3/lib/python3.11/site-packages/torch/autograd/__init__.py", line 289, in backward
    _engine_run_backward(
  File "/u/ypan5/miniconda3/envs/mPhi3/lib/python3.11/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/u/ypan5/miniconda3/envs/mPhi3/lib/python3.11/site-packages/torch/autograd/function.py", line 306, in apply
    return user_fn(self, *args)
           ^^^^^^^^^^^^^^^^^^^^
  File "/projects/bdaj/ypan5/modules/transformers_modules/microsoft/Phi-3-small-8k-instruct/1535ae26fb4faada95c6950e8bc6e867cdad6b00/triton_flash_blocksparse_attn.py", line 904, in backward
    return _backward(ctx, do, *backward_layout)[:4]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/projects/bdaj/ypan5/modules/transformers_modules/microsoft/Phi-3-small-8k-instruct/1535ae26fb4faada95c6950e8bc6e867cdad6b00/triton_flash_blocksparse_attn.py", line 681, in _backward
    delta = torch.empty_like(l)
            ^^^^^^^^^^^^^^^^^^^
TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not NoneType
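
For what it's worth, the last frame reproduces in isolation: torch.empty_like requires an actual Tensor, so if the kernel never saved the softmax normalizer l during the forward pass, backward fails exactly like this (a minimal sketch, nothing model-specific):

import torch

l = None  # stands in for the normalizer the blocksparse kernel expected to save
delta = torch.empty_like(l)
# TypeError: empty_like(): argument 'input' (position 1) must be Tensor, not NoneType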

I am running the gradient calculation on a single GPU; any help is appreciated!
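
For context, the collection step boils down to roughly the following (a sketch of the obtain_gradients helper named in the traceback; the actual code in less/data_selection/collect_grad_reps.py differs, and model/batch are placeholders):

import torch

def obtain_gradients(model, batch):
    # One forward/backward pass; Phi-3-small routes attention through the
    # custom Triton blocksparse kernel, whose backward raises the error above.
    model.train()  # assumption: training mode, so activations are saved for backward
    loss = model(**batch).loss
    loss.backward()
    # Flatten all parameter gradients into a single vector.
    return torch.cat(
        [p.grad.reshape(-1) for p in model.parameters() if p.grad is not None]
    )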

PerAsperaAd changed discussion status to closed
