# SGD

Stochastic gradient descent (SGD) is a basic optimizer that minimizes a loss function by updating model parameters in the opposite direction of the gradient. Each update is computed on a mini-batch of data sampled randomly from the dataset.

bitsandbytes also supports momentum and Nesterov momentum, which accelerate SGD by adding a weighted average of past gradients to the current gradient.
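As a quick orientation, here is a minimal sketch of using the bitsandbytes SGD optimizer as a drop-in replacement for `torch.optim.SGD`; the toy model, shapes, and hyperparameter values are illustrative assumptions, not taken from this page.

```python
import torch
import bitsandbytes as bnb

# Toy model and data, for illustration only.
model = torch.nn.Linear(128, 64).cuda()
batch = torch.randn(32, 128, device="cuda")
target = torch.randn(32, 64, device="cuda")

# Drop-in replacement for torch.optim.SGD, here with Nesterov momentum.
optimizer = bnb.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True)

loss = torch.nn.functional.mse_loss(model(batch), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```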
## SGD[[api-class]][[bitsandbytes.optim.SGD]]

#### bitsandbytes.optim.SGD[[bitsandbytes.optim.SGD]]

[Source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L8)
`__init__(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, optim_bits=32, args=None, min_8bit_size=4096)` ([source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L9))

Base SGD optimizer.

**Parameters:**

- **params** (`torch.tensor`) -- The input parameters to optimize.
- **lr** (`float`) -- The learning rate.
- **momentum** (`float`, defaults to 0) -- The momentum value speeds up the optimizer by taking bigger steps.
- **dampening** (`float`, defaults to 0) -- The dampening value reduces the momentum of the optimizer.
- **weight_decay** (`float`, defaults to 0.0) -- The weight decay value for the optimizer.
- **nesterov** (`bool`, defaults to `False`) -- Whether to use Nesterov momentum.
- **optim_bits** (`int`, defaults to 32) -- The number of bits of the optimizer state.
- **args** (`object`, defaults to `None`) -- An object with additional arguments.
- **min_8bit_size** (`int`, defaults to 4096) -- The minimum number of elements of the parameter tensors for 8-bit optimization.
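A hedged sketch of how `optim_bits` and `min_8bit_size` interact on the base class: with `optim_bits=8`, the momentum state is expected to be quantized to 8 bits only for tensors of at least `min_8bit_size` elements, while smaller tensors keep 32-bit state. The layer sizes below are arbitrary examples.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),  # ~16.8M weight elements: above min_8bit_size, eligible for 8-bit state
    torch.nn.Linear(16, 16),      # 256 weight elements: below min_8bit_size, expected to stay 32-bit
).cuda()

# Base SGD with 8-bit optimizer state; in spirit, the same as SGD8bit below.
optimizer = bnb.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    optim_bits=8,
    min_8bit_size=4096,
)
```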
## SGD8bit[[bitsandbytes.optim.SGD8bit]]

#### bitsandbytes.optim.SGD8bit[[bitsandbytes.optim.SGD8bit]]

[Source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L59)
`__init__(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, args=None, min_8bit_size=4096)` ([source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L60))

8-bit SGD optimizer.

**Parameters:**

- **params** (`torch.tensor`) -- The input parameters to optimize.
- **lr** (`float`) -- The learning rate.
- **momentum** (`float`, defaults to 0) -- The momentum value speeds up the optimizer by taking bigger steps.
- **dampening** (`float`, defaults to 0) -- The dampening value reduces the momentum of the optimizer.
- **weight_decay** (`float`, defaults to 0.0) -- The weight decay value for the optimizer.
- **nesterov** (`bool`, defaults to `False`) -- Whether to use Nesterov momentum.
- **args** (`object`, defaults to `None`) -- An object with additional arguments.
- **min_8bit_size** (`int`, defaults to 4096) -- The minimum number of elements of the parameter tensors for 8-bit optimization.
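For concreteness, a small sketch of constructing `SGD8bit` directly; it takes the same arguments as the base class minus `optim_bits`. The model and values are placeholders.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8192, 8192).cuda()

# SGD8bit keeps the momentum buffer in 8 bits, which for large tensors
# should cut optimizer-state memory roughly 4x versus 32-bit state.
optimizer = bnb.optim.SGD8bit(model.parameters(), lr=0.01, momentum=0.9)
```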
## SGD32bit[[bitsandbytes.optim.SGD32bit]]

#### bitsandbytes.optim.SGD32bit[[bitsandbytes.optim.SGD32bit]]

[Source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L107)
`__init__(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, args=None, min_8bit_size=4096)` ([source](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/bitsandbytes/optim/sgd.py#L108))

32-bit SGD optimizer.

**Parameters:**

- **params** (`torch.tensor`) -- The input parameters to optimize.
- **lr** (`float`) -- The learning rate.
- **momentum** (`float`, defaults to 0) -- The momentum value speeds up the optimizer by taking bigger steps.
- **dampening** (`float`, defaults to 0) -- The dampening value reduces the momentum of the optimizer.
- **weight_decay** (`float`, defaults to 0.0) -- The weight decay value for the optimizer.
- **nesterov** (`bool`, defaults to `False`) -- Whether to use Nesterov momentum.
- **args** (`object`, defaults to `None`) -- An object with additional arguments.
- **min_8bit_size** (`int`, defaults to 4096) -- The minimum number of elements of the parameter tensors for 8-bit optimization.
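And a matching sketch for `SGD32bit`, which keeps full 32-bit optimizer state; one plausible use is as a baseline when checking that 8-bit state does not hurt a given model. The helper function and values below are illustrative, not part of the library.

```python
import torch
import bitsandbytes as bnb

def make_optimizer(model: torch.nn.Module, eight_bit: bool):
    # Same hyperparameters either way; only the state precision differs.
    cls = bnb.optim.SGD8bit if eight_bit else bnb.optim.SGD32bit
    return cls(model.parameters(), lr=0.01, momentum=0.9)

baseline = make_optimizer(torch.nn.Linear(1024, 1024).cuda(), eight_bit=False)
```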