---
license: apache-2.0
---

# OPT-1.3B Fine-tuned with PyTorch FSDP
This model is a fine-tuned version of facebook/opt-1.3b, trained with PyTorch's Fully Sharded Data Parallel (FSDP) for memory-efficient multi-GPU training on accessible hardware.
This model was fine-tuned using the arxiv-abstract-dataset on 2 × T4 16GB GPUs.
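The FSDP setup described above can be sketched as follows. This is a minimal illustration, not the exact training script: the stand-in `torch.nn.Transformer` replaces the OPT-1.3B model (which would normally be loaded via `transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")`), and the wrap-policy threshold is an assumed value, not one taken from this repository.

```python
import os
import functools

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy


def setup_fsdp_model(model: torch.nn.Module) -> FSDP:
    """Shard a model's parameters across the GPUs in the process group."""
    # Submodules above this parameter count become their own FSDP unit,
    # so their full parameters are gathered only while they are in use.
    wrap_policy = functools.partial(
        size_based_auto_wrap_policy, min_num_params=1_000_000
    )
    return FSDP(model, auto_wrap_policy=wrap_policy)


def main() -> None:
    # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK for each worker process.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Stand-in module; the actual run would load facebook/opt-1.3b here.
    model = torch.nn.Transformer(d_model=512, num_encoder_layers=2).cuda()
    model = setup_fsdp_model(model)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    # ... training loop: forward pass, loss, backward, optimizer.step() ...

    dist.destroy_process_group()


# Only run when launched under torchrun (which sets RANK).
if __name__ == "__main__" and os.environ.get("RANK") is not None:
    main()
```

On a 2-GPU machine such a script would be launched with something like `torchrun --nproc_per_node=2 train.py` (the script name is hypothetical).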
For detailed implementation, training procedures, and reproducibility instructions, please check out the project repository.