---
license: apache-2.0
base_model: facebook/bart-base
tags:
- summarization
- bart
- fine-tuned
- lora_r32
- text-generation
language:
- en
datasets:
- cnn_dailymail
- xsum
pipeline_tag: summarization
---

# BART Fine-tuned for Summarization (lora_r32)

## Model Description

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for summarization tasks using the **lora_r32** fine-tuning strategy.
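
The card states only the LoRA rank (r=32); the remaining adapter hyperparameters are not listed. A minimal sketch of how such an adapter is typically attached with the `peft` library, where `lora_alpha`, `lora_dropout`, and `target_modules` are assumptions rather than confirmed values:

```python
from peft import LoraConfig, get_peft_model
from transformers import BartForConditionalGeneration

base = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Only r=32 is stated in this card; alpha, dropout, and target modules
# below are common choices for BART attention layers, not confirmed values.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```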

## Training Details

### Training Data
- CNN/DailyMail dataset
- XSum dataset
- Combined training on both datasets, with balancing across sources (a sketch follows below)
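
The exact balancing scheme is not specified in this card. One plausible way to build such a mix with the `datasets` library, assuming a simple 50/50 interleaving after aligning column names (dataset versions are also assumptions):

```python
from datasets import load_dataset, interleave_datasets

# Load both corpora (versions are assumptions; the card does not state them)
cnn = load_dataset("cnn_dailymail", "3.0.0", split="train")
xsum = load_dataset("EdinburghNLP/xsum", split="train")

# Align schemas so the two datasets can be interleaved
cnn = cnn.rename_columns({"article": "document", "highlights": "summary"})
cnn = cnn.select_columns(["document", "summary"])
xsum = xsum.select_columns(["document", "summary"])

# Sample from both sources with equal probability (hypothetical balancing)
combined = interleave_datasets([cnn, xsum], probabilities=[0.5, 0.5], seed=42)
```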

### Training Configuration

- **Learning Rate**: 5e-05
- **Batch Size**: 8
- **Epochs**: 3
- **Optimizer**: AdamW
- **Scheduler**: Cosine with warmup
- **Mixed Precision**: Enabled
- **Gradient Checkpointing**: Enabled
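
A minimal sketch of how these settings map onto `Seq2SeqTrainingArguments`; the `output_dir`, the warmup fraction, and the fp16 choice are assumptions, since the card does not state them:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./bart-lora-summarization",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    optim="adamw_torch",          # AdamW
    lr_scheduler_type="cosine",   # cosine schedule with warmup
    warmup_ratio=0.1,             # warmup fraction is an assumption
    fp16=True,                    # mixed precision (fp16 assumed; could be bf16)
    gradient_checkpointing=True,
)
```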

### Strategy Details
- **Total Parameters**: 145,908,480
- **Trainable Parameters**: 6,488,064
- **Trainable Ratio**: 4.45%
- **Model Size**: 556.6 MB
- **Training Time**: 15.9 minutes
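
The trainable ratio follows directly from the two parameter counts (6,488,064 / 145,908,480 ≈ 4.45%). A small helper to verify these numbers on any loaded model:

```python
def report_parameters(model):
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"total: {total:,} | trainable: {trainable:,} "
          f"({100 * trainable / total:.2f}%)")

# With the counts above: 100 * 6_488_064 / 145_908_480 ≈ 4.45
```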

## Usage

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

# Load model and tokenizer
tokenizer = BartTokenizer.from_pretrained("alansary/lora_r32-bart-summarization")
model = BartForConditionalGeneration.from_pretrained("alansary/lora_r32-bart-summarization")

# Example usage (BART takes the raw document directly; no "summarize:" task prefix)
text = "Your long text to summarize here..."
inputs = tokenizer.encode(text, return_tensors="pt", max_length=512, truncation=True)

# Generate summary
with torch.no_grad():
    summary_ids = model.generate(
        inputs, 
        max_length=128, 
        num_beams=4, 
        early_stopping=True,
        no_repeat_ngram_size=2
    )

summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```
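
Alternatively, the `pipeline` API wraps tokenization and generation in one call (assuming the repository hosts full merged weights, as the snippet above does):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="alansary/lora_r32-bart-summarization")

result = summarizer(text, max_length=128, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```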

## Model Performance

Performance metrics will be added once evaluation is complete.

## Limitations and Bias

This model inherits the limitations and biases present in the base BART model and training datasets. Users should be aware of potential biases in summarization outputs.

## Citation

If you use this model, please cite:

```bibtex
@misc{bart-finetuned-summarization-lora_r32,
  author       = {Your Name},
  title        = {BART Fine-tuned for Summarization (lora_r32)},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/alansary/lora_r32-bart-summarization}}
}
```