---
license: apache-2.0
tags:
- meta-learning
- lora
- checkpoints
- few-shot-learning
- llm
- qwen
library_name: peft
datasets:
- ARC
- HellaSwag
- BoolQ
- PIQA
- WinoGrande
- SocialIQA
---
# DeGAML-LLM Checkpoints
This repository contains pre-trained checkpoints for the generalization module of our proposed **DeGAML-LLM** framework, a meta-learning approach that decouples generalization and adaptation for Large Language Models.
## Links
- **Project Page**: [https://nitinvetcha.github.io/DeGAML-LLM/](https://nitinvetcha.github.io/DeGAML-LLM/)
- **GitHub Repository**: [https://github.com/nitinvetcha/DeGAML-LLM](https://github.com/nitinvetcha/DeGAML-LLM)
- **HuggingFace Profile**: [https://huggingface.co/Nitin2004](https://huggingface.co/Nitin2004)
## Available Checkpoints
All checkpoints are trained on **Qwen2.5-0.5B-Instruct** using LoRA adapters optimized with the DeGAML-LLM framework:
| Checkpoint Name | Dataset | Size |
|----------------|---------|------|
| `qwen0.5lora__ARC-c.pth` | ARC-Challenge | ~4.45 GB |
| `qwen0.5lora__ARC-e.pth` | ARC-Easy | ~4.45 GB |
| `qwen0.5lora__BoolQ.pth` | BoolQ | ~4.45 GB |
| `qwen0.5lora__HellaSwag.pth` | HellaSwag | ~4.45 GB |
| `qwen0.5lora__PIQA.pth` | PIQA | ~4.45 GB |
| `qwen0.5lora__SocialIQA.pth` | SocialIQA | ~4.45 GB |
| `qwen0.5lora__WinoGrande.pth` | WinoGrande | ~4.45 GB |
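
All filenames in the table follow the pattern `qwen0.5lora__<Dataset>.pth`. As a small illustrative sketch (the helper below is not part of the repository, just a convenience for scripting downloads), the filename can be built from the dataset identifier:

```python
# Illustrative helper (not part of the repository): build a checkpoint
# filename from the dataset identifiers listed in the table above.
DATASETS = ["ARC-c", "ARC-e", "BoolQ", "HellaSwag", "PIQA", "SocialIQA", "WinoGrande"]

def checkpoint_filename(dataset: str) -> str:
    """Return the repository filename for a given dataset's checkpoint."""
    if dataset not in DATASETS:
        raise ValueError(f"Unknown dataset: {dataset}")
    return f"qwen0.5lora__{dataset}.pth"

print(checkpoint_filename("BoolQ"))  # qwen0.5lora__BoolQ.pth
```

The returned string can be passed directly as the `filename` argument of `hf_hub_download` shown below.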
## Usage
### Download
```python
from huggingface_hub import hf_hub_download
# Download a specific checkpoint
checkpoint_path = hf_hub_download(
    repo_id="Nitin2004/DeGAML-LLM-checkpoints",
    filename="qwen0.5lora__ARC-c.pth",
)
```
### Load with PyTorch
```python
import torch
# Load the checkpoint on CPU; pass a CUDA device to map_location if desired
checkpoint = torch.load(checkpoint_path, map_location="cpu")
print(checkpoint.keys())
```
### Use with DeGAML-LLM
Refer to the [DeGAML-LLM repository](https://github.com/nitinvetcha/DeGAML-LLM) for detailed usage instructions on how to integrate these checkpoints with the framework.
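
As a minimal sketch of what that integration might involve, assuming the `.pth` file is a plain state dict whose LoRA adapter tensors carry `lora` in their key names (the peft convention; verify against the actual keys printed above), the adapter weights can be separated from any other entries before applying them:

```python
import torch

def extract_lora_weights(state_dict):
    """Filter a checkpoint state dict down to LoRA adapter tensors.

    Assumption (verify against the real checkpoint's keys): LoRA
    parameters carry 'lora' in their names, as is conventional for
    peft-style adapters.
    """
    return {k: v for k, v in state_dict.items() if "lora" in k.lower()}

# Toy example with made-up keys; with a real checkpoint, pass the dict
# returned by torch.load(checkpoint_path, map_location="cpu") instead.
toy = {
    "base_model.layers.0.self_attn.q_proj.lora_A.weight": torch.zeros(8, 16),
    "base_model.layers.0.self_attn.q_proj.weight": torch.zeros(16, 16),
}
lora_only = extract_lora_weights(toy)
print(sorted(lora_only))  # only the lora_A entry survives
```

The exact key layout and how the filtered weights are handed to the framework depend on the repository code, so treat this as a starting point rather than the canonical loader.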
## Performance
These checkpoints achieve state-of-the-art results on commonsense reasoning benchmarks when used with the DeGAML-LLM adaptation framework. See the [project page](https://nitinvetcha.github.io/DeGAML-LLM/) for complete benchmark results.
## Citation
If you use these checkpoints in your research, please cite:
```bibtex
@article{degaml-llm2025,
title={Decoupling Generalization and Adaptation in Meta-Learning for Large Language Models},
author={Vetcha, Nitin and Xu, Binqian and Liu, Dianbo},
year={2026}
}
```
## Contact
For questions or issues, please:
- Open an issue on [GitHub](https://github.com/nitinvetcha/DeGAML-LLM/issues)
- Contact: nitinvetcha@gmail.com
## License
Apache License 2.0. See [LICENSE](https://github.com/nitinvetcha/DeGAML-LLM/blob/main/LICENSE) for details.