---
language: en
license: mit  # Adjust based on original model
tags:
- pruning
- sparse
- atropos
- code-generation
---

# gpt2-xl-structured_pruning

This is a pruned version of [gpt2-xl](https://huggingface.co/gpt2-xl), generated as part of the Atropos pruning exercise.

## Pruning Details

- **Strategy:** structured_pruning
- **Method:** PyTorch magnitude-based unstructured pruning
- **Generated by:** Atropos pruning exercise scripts

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2-xl-structured_pruning")
tokenizer = AutoTokenizer.from_pretrained("gpt2-xl-structured_pruning")
```

## About Atropos

[Atropos](https://github.com/its-not-rocket-science/atropos) estimates the ROI of pruning and quantization optimizations in LLM deployments. This model was pruned to validate Atropos projections against actual results.

## Citation

If you use this model, please cite the original model and Atropos:

```bibtex
@software{atropos,
  title = {Atropos: ROI Estimation for LLM Pruning},
  year  = {2026},
}
```
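
## How Magnitude-Based Pruning Works

The magnitude-based unstructured pruning named above zeroes the individual weights with the smallest absolute values, on the assumption that they contribute least to the model's output. The actual exercise scripts are not included here (in PyTorch this is typically done with `torch.nn.utils.prune.l1_unstructured`); the following is a minimal dependency-free sketch of the idea, with a hypothetical `magnitude_prune` helper:

```python
def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the `sparsity` fraction
    of entries in `weights` with the smallest absolute values.

    `weights` is a flat list of floats; a new list is returned, so the
    input is left unmodified.
    """
    n_prune = int(len(weights) * sparsity)
    # Rank weight indices by magnitude, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0  # pruned weights are set to exactly zero
    return pruned


w = [0.9, -0.05, 0.4, -0.7, 0.01, 0.3]
print(magnitude_prune(w, 0.5))
# → [0.9, 0.0, 0.4, -0.7, 0.0, 0.0]
```

Because the pruning is unstructured, the zeros land at arbitrary positions rather than removing whole rows, columns, or attention heads; the weight tensors keep their original shapes, which is why the model loads with the standard `from_pretrained` call shown above.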