Instructions for using Non-Residual-Prompting/GPT2-Large-Post-Transformation with libraries, inference providers, notebooks, and local apps.
How to use Non-Residual-Prompting/GPT2-Large-Post-Transformation with Transformers:
```python
# Load the model directly
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained(
    "Non-Residual-Prompting/GPT2-Large-Post-Transformation"
)
```
This "model" holds the weights for the positional-invariance transformation used in the paper *Fine-Grained Controllable Text Generation Using Non-Residual Prompting*. If you want to try it out, it is loaded automatically by the official GitHub repository linked below.
Paper: https://aclanthology.org/2022.acl-long.471
Official GitHub: https://github.com/FreddeFrallan/Non-Residual-Prompting
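For a sense of scale, the checkpoint builds on the GPT-2 Large architecture (1280-dimensional embeddings, 36 transformer layers, 20 attention heads), which works out to roughly 774M parameters. The arithmetic below is a back-of-envelope sketch of that count from the public GPT-2 Large hyperparameters, not a statement about the exact contents of this repository's weight files:

```python
# Back-of-envelope parameter count for GPT-2 Large, the base architecture
# this checkpoint builds on: n_embd=1280, n_layer=36, vocab=50257, n_ctx=1024.
n_embd, n_layer, vocab, n_ctx = 1280, 36, 50257, 1024

per_layer = (
    n_embd * 3 * n_embd + 3 * n_embd      # fused QKV projection + bias
    + n_embd * n_embd + n_embd            # attention output projection + bias
    + 2 * 2 * n_embd                      # two LayerNorms (weight + bias each)
    + n_embd * 4 * n_embd + 4 * n_embd    # MLP up-projection + bias
    + 4 * n_embd * n_embd + n_embd        # MLP down-projection + bias
)

total = (
    vocab * n_embd        # token embeddings
    + n_ctx * n_embd      # learned positional embeddings
    + n_layer * per_layer # transformer blocks
    + 2 * n_embd          # final LayerNorm
)
print(f"{total / 1e6:.0f}M parameters")  # ≈ 774M, matching GPT-2 Large
```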