Instructions for using Non-Residual-Prompting/GPT2-Large with the Transformers library.
How to use Non-Residual-Prompting/GPT2-Large with Transformers:
```python
# Load model directly
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("Non-Residual-Prompting/GPT2-Large")
```
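Beyond loading the weights, a quick way to check that the model works is to encode a prompt and inspect the hidden states it produces. The sketch below is not from the paper's pipeline; it assumes TensorFlow and `transformers` are installed, and it assumes the standard `gpt2-large` tokenizer is appropriate since this model is GPT2-Large-initialized:

```python
# Sketch: encode a prompt and inspect the model's hidden states.
# Assumption: the standard gpt2-large tokenizer matches this checkpoint's vocabulary.
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = TFAutoModel.from_pretrained("Non-Residual-Prompting/GPT2-Large")

inputs = tokenizer("Fine-grained controllable text generation", return_tensors="tf")
outputs = model(**inputs)

# Shape is (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

For the full fine-grained controllable generation setup, use the official repository below, which loads this checkpoint automatically.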
This is the GPT2-Large-initialized prompt model used in the paper Fine-Grained Controllable Text Generation Using Non-Residual Prompting. If you want to try it out, it is loaded automatically by the code in the GitHub repository below.
Paper: https://aclanthology.org/2022.acl-long.471
Official GitHub: https://github.com/FreddeFrallan/Non-Residual-Prompting