---
license: other
metrics:
- character
library_name: transformers
pipeline_tag: conversational
tags:
- art
language:
- en
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

It's the Space Core from Portal.
|
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
|
- **Developed by:** ELMatero
- **Shared by [optional]:** ELMatero
- **Model type:** conversational
- **Language(s) (NLP):** English
- **License:** Other
- **Finetuned from model [optional]:** DialoGPT-Small
|
## Uses
|
The Hosted Inference API breaks it: I haven't figured out a way to limit its responses, so it's hard-capped at 512 in the `generation_config.json` file. Just change that back to 1024 and you're good.
If anyone knows a proper fix, please send a pull request or an edit!
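For reference, the cap is just a field in `generation_config.json`. A minimal sketch of the edit, assuming the field follows the standard transformers `GenerationConfig` layout (`max_length`):

```json
{
  "max_length": 1024
}
```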
### Direct Use

Just use it like you would usually use DialoGPT-small.
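A minimal chat loop in the usual DialoGPT style is sketched below. The repo id is a placeholder (the base model, `microsoft/DialoGPT-small`) — substitute this model's actual Hub id when using it.

```python
# Minimal DialoGPT-style chat sketch. The model id below is a placeholder;
# swap in this model's Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

chat_history_ids = None
for user_input in ["Hello!", "Where are we going?"]:
    # Encode the user's turn, appending the end-of-sequence token.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token,
                               return_tensors="pt")
    # Append to the running conversation so the model sees prior turns.
    bot_input_ids = (new_ids if chat_history_ids is None
                     else torch.cat([chat_history_ids, new_ids], dim=-1))
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1024,  # the value the card suggests restoring
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens (the bot's reply).
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print(reply)
```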