---
license: mit
datasets:
- fka/awesome-chatgpt-prompts
- gopipasala/fka-awesome-chatgpt-prompts
- O1-OPEN/OpenO1-SFT
- PJMixers-Dev/O1-OPEN_OpenO1-SFT-CustomShareGPT
- amphora/QwQ-LongCoT-130K
- amphora/QwQ-LongCoT-130K-2
- HuggingFaceFW/fineweb-2
- DAMO-NLP-SG/multimodal_textbook
- cfahlgren1/react-code-instructions
- agibot-world/AgiBotWorld-Alpha
- HuggingFaceTB/finemath
- HuggingFaceTB/finemath_contamination_report
language:
- en
metrics:
- accuracy
- character
tags:
- not-for-all-audiences
---
## Model Card: UnfilteredAI/Promt-generator

### Model Overview

**UnfilteredAI/Promt-generator** is a text generation model built specifically for creating prompts for text-to-image models. It runs on **PyTorch**, with weights stored in the **safetensors** format, making it straightforward to deploy and scale for prompt-generation tasks.
### Intended Use

This model is primarily intended for:

- **Prompt generation** for text-to-image models.
- Creative AI applications where generating high-quality, diverse image descriptions is critical.
- Supporting AI artists and developers working on generative art projects.
### How to Use

To generate prompts with this model:

1. Load the model and tokenizer in your PyTorch environment.
2. Pass in a short seed phrase describing the image you want, along with any generation parameters (e.g. `max_new_tokens`).
3. The model returns an expanded text description that can then be used with text-to-image models.
**Example Code:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# A short seed phrase the model will expand into a detailed image prompt.
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; max_new_tokens bounds the length of the generated prompt.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_prompt)
```
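For quick experiments, the same checkpoint can also be driven through the `transformers` `pipeline` helper, which wraps tokenization, generation, and decoding in a single call. This is a minimal sketch assuming the model loads as a standard causal LM (as in the example above); the sampling settings are illustrative defaults, not values specified by this model card.

```python
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint.
generator = pipeline("text-generation", model="UnfilteredAI/Promt-generator")

# Sample several candidate prompts from one seed phrase.
# num_return_sequences and the sampling settings are illustrative, not prescribed.
candidates = generator(
    "a red car",
    max_new_tokens=60,
    do_sample=True,
    num_return_sequences=3,
)

for candidate in candidates:
    print(candidate["generated_text"])
```

Sampling several candidates and picking the best one is a common workflow for prompt generators, since downstream text-to-image results can vary a lot between prompt phrasings.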