Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Promt-generator - bnb 8bits
- Model creator: https://huggingface.co/UnfilteredAI/
- Original model: https://huggingface.co/UnfilteredAI/Promt-generator/
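For intuition on the "bnb 8bits" tag: bitsandbytes stores each weight matrix as 8-bit integers plus floating-point scale factors, roughly quartering memory use versus fp32. The sketch below is a simplified per-tensor absmax round trip, not bitsandbytes' exact scheme (LLM.int8() quantizes vector-wise and keeps outlier feature columns in higher precision):

```python
import numpy as np

def quantize_absmax_int8(w):
    # Per-tensor absmax quantization: scale so the largest
    # magnitude maps to 127, then round each weight to int8.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 codes.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_absmax_int8(w)
w_hat = dequantize(q, s)

# Rounding error per weight is at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

In exchange for this small rounding error, the int8 codes take one byte per weight instead of four, which is what lets an 8-bit checkpoint like this one fit on much smaller GPUs.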
Original model description:
---
license: mit
---

## Model Card: UnfilteredAI/Promt-generator

### Model Overview
The **UnfilteredAI/Promt-generator** is a text generation model designed specifically for creating prompts for text-to-image models. It is built on **PyTorch** and ships its weights in the **safetensors** format, so it can be easily deployed and scaled for prompt-generation tasks.

### Intended Use
This model is primarily intended for:
- **Prompt generation** for text-to-image models.
- Creative AI applications where generating high-quality, diverse image descriptions is critical.
- Supporting AI artists and developers working on generative art projects.

### How to Use
To generate prompts using this model, follow these steps:

1. Load the model and tokenizer in your PyTorch environment.
2. Tokenize a short subject or seed text as the input.
3. The model generates an expanded text description, which can then be passed to a text-to-image model.
**Example Code:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# Expand a short subject into a detailed text-to-image prompt.
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_prompt)
```