GGUF
Commit 28dc4f0 (verified) by munish0838, parent 20d8f5b: Upload README.md with huggingface_hub
---
license: mit
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Promt-generator-GGUF
This is a quantized version of [UnfilteredAI/Promt-generator](https://huggingface.co/UnfilteredAI/Promt-generator) created using llama.cpp.
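Because this repository ships GGUF weights, they can also be run outside of `transformers`. A minimal sketch using the `llama-cpp-python` bindings for llama.cpp; the `model_path` filename below is a hypothetical example, so substitute the actual `.gguf` file from this repository's file list:

```python
# Sketch: load a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# "Promt-generator.Q4_K_M.gguf" is a hypothetical filename; use the actual
# .gguf file downloaded from the QuantFactory/Promt-generator-GGUF repo.
from llama_cpp import Llama

llm = Llama(model_path="Promt-generator.Q4_K_M.gguf")

# Expand a short seed phrase into a longer image prompt
out = llm("a red car", max_tokens=64)
print(out["choices"][0]["text"])
```

The same GGUF file also works with the llama.cpp CLI or any other GGUF-compatible runtime.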

# Original Model Card

## Model Card: UnfilteredAI/Promt-generator

### Model Overview
The **UnfilteredAI/Promt-generator** is a text-generation model designed specifically for creating prompts for text-to-image models. It uses **PyTorch** and **safetensors** for optimized performance and storage, so it can be easily deployed and scaled for prompt-generation tasks.

### Intended Use
This model is primarily intended for:
- **Prompt generation** for text-to-image models.
- Creative AI applications where generating high-quality, diverse image descriptions is critical.
- Supporting AI artists and developers working on generative art projects.

### How to Use
To generate prompts using this model, follow these steps:

1. Load the model in your PyTorch environment.
2. Input your desired parameters for the prompt-generation task.
3. The model returns text descriptions based on the input, which can then be used with text-to-image models.

**Example Code:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# Encode a short seed phrase and expand it into a full image prompt
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_prompt)
```