---
library_name: transformers
tags:
  - text-generation
  - ad-generation
  - marketing
  - transformers
  - pytorch
  - beam-search
---

# Model Card for Falcon-RW-1B Fine-Tuned Model

This model is a fine-tuned version of `tiiuae/falcon-rw-1b` trained on an advertising-related dataset to generate ad text based on prompts.



## Model Details

### Model Description

This model is a version of Falcon-RW-1B adapted for generating advertising content. Fine-tuning used a dataset of ad-related text formatted as structured prompt-response pairs.

- **Developed by:** Adnane Touiyate
- **Funded by:** [Adnane10](https://huggingface.co/Adnane10)
- **Shared by:** [Adnane10](https://huggingface.co/Adnane10)
- **Model type:** Falcon-RW-1B (Causal Language Model)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** `tiiuae/falcon-rw-1b`


## Uses


### Direct Use

This model generates advertising content from structured prompts. It is aimed at marketers and advertisers who need AI-generated ad copy.


### Downstream Use

The model can be further fine-tuned for specific ad categories or integrated into larger marketing automation workflows.


### Out-of-Scope Use

This model is not intended for generating content unrelated to advertising, and its performance on general text-generation tasks outside its training scope is likely to be suboptimal.


## Bias, Risks, and Limitations

Since the model has been fine-tuned on advertising content, it may inherit biases present in the dataset. Users should be cautious when generating ads to ensure they meet ethical and regulatory standards.

### Recommendations

Users should validate the generated content for appropriateness, compliance, and factual accuracy before using it in real-world applications.

## How to Get Started with the Model

Use the code below to load and use the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the base tokenizer and the fine-tuned weights (replace the placeholder path).
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-rw-1b")
model = AutoModelForCausalLM.from_pretrained("path_to_finetuned_model")

# Keep the model and its inputs on the same device; falling back to CPU
# avoids a runtime error on machines without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def generate_ad(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    # max_new_tokens bounds the generated continuation rather than the total length.
    outputs = model.generate(**inputs, max_new_tokens=100)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate_ad("Introducing our latest product: "))
```
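
Training used the `### Prompt: [User Input] ### Response: [Ad Text]` template (see Training Details), so wrapping inference prompts in the same template may improve output quality. Also, because fine-tuning used LoRA, the saved artifact may be a PEFT adapter rather than a merged checkpoint; if so, a loading sketch like the following would apply (`path_to_lora_adapter` is a hypothetical placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-rw-1b")
model = PeftModel.from_pretrained(base, "path_to_lora_adapter")  # hypothetical adapter path

# Optionally fold the adapter into the base weights for plain-transformers inference.
model = model.merge_and_unload()
```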

## Training Details

### Training Data

The model was trained on `fixed_ads_list.json`, a dataset containing structured ad-related prompts and responses.

### Training Procedure

- **Preprocessing:** Text tokenized in the format `### Prompt: [User Input] ### Response: [Ad Text]`
- **Quantization:** 4-bit NF4 quantization via `bitsandbytes` for memory efficiency.
- **Fine-tuning method:** LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning (see the configuration sketch after this list).
- **Hardware:** GPU-accelerated training.
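
The exact fine-tuning configuration is not published with this card. The sketch below shows a typical 4-bit NF4 + LoRA setup with `bitsandbytes` and `peft` that matches the description above; the prompt template comes from the preprocessing step, while the LoRA rank, alpha, dropout, and target modules are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

def format_example(user_input: str, ad_text: str) -> str:
    # Training text follows the documented prompt/response template.
    return f"### Prompt: {user_input} ### Response: {ad_text}"

# 4-bit NF4 quantization, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-rw-1b",
    quantization_config=bnb_config,
)

# LoRA adapter; rank, alpha, dropout, and target modules are assumed, not documented.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```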

#### Training Hyperparameters

- **Learning Rate:** 1e-4
- **Batch Size:** 2 (per device)
- **Gradient Accumulation:** 8 steps
- **Epochs:** 6
- **Precision:** BF16
- **Evaluation Strategy:** Epoch-based
- **Early Stopping:** Enabled, with a patience of 2 epochs without improvement (see the `TrainingArguments` sketch below)
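
The hyperparameters above map onto a `transformers` `TrainingArguments` roughly as follows; the output directory, save strategy, and dataset variables are assumptions, and `model` refers to the PEFT-wrapped model from the previous sketch:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="falcon-rw-1b-ads",    # assumption: actual path not documented
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,    # effective batch size of 16
    num_train_epochs=6,
    bf16=True,
    eval_strategy="epoch",            # `evaluation_strategy` on older transformers releases
    save_strategy="epoch",            # must match eval_strategy to restore the best model
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,                      # PEFT-wrapped model from the previous sketch
    args=args,
    train_dataset=train_dataset,      # assumption: tokenized splits of fixed_ads_list.json
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```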

## Evaluation

### Testing Data, Factors & Metrics

- **Metrics:** BLEU and ROUGE scores
- **Results:** Sample evaluation scores are not reported in this card (a measurement sketch follows).
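
For reference, a minimal sketch of how BLEU and ROUGE could be computed with the `evaluate` library, reusing `generate_ad` from the quickstart above; the test prompts and references are illustrative placeholders:

```python
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# Illustrative held-out data: one list of reference ads per generated prediction.
test_prompts = ["Introducing our latest product: "]
references = [["Introducing our latest product: the all-new SmartBrew coffee maker."]]

predictions = [generate_ad(p) for p in test_prompts]

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references))
```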


## Environmental Impact

- **Hardware Type:** NVIDIA P100 GPU
- **Hours used:** ~0.9 (about 54 minutes)
- **Cloud Provider:** Kaggle

### Model Architecture and Objective

The Falcon-RW-1B model is a causal language model optimized for text generation.

### Compute Infrastructure

#### Hardware

- GPUs (NVIDIA P100)
- Used `bitsandbytes` for memory-efficient training

#### Software

- `transformers`
- `datasets`
- `peft`
- `torch`
- `accelerate`
- `bitsandbytes`

## Model Card Authors

**Adnane Touiyate** ([@Adnane10](https://huggingface.co/Adnane10))

## Contact

For questions or collaborations, reach out via [LinkedIn](https://www.linkedin.com/in/adnanetouiyate/) or email: [adnanetouiayte11@gmail.com](mailto:adnanetouiayte11@gmail.com)