Saad4web committed · Commit 5479abf · verified · 1 Parent(s): 552ae8e

Update README.md

Files changed (1):
  1. README.md (+65 −53)
README.md CHANGED
@@ -1,58 +1,70 @@
  ---
  library_name: transformers
- license: other
- base_model: google/functiongemma-270m-it
  tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: FunctionGemma_Director_V1
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # FunctionGemma_Director_V1
-
- This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) on the game_director dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 2
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 16
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3.0
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 4.57.1
- - Pytorch 2.9.0+cu126
- - Datasets 4.0.0
- - Tokenizers 0.22.1
---
license: gemma
library_name: transformers
tags:
- gemma
- video-production
- automation
- viral-content
- function-calling
base_model: google/functiongemma-270m-it
pipeline_tag: text-generation
---

# 🎬 FunctionGemma-Director-V1

**FunctionGemma-Director-V1** is a specialized, lightweight model (270M parameters) that automates the production of viral short-form gaming videos (TikTok/Shorts/Reels).

It acts as a **"Creative Director"**: it converts a simple video title into a structured **JSON editing plan**, and it executes a "Trojan Horse" monetization strategy by seamlessly integrating CPA offers into the content.

## 🚀 Key Features
* **Size:** ~540 MB (runs smoothly on free Colab or on CPU).
* **Strategy:** Automatically places high-retention hooks and injects CPA offers at the most effective timestamps.
* **Output:** Strict JSON format compatible with Python video-automation engines (e.g. MoviePy).
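To illustrate how a strict JSON plan can drive a video-automation engine, here is a minimal dispatcher sketch. The tool names (`add_video_clip`, `add_text_overlay`) come from the schema below; the exact shape of the emitted plan and the stub handlers are assumptions — in a real pipeline the handlers would wrap MoviePy calls.

```python
import json

# A plan in the shape the tools schema suggests (the exact output format
# is an assumption; adjust to whatever the model actually emits).
plan_json = """
[
  {"name": "add_video_clip", "parameters": {"file_path": "clips/intro.mp4", "duration": 3.5}},
  {"name": "add_text_overlay", "parameters": {"text": "DON'T WATCH ALONE", "color": "red"}}
]
"""

timeline = []  # stands in for a MoviePy composition

def add_video_clip(file_path, duration):
    # In a real engine: load the clip with MoviePy and trim it to `duration`.
    timeline.append(("clip", file_path, duration))

def add_text_overlay(text, color):
    # In a real engine: composite a text clip over the video.
    timeline.append(("text", text, color))

TOOLS = {"add_video_clip": add_video_clip, "add_text_overlay": add_text_overlay}

# Dispatch each tool call in the plan to its handler.
for call in json.loads(plan_json):
    TOOLS[call["name"]](**call["parameters"])

print(timeline)
# → [('clip', 'clips/intro.mp4', 3.5), ('text', "DON'T WATCH ALONE", 'red')]
```

Because the plan is plain JSON, the same dispatch loop works unchanged whether the handlers log, render with MoviePy, or emit an FFmpeg filter graph.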

## 💻 How to Use

```python
import json

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the model
model_id = "Saad4web/FunctionGemma-Director-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,  # optimized for low memory
)

# 2. Define the tools (the model's vocabulary)
tools_schema = [
    {"name": "add_video_clip", "parameters": {"file_path": "string", "duration": "number"}},
    {"name": "add_text_overlay", "parameters": {"text": "string", "color": "string"}},
]

# 3. Create the prompt
video_title = "TOP 3|SCARIEST HORROR GAMES|*DONT WATCH ALONE*"
system_msg = f"You are a specialized video editor AI. Available tools: {json.dumps(tools_schema)}"
messages = [{"role": "user", "content": system_msg + f"\n\nCreate a viral video plan for: {video_title}"}]

# 4. Generate the plan
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
)

# 5. Decode only the newly generated tokens (the JSON plan)
plan = tokenizer.decode(outputs[0][len(input_ids[0]):], skip_special_tokens=True)
print(plan)
```
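Small models sometimes wrap the plan in prose or markdown fences, so parsing `plan` with a bare `json.loads` can fail. A defensive extraction helper like the following is one option (the `extract_json` name and behavior are a suggestion, not part of the model's API):

```python
import json
import re

def extract_json(text):
    """Pull the first JSON value out of a model response that may wrap
    the plan in prose or markdown code fences."""
    text = re.sub(r"```(?:json)?", "", text)  # drop optional code fences
    starts = [i for i in (text.find("["), text.find("{")) if i != -1]
    if not starts:
        raise ValueError("no JSON found in model output")
    # raw_decode parses one JSON value and ignores any trailing text.
    obj, _ = json.JSONDecoder().raw_decode(text[min(starts):])
    return obj

sample = 'Here is your plan:\n```json\n[{"name": "add_video_clip", "parameters": {"duration": 3}}]\n```'
plan = extract_json(sample)
print(plan[0]["name"])  # → add_video_clip
```

`json.JSONDecoder.raw_decode` is what makes trailing commentary after the JSON harmless, which a plain `json.loads` would reject.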

## 🛠️ Training Details

- **Architecture:** fine-tuned from google/functiongemma-270m-it.
- **Dataset:** synthetic dataset generated via knowledge distillation (teachers: GPT-4o / Gemini 2.0).
- **Method:** full fine-tuning using LLaMA Factory.
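A typical step in this kind of distillation pipeline is filtering the teacher's outputs so that only well-formed tool-call plans reach the training set. The card does not describe the actual filter, so the sketch below is an assumption: it validates each sample against the tools schema from the usage example above.

```python
import json

# Parameter types implied by the tools schema ("string" / "number").
TOOLS = {
    "add_video_clip": {"file_path": str, "duration": (int, float)},
    "add_text_overlay": {"text": str, "color": str},
}

def is_valid_plan(raw):
    """Accept a teacher sample only if it is valid JSON and every call
    uses a known tool with exactly the right parameters and types.
    (Hypothetical filter — the actual training pipeline may differ.)"""
    try:
        plan = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(plan, list):
        return False
    for call in plan:
        if not isinstance(call, dict):
            return False
        schema = TOOLS.get(call.get("name"))
        if schema is None:
            return False
        params = call.get("parameters", {})
        if set(params) != set(schema):
            return False
        if any(not isinstance(params[k], t) for k, t in schema.items()):
            return False
    return True

good = '[{"name": "add_video_clip", "parameters": {"file_path": "a.mp4", "duration": 3}}]'
bad = '[{"name": "delete_everything", "parameters": {}}]'
print(is_valid_plan(good), is_valid_plan(bad))  # → True False
```

Rejecting malformed or hallucinated tool calls before training is what lets a 270M-parameter student reliably emit strict JSON at inference time.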

Created by Saad4web