# ASO-Strategist

A fine-tuned Llama 3.2 3B model optimized for App Store Optimization (ASO) tasks. Given an app description, it generates:
- **Target Keywords** - High-value search terms (5-8 keywords)
- **Optimized Subtitle** - A compelling 30-character App Store subtitle
- **Search Visibility Score** - A 1-100 rating based on keyword potential
## Model Details
| Attribute | Value |
|---|---|
| Base Model | Llama-3.2-3B-Instruct-4bit |
| Quantization | 4-bit |
| Fine-tuning | LoRA (16 layers, rank 8, alpha 16) |
| Framework | MLX |
| Training Data | 1,000 synthetic ASO samples |
| Final Val Loss | 0.320 |
## Output Format

```json
{
  "target_keywords": [
    "meditation app",
    "sleep stories",
    "stress relief",
    "mindfulness"
  ],
  "optimized_subtitle": "Calm your mind, sleep better",
  "search_visibility_score": 85,
  "reasoning": "Keywords target high-volume wellness searches..."
}
```
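Because generation is sampled, the model's output is not guaranteed to match this schema exactly. A small validation helper (hypothetical, not shipped with the model) can check the parsed result against the documented field names and ranges before using it downstream:

```python
# Sanity-check a parsed model response against the documented ASO schema.
# Field names and ranges are taken from the Output Format above.
def validate_aso_output(result: dict) -> list[str]:
    """Return a list of problems; an empty list means the output looks valid."""
    problems = []
    keywords = result.get("target_keywords")
    if not isinstance(keywords, list) or not 5 <= len(keywords) <= 8:
        problems.append("target_keywords should be a list of 5-8 strings")
    subtitle = result.get("optimized_subtitle", "")
    if not isinstance(subtitle, str) or len(subtitle) > 30:
        problems.append("optimized_subtitle should be at most 30 characters")
    score = result.get("search_visibility_score")
    if not isinstance(score, int) or not 1 <= score <= 100:
        problems.append("search_visibility_score should be an int in 1-100")
    return problems

sample = {
    "target_keywords": ["meditation app", "sleep stories", "stress relief",
                        "mindfulness", "relaxation"],
    "optimized_subtitle": "Calm your mind, sleep better",
    "search_visibility_score": 85,
    "reasoning": "Keywords target high-volume wellness searches...",
}
print(validate_aso_output(sample))  # → []
```

If validation fails, a simple recovery strategy is to re-run generation (the sampler's temperature of 0.3 keeps retries cheap and mostly deterministic).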
## Quick Start

### Installation

```shell
pip install mlx-lm
```
### Inference

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler
import json

# Load the model and build a low-temperature sampler for consistent JSON output
model, tokenizer = load("fahidnasir/ASO-Strategist")
sampler = make_sampler(temp=0.3, top_p=0.9)

# System prompt
SYSTEM = """You are an expert App Store Optimization (ASO) specialist. Given an app description, provide:
1. Target Keywords: 5-8 high-value keywords/phrases for ASO
2. Optimized Subtitle: A compelling 30-character max subtitle
3. Search Visibility Score: 1-100 based on keyword potential
Respond in JSON format."""

# Build the prompt in the Llama 3.2 chat format
# (note the blank line after each <|end_header_id|>, as in the official template)
prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>

Analyze this app: A fitness tracking app with AI-powered workout recommendations and progress analytics<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

# Generate
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler)

# Parse the JSON object out of the response
result = json.loads(response[response.find('{'):response.rfind('}')+1])
print(json.dumps(result, indent=2))
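The one-line slice-and-parse above works when the response contains a single well-formed JSON object, but it raises an unhelpful error when the model emits malformed or missing JSON. A small helper with explicit error handling (a hypothetical utility, not part of mlx-lm) is safer in practice:

```python
import json


def extract_json(text: str) -> dict:
    """Pull the first-to-last brace span out of a model response and parse it.

    Raises ValueError if no JSON object can be found or parsed.
    """
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end < start:
        raise ValueError("no JSON object found in response")
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed JSON in response: {exc}") from exc


# The model sometimes wraps the JSON in prose; the helper ignores it.
raw = 'Here is the analysis:\n{"optimized_subtitle": "Sleep better", "search_visibility_score": 85}'
print(extract_json(raw)["search_visibility_score"])  # → 85
```

Catching `ValueError` at the call site lets you retry the generation instead of crashing on a bad sample.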
## Example Output

**Input:**

"A meditation app with guided sleep stories, breathing exercises, and daily mindfulness reminders for stress relief"

**Output:**

```json
{
  "target_keywords": [
    "meditation app",
    "guided sleep stories",
    "stress relief",
    "mindfulness reminders",
    "breathing exercises",
    "relaxation",
    "sleep stories",
    "mindfulness meditation"
  ],
  "optimized_subtitle": "Sleep better, stress less",
  "search_visibility_score": 85,
  "reasoning": "High-volume keywords in wellness/meditation category with good long-tail coverage"
}
```
## Training Details
- Iterations: 500
- Batch Size: 4
- Learning Rate: 1e-4
- Max Sequence Length: 2048
- Hardware: Apple Silicon 64GB unified memory
### Loss Curve
| Iteration | Train Loss | Val Loss |
|---|---|---|
| 50 | 0.396 | 0.376 |
| 100 | 0.341 | 0.355 |
| 200 | 0.317 | 0.334 |
| 300 | 0.290 | 0.325 |
| 500 | 0.260 | 0.320 |
## Hardware Requirements
- Minimum: Apple Silicon Mac with 8GB unified memory
- Recommended: M1 Pro/Max/Ultra or M2/M3/M4 with 16GB+
## Limitations
- Trained on synthetic data; real-world ASO may vary
- English language only
- Subtitle recommendations may occasionally exceed 30 characters
- Keywords reflect general patterns, not real-time App Store data
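Since subtitle recommendations may occasionally exceed the App Store's 30-character limit, a simple guard can clamp them on a word boundary before submission. This is a minimal sketch (the helper name and trimming policy are illustrative, not part of the model):

```python
def clamp_subtitle(subtitle: str, limit: int = 30) -> str:
    """Trim a subtitle to the character limit, cutting at the last full word."""
    if len(subtitle) <= limit:
        return subtitle
    trimmed = subtitle[:limit].rsplit(" ", 1)[0]
    # Drop any trailing punctuation left behind by the cut
    return trimmed.rstrip(",;: ")

print(clamp_subtitle("Calm your mind, sleep better tonight"))
# → Calm your mind, sleep better
```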
## Use Cases

- **App Developers**: Quick keyword research for new app listings
- **Marketing Teams**: Baseline ASO analysis and subtitle ideation
- **ASO Professionals**: Automated first-pass keyword suggestions
- **Learning**: Understanding ASO principles and keyword selection
## License
This model inherits the Llama 3.2 Community License.
Built with Llama.
## Citation

```bibtex
@misc{aso-strategist-2025,
  author = {Fahid Nasir},
  title = {ASO-Strategist: Fine-tuned Model for App Store Optimization},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/fahidnasir/ASO-Strategist}
}
```
## Acknowledgments
- Meta AI for Llama 3.2
- MLX Team for the Apple Silicon ML framework
- Hugging Face for model hosting