Add datasets dataset
- .gitattributes +9 -0
- datasets/README.md +134 -0
- datasets/agent_domain_knowledge.jsonl +0 -0
- datasets/analyze_all_datasets.py +217 -0
- datasets/automation_domain_knowledge.jsonl +3 -0
- datasets/code_context_dataset.jsonl +0 -0
- datasets/convert_parquet.py +53 -0
- datasets/dataset_001.json +0 -0
- datasets/dataset_002.json +0 -0
- datasets/dataset_003.json +3 -0
- datasets/deduplicate_datasets.py +224 -0
- datasets/n8n_github_workflows.jsonl +3 -0
- datasets/n8n_master.jsonl +3 -0
- datasets/n8n_toolkit_sharegpt.jsonl +3 -0
- datasets/reddit_solutions.jsonl +44 -0
- datasets/training/01_conversational_sft.jsonl +3 -0
- datasets/training/02_reasoning_with_thinking.jsonl +3 -0
- datasets/training/03_latest_features.jsonl +3 -0
- datasets/training/04_advanced_workflows.json +3 -0
- datasets/training/README.md +189 -0
- datasets/validate_datasets.py +155 -0
- datasets/youtube_metadata.jsonl +18 -0
.gitattributes CHANGED

@@ -201,3 +201,12 @@ docs-dataset/screenshots/uploaded_image_1766792725147.png filter=lfs diff=lfs merge=lfs -text
 docs-dataset/screenshots/wf_sample.png filter=lfs diff=lfs merge=lfs -text
 docs-dataset/tech_stack_full.jsonl filter=lfs diff=lfs merge=lfs -text
 docs-dataset/vendor_resources.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/automation_domain_knowledge.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/dataset_003.json filter=lfs diff=lfs merge=lfs -text
+datasets/n8n_github_workflows.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/n8n_master.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/n8n_toolkit_sharegpt.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/training/01_conversational_sft.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/training/02_reasoning_with_thinking.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/training/03_latest_features.jsonl filter=lfs diff=lfs merge=lfs -text
+datasets/training/04_advanced_workflows.json filter=lfs diff=lfs merge=lfs -text
datasets/README.md ADDED
# n8n Workflow Training Datasets

This directory contains training datasets for fine-tuning Large Language Models (LLMs) to generate n8n workflows from natural language descriptions.

## Dataset Format

Each dataset file is a JSON array containing training examples in a conversational format:

```json
[
  {
    "messages": [
      {
        "role": "user",
        "content": "When a new email arrives in Gmail, save the attachment to Google Drive."
      },
      {
        "role": "assistant",
        "content": "{\"name\": \"Email to Drive\", \"nodes\": [...], \"connections\": {...}, \"active\": false}"
      }
    ]
  }
]
```
### Structure

- **`role: "user"`** - Natural language description of the workflow to create
- **`role: "assistant"`** - JSON representation of the complete n8n workflow

The assistant's response contains a valid n8n workflow definition with:

- `name`: Workflow name
- `nodes`: Array of node definitions (triggers, actions, transformations)
- `connections`: Object defining how nodes are connected
- `active`: Boolean indicating if the workflow is active (usually `false` for templates)
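As an illustration of that shape, here is a minimal two-node workflow object sketched in Python. The node `type` strings and positions follow n8n's general schema but are hypothetical, not drawn from the datasets:

```python
import json

# Hypothetical two-node workflow covering the four fields above;
# node types and positions are illustrative, not taken from the datasets.
workflow = {
    "name": "Email to Drive",
    "nodes": [
        {"name": "Gmail Trigger", "type": "n8n-nodes-base.gmailTrigger",
         "position": [0, 0], "parameters": {}},
        {"name": "Google Drive", "type": "n8n-nodes-base.googleDrive",
         "position": [250, 0], "parameters": {}},
    ],
    # connections map each source node name to its downstream targets
    "connections": {
        "Gmail Trigger": {
            "main": [[{"node": "Google Drive", "type": "main", "index": 0}]]
        }
    },
    "active": False,
}

# Serialized with json.dumps, this object is what an assistant
# "content" string in a training example holds.
serialized = json.dumps(workflow)
```

The abbreviated `content` string in the format example above stands in for exactly this kind of serialized object.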
## Dataset Files

### dataset_001.json
- **Size**: 2.5 MB
- **Examples**: 3,061 workflow examples
- **Status**: ✅ Valid JSON
- **Focus**: Common workflow patterns (Gmail, Slack, Google Sheets, Trello, Airtable, Notion, etc.)

### dataset_002.json
- **Size**: 4.9 MB
- **Status**: ⚠️ JSON parsing errors detected
- **Note**: May require cleaning before use

### dataset_003.json
- **Size**: 14.0 MB
- **Status**: ⚠️ JSON parsing errors detected
- **Note**: May require cleaning before use
## Common Workflow Patterns

Based on dataset_001.json analysis, the most common patterns include:

1. **Email Automation**
   - Gmail → Google Drive (save attachments)
   - Gmail → Slack (notifications)
   - Gmail → Airtable (create records)

2. **Spreadsheet Integration**
   - Google Sheets → Slack (new row notifications)
   - Google Sheets → Gmail (alerts)
   - Airtable → Google Sheets (sync data)

3. **Project Management**
   - Trello → Slack (card updates)
   - Trello → Google Calendar (deadline tracking)
   - GitHub → Trello (issue tracking)

4. **Notification Workflows**
   - Slack reactions → Airtable (logging)
   - Calendar events → Email reminders
   - Notion updates → Slack posts
## Usage for LLM Training

### Fine-tuning Format

These datasets are compatible with OpenAI's fine-tuning format and similar training pipelines. Each example teaches the model to:

1. Parse natural language workflow requests
2. Identify required n8n nodes
3. Configure node parameters
4. Establish proper connections between nodes

### Recommended Preprocessing

Before using these datasets:

1. **Validate JSON**: Verify all files parse correctly
2. **Deduplicate**: Remove duplicate examples (some duplicates exist)
3. **Filter**: Optionally filter by specific integrations or complexity
4. **Balance**: Ensure diverse node types are represented
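A minimal sketch of the first two preprocessing steps, assuming the conversational format shown above (the helper names are illustrative):

```python
import json
import hashlib

def validate_dataset(path):
    """Step 1: a file is only usable if it parses as a JSON array."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)  # raises json.JSONDecodeError for malformed files
    if not isinstance(data, list):
        raise ValueError(f"{path} is not a JSON array")
    return data

def dedupe(examples):
    """Step 2: drop exact duplicates, keeping the first occurrence."""
    seen, unique = set(), []
    for ex in examples:
        # Hash a normalized serialization so key order doesn't matter.
        key = hashlib.md5(
            json.dumps(ex, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique

examples = [
    {"messages": [{"role": "user", "content": "Save Gmail attachments to Drive"}]},
    {"messages": [{"role": "user", "content": "Save Gmail attachments to Drive"}]},
]
print(len(dedupe(examples)))  # → 1
```

The `deduplicate_datasets.py` script in this directory applies the same hashing idea across all dataset files at once.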
### Example Use Cases

- Fine-tune GPT models to generate n8n workflows
- Train models to suggest workflow improvements
- Create workflow completion assistants
- Build n8n-specific code generation tools

## Integration with n8n-mcp

This repository complements the [n8n-mcp](https://github.com/yourusername/n8n-mcp) server by providing:

- **Static training data** for model fine-tuning
- **Example workflows** for reference
- **Pattern library** for common automations

While n8n-mcp provides real-time workflow execution and API access, these datasets enable LLMs to learn n8n's workflow generation patterns.

## Contributing

When adding new examples:

1. Follow the existing JSON structure
2. Ensure the workflow JSON is valid n8n format
3. Use descriptive, natural language in user messages
4. Test workflows before adding them to the datasets
5. Avoid duplicates

## Known Issues

- Duplicate entries exist in dataset_001.json (minimal impact on training)
- dataset_002.json and dataset_003.json have JSON formatting errors
- Some placeholder values (e.g., `{{SHEET_ID}}`, `{{API_KEY}}`) are included; these are intentional for template-style workflows

## Tools

See `analyze_all_datasets.py`, `validate_datasets.py`, and `deduplicate_datasets.py` in this directory for dataset analysis, validation, and deduplication tools.
datasets/agent_domain_knowledge.jsonl ADDED

The diff for this file is too large to render. See raw diff.
datasets/analyze_all_datasets.py ADDED
#!/usr/bin/env python3
"""
Comprehensive Dataset Analysis Tool

Analyzes all n8n datasets including JSONL and Parquet formats.
Provides detailed statistics, validation, and duplicate detection.
"""

import json
import os
from pathlib import Path
from typing import Dict, List, Any
from collections import defaultdict

try:
    import pandas as pd
    PANDAS_AVAILABLE = True
except ImportError:
    PANDAS_AVAILABLE = False
    print("⚠️  pandas not available - Parquet analysis will be skipped")
    print("   Install with: pip install pandas pyarrow")


def analyze_jsonl(filepath: Path) -> Dict[str, Any]:
    """Analyze JSONL format dataset."""
    print(f"\n📊 Analyzing: {filepath.name}")
    print(f"   Size: {filepath.stat().st_size / (1024*1024):.2f} MB")

    examples = []
    errors = []

    with open(filepath, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            try:
                examples.append(json.loads(line))
            except json.JSONDecodeError as e:
                errors.append(f"Line {line_num}: {e}")
                if len(errors) < 5:  # Only show first 5 errors
                    print(f"   ⚠️  Error on line {line_num}: {e}")

    # Analyze structure
    fields = set()
    if examples:
        for ex in examples[:100]:  # Sample first 100
            fields.update(ex.keys())

    print(f"   ✅ Valid: {len(examples):,} examples")
    print(f"   📝 Fields: {', '.join(sorted(fields))}")

    return {
        'filename': filepath.name,
        'format': 'JSONL',
        'size_mb': filepath.stat().st_size / (1024*1024),
        'example_count': len(examples),
        'fields': sorted(fields),
        'errors': errors,
        'sample': examples[0] if examples else None
    }


def analyze_json_array(filepath: Path) -> Dict[str, Any]:
    """Analyze JSON array format dataset."""
    print(f"\n📊 Analyzing: {filepath.name}")
    print(f"   Size: {filepath.stat().st_size / (1024*1024):.2f} MB")

    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            data = json.load(f)

        if not isinstance(data, list):
            print(f"   ❌ Not a JSON array!")
            return None

        fields = set()
        if data:
            for ex in data[:100]:
                if isinstance(ex, dict):
                    fields.update(ex.keys())

        print(f"   ✅ Valid: {len(data):,} examples")
        print(f"   📝 Fields: {', '.join(sorted(fields))}")

        return {
            'filename': filepath.name,
            'format': 'JSON Array',
            'size_mb': filepath.stat().st_size / (1024*1024),
            'example_count': len(data),
            'fields': sorted(fields),
            'errors': [],
            'sample': data[0] if data else None
        }
    except Exception as e:
        print(f"   ❌ Error: {e}")
        return None


def analyze_parquet(filepath: Path) -> Dict[str, Any]:
    """Analyze Parquet format dataset."""
    if not PANDAS_AVAILABLE:
        print(f"\n⚠️  Skipping {filepath.name} - pandas not installed")
        return None

    print(f"\n📊 Analyzing: {filepath.name}")
    print(f"   Size: {filepath.stat().st_size / (1024*1024):.2f} MB")

    try:
        df = pd.read_parquet(filepath)

        print(f"   ✅ Valid: {len(df):,} examples")
        print(f"   📝 Columns: {', '.join(df.columns.tolist())}")

        return {
            'filename': filepath.name,
            'format': 'Parquet',
            'size_mb': filepath.stat().st_size / (1024*1024),
            'example_count': len(df),
            'fields': df.columns.tolist(),
            'errors': [],
            'sample': df.iloc[0].to_dict() if len(df) > 0 else None
        }
    except Exception as e:
        print(f"   ❌ Error: {e}")
        return None


def main():
    """Main analysis function."""
    print("=" * 70)
    print("N8N DATASET COLLECTION ANALYSIS")
    print("=" * 70)

    datasets_dir = Path(__file__).parent
    results = []

    # Find all dataset files
    jsonl_files = sorted(datasets_dir.glob('*.jsonl'))
    json_files = sorted(datasets_dir.glob('dataset_*.json'))
    parquet_files = sorted(datasets_dir.glob('*.parquet'))

    print(f"\n📁 Found:")
    print(f"   - {len(jsonl_files)} JSONL files")
    print(f"   - {len(json_files)} JSON files")
    print(f"   - {len(parquet_files)} Parquet files")

    # Analyze JSONL files
    for filepath in jsonl_files:
        result = analyze_jsonl(filepath)
        if result:
            results.append(result)

    # Analyze JSON array files
    for filepath in json_files:
        result = analyze_json_array(filepath)
        if result:
            results.append(result)

    # Analyze Parquet files
    for filepath in parquet_files:
        result = analyze_parquet(filepath)
        if result:
            results.append(result)

    # Summary
    print("\n" + "=" * 70)
    print("COLLECTION SUMMARY")
    print("=" * 70)

    total_examples = sum(r['example_count'] for r in results)
    total_size = sum(r['size_mb'] for r in results)

    print(f"\n📦 Total Datasets: {len(results)}")
    print(f"📝 Total Examples: {total_examples:,}")
    print(f"💾 Total Size: {total_size:.2f} MB ({total_size/1024:.2f} GB)")

    # Detailed table
    print("\n" + "-" * 70)
    print(f"{'Dataset':<45} {'Format':<12} {'Examples':>12}")
    print("-" * 70)

    for r in sorted(results, key=lambda x: x['example_count'], reverse=True):
        print(f"{r['filename']:<45} {r['format']:<12} {r['example_count']:>12,}")

    # Field analysis
    print("\n" + "=" * 70)
    print("FIELD ANALYSIS")
    print("=" * 70)

    field_counts = defaultdict(int)
    for r in results:
        for field in r['fields']:
            field_counts[field] += 1

    print(f"\nCommon fields across datasets:")
    for field, count in sorted(field_counts.items(), key=lambda x: x[1], reverse=True):
        print(f"  {field:<30} (in {count}/{len(results)} datasets)")

    # Sample structure
    print("\n" + "=" * 70)
    print("SAMPLE STRUCTURE")
    print("=" * 70)

    for r in results[:2]:  # Show first 2 samples
        if r['sample']:
            print(f"\n{r['filename']}:")
            print(f"  Fields: {list(r['sample'].keys())}")
            for key in list(r['sample'].keys())[:3]:  # Show first 3 fields
                value = str(r['sample'][key])[:100]
                print(f"  {key}: {value}...")

    print("\n" + "=" * 70)


if __name__ == '__main__':
    main()
datasets/automation_domain_knowledge.jsonl ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:c9a974022a62bbcc33d2f132b8653d2da211e86381bfecb938dc7e3e5f2665d8
size 12431983
datasets/code_context_dataset.jsonl ADDED

The diff for this file is too large to render. See raw diff.
datasets/convert_parquet.py ADDED
#!/usr/bin/env python3
"""
Step 1: Convert Parquet to JSONL

Converts the n8n_workflows_templates_dataset.parquet file
to JSONL format for consistency with other datasets.
"""

import pandas as pd
import json
from pathlib import Path

def convert_parquet_to_jsonl(parquet_file, output_file):
    """Convert Parquet dataset to JSONL format."""
    print(f"Loading {parquet_file}...")
    df = pd.read_parquet(parquet_file)

    print(f"Loaded {len(df):,} examples")
    print(f"Columns: {list(df.columns)}")

    print(f"\nConverting to JSONL...")
    with open(output_file, 'w', encoding='utf-8') as f:
        for idx, row in df.iterrows():
            if idx % 5000 == 0 and idx > 0:
                print(f"  Converted {idx:,} / {len(df):,} examples...")

            # Convert row to dictionary and write as JSON line
            f.write(json.dumps(row.to_dict()) + '\n')

    print(f"\n✅ Conversion complete!")
    print(f"   Input: {parquet_file} ({len(df):,} examples)")
    print(f"   Output: {output_file}")

    # Verify file size
    output_size = Path(output_file).stat().st_size / (1024 * 1024)
    print(f"   Size: {output_size:.2f} MB")

    # Validate by reading first line
    with open(output_file, 'r', encoding='utf-8') as f:
        first_line = f.readline()
        sample = json.loads(first_line)
        print(f"\n📝 Sample structure:")
        print(f"   Fields: {list(sample.keys())}")

    return len(df)


if __name__ == '__main__':
    parquet_file = 'n8n_workflows_templates_dataset.parquet'
    output_file = 'n8n_workflows_templates.jsonl'

    count = convert_parquet_to_jsonl(parquet_file, output_file)
    print(f"\n🎉 Successfully converted {count:,} examples to JSONL format!")
datasets/dataset_001.json ADDED

The diff for this file is too large to render. See raw diff.

datasets/dataset_002.json ADDED

The diff for this file is too large to render. See raw diff.
datasets/dataset_003.json ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:849d7429a8125fe999665f60c48362e51ea1f3f229a71c071de408e2e70f86b8
size 14038102
datasets/deduplicate_datasets.py ADDED
#!/usr/bin/env python3
"""
Step 2: Comprehensive Deduplication

Deduplicates workflows across all datasets by hashing the JSON content.
Creates a unified, deduplicated master dataset.
"""

import json
import hashlib
from pathlib import Path
from collections import defaultdict
from typing import List, Dict, Any, Tuple


def load_dataset(filepath: Path) -> List[Dict[str, Any]]:
    """Load dataset with automatic format detection."""
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            first_char = f.read(1)
            f.seek(0)

            if first_char == '[':
                # JSON array format
                return json.load(f)
            else:
                # JSONL format
                examples = []
                for line_num, line in enumerate(f, 1):
                    line = line.strip()
                    if line:
                        try:
                            examples.append(json.loads(line))
                        except json.JSONDecodeError as e:
                            if line_num <= 5:  # Only print first 5 errors
                                print(f"   ⚠️  Skipping line {line_num}: {e}")
                return examples
    except Exception as e:
        print(f"   ❌ Error loading {filepath.name}: {e}")
        return []


def extract_workflow_json(example: Dict[str, Any]) -> str:
    """Extract workflow JSON from example (handles different field names)."""
    # Try different field names
    for field in ['json', 'response', 'workflow', 'n8n_json']:
        if field in example:
            value = example[field]
            # If it's a string, return it
            if isinstance(value, str):
                return value
            # If it's a dict, convert to JSON string
            if isinstance(value, dict):
                return json.dumps(value, sort_keys=True)

    # Fallback: use the entire example
    return json.dumps(example, sort_keys=True)


def hash_workflow(workflow_json: str) -> str:
    """Create hash of workflow JSON for deduplication."""
    # Normalize whitespace and sort keys for consistent hashing
    try:
        # Parse and re-serialize to normalize formatting
        workflow_obj = json.loads(workflow_json)
        normalized = json.dumps(workflow_obj, sort_keys=True, separators=(',', ':'))
    except json.JSONDecodeError:
        # If parsing fails, use original string
        normalized = workflow_json.strip()

    return hashlib.md5(normalized.encode('utf-8')).hexdigest()


def deduplicate_datasets(datasets_dir: Path) -> Tuple[List[Dict], Dict]:
    """Deduplicate all datasets."""
    print("=" * 70)
    print("DEDUPLICATION ANALYSIS")
    print("=" * 70)

    # Find all dataset files
    jsonl_files = sorted(datasets_dir.glob('*.jsonl'))
    json_files = sorted(datasets_dir.glob('dataset_*.json'))

    all_datasets = jsonl_files + json_files

    print(f"\n📁 Processing {len(all_datasets)} datasets:\n")

    # Track hashes and their sources
    hash_to_example = {}
    hash_to_sources = defaultdict(list)
    duplicates_found = defaultdict(list)

    total_examples = 0

    # Process each dataset
    for filepath in all_datasets:
        print(f"📊 Loading {filepath.name}...")

        # Load with auto-detection
        examples = load_dataset(filepath)

        print(f"   {len(examples):,} examples")
        total_examples += len(examples)

        # Hash each example
        for idx, example in enumerate(examples):
            workflow_json = extract_workflow_json(example)
            workflow_hash = hash_workflow(workflow_json)

            # Track source
            hash_to_sources[workflow_hash].append(filepath.name)

            # If this is the first time we've seen this hash, keep it
            if workflow_hash not in hash_to_example:
                hash_to_example[workflow_hash] = example
            else:
                # This is a duplicate
                duplicates_found[filepath.name].append({
                    'index': idx,
                    'hash': workflow_hash,
                    'first_seen_in': hash_to_sources[workflow_hash][0]
                })

    # Generate report
    print("\n" + "=" * 70)
    print("DEDUPLICATION RESULTS")
    print("=" * 70)

    unique_count = len(hash_to_example)
    duplicate_count = total_examples - unique_count
    duplicate_pct = (duplicate_count / total_examples * 100) if total_examples > 0 else 0

    print(f"\n📝 Total Examples Processed: {total_examples:,}")
    print(f"✨ Unique Workflows: {unique_count:,}")
    print(f"🔄 Duplicates Found: {duplicate_count:,} ({duplicate_pct:.1f}%)")

    # Detailed duplicate report by dataset
    print("\n" + "-" * 70)
    print("DUPLICATES BY DATASET")
    print("-" * 70)

    for dataset_name in sorted(duplicates_found.keys()):
        dupes = duplicates_found[dataset_name]
        print(f"\n{dataset_name}:")
        print(f"   {len(dupes):,} duplicate examples")

        # Show which datasets they duplicate
        sources = defaultdict(int)
        for dupe in dupes:
            sources[dupe['first_seen_in']] += 1

        for source, count in sorted(sources.items(), key=lambda x: x[1], reverse=True):
            if source != dataset_name:
                print(f"   - {count:,} duplicates from {source}")

    # Cross-dataset duplicate analysis
    print("\n" + "-" * 70)
    print("CROSS-DATASET DUPLICATE PATTERNS")
    print("-" * 70)

    cross_dataset_hashes = {h: srcs for h, srcs in hash_to_sources.items() if len(set(srcs)) > 1}
    print(f"\n{len(cross_dataset_hashes):,} workflows appear in multiple datasets")

    # Count common duplicates between specific datasets
    dataset_pairs = defaultdict(int)
    for sources in cross_dataset_hashes.values():
        unique_sources = sorted(set(sources))
        if len(unique_sources) >= 2:
            for i, src1 in enumerate(unique_sources):
                for src2 in unique_sources[i+1:]:
                    pair = tuple(sorted([src1, src2]))
                    dataset_pairs[pair] += 1

    print("\nTop dataset overlaps:")
    for (ds1, ds2), count in sorted(dataset_pairs.items(), key=lambda x: x[1], reverse=True)[:10]:
        print(f"   {ds1} ↔ {ds2}: {count:,} shared workflows")

    # Statistics
    stats = {
        'total_examples': total_examples,
        'unique_workflows': unique_count,
        'duplicates': duplicate_count,
        'duplicate_percentage': duplicate_pct,
        'duplicates_by_dataset': {k: len(v) for k, v in duplicates_found.items()},
        'cross_dataset_duplicates': len(cross_dataset_hashes)
    }

    return list(hash_to_example.values()), stats


def save_deduplicated_dataset(examples: List[Dict], output_file: Path):
    """Save deduplicated dataset to JSONL."""
    print(f"\n💾 Saving deduplicated dataset to {output_file.name}...")

    with open(output_file, 'w', encoding='utf-8') as f:
        for example in examples:
            f.write(json.dumps(example) + '\n')

    file_size = output_file.stat().st_size / (1024 * 1024)
    print(f"✅ Saved {len(examples):,} unique workflows ({file_size:.2f} MB)")


if __name__ == '__main__':
    datasets_dir = Path('.')
    output_file = Path('n8n_master_deduplicated.jsonl')

    # Run deduplication
    unique_examples, stats = deduplicate_datasets(datasets_dir)

    # Save deduplicated dataset
    save_deduplicated_dataset(unique_examples, output_file)

    # Summary
    print("\n" + "=" * 70)
    print("SUMMARY")
    print("=" * 70)
    print(f"\n✨ Deduplication complete!")
    print(f"   Original: {stats['total_examples']:,} examples")
    print(f"   Deduplicated: {stats['unique_workflows']:,} examples")
    print(f"   Removed: {stats['duplicates']:,} duplicates ({stats['duplicate_percentage']:.1f}%)")
    print(f"\n📁 Output: {output_file}")
    print("=" * 70)
datasets/n8n_github_workflows.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f69a2d54b82868a655fdf09d882f1f9604d992e730f3e6f4d0e8f4244300e958
size 32672397
datasets/n8n_master.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af720af59c4b53ad14b737b0ad0a6e52ec548cfa44ac48053a89afe0ba26ef27
size 296484864
datasets/n8n_toolkit_sharegpt.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3a850fbb494fc158d938dd747903d24da18a15c9cf8f70e87d98e68ba8f571b
size 90104588
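The three datasets above are committed as Git LFS pointer files (the `version`/`oid`/`size` triples shown), not raw JSONL. A minimal sketch for reading a pointer's metadata without checking out the blob:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields
    (each line is 'key value', per the LFS pointer spec)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:af720af59c4b53ad14b737b0ad0a6e52ec548cfa44ac48053a89afe0ba26ef27
size 296484864
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / (1024 * 1024))  # → 282.75 (MB before LFS checkout)
```

Cloning without `git lfs install` leaves these three-line pointers in place of the data, which is worth knowing before pointing a training script at the repo.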
datasets/reddit_solutions.jsonl ADDED
@@ -0,0 +1,44 @@
{"instruction":"[Context: r/n8n] I made an n8n Cheat Sheet!\n\n","output":"Top Answer:\nI am not really sure who this is for, but ... some feedback for usability and aesthetics:\n\n - Avoid arbitrarily changing text justification (right-aligned under 'Setting' is odd and hard to read, centred text under 'Description' is hard to read, etc.). Left-align everything by default - things work best centred when they are similar widths, otherwise your eyes have to scan to find the start of each line.\n\n - Comments in the 'Docker' section could be aligned horizontally.\n\n - 'Nodes Common Setting' section has many lines with only one or two words, and inconsistent vertical alignment per row.\n\n - Whitespace consistency - 'HTTP Request' is good, the rest are too tight.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1jy1wdc/i_made_an_n8n_cheat_sheet/"}
{"instruction":"[Context: r/n8n] I built an AI system that scrapes stories off the internet and generates a daily newsletter (now at 10,000 subscribers)\n\nSo I built an [AI newsletter](https://recap.aitools.inc/) that isn’t written by me — it’s completely written by an n8n workflow that I built. Each day, the system scrapes close to 100 AI news stories off the internet → saves the stories in a data lake as markdown file → and then runs those through this n8n workflow to generate a final newsletter that gets sent out to the subscribers.\n\nI’ve been iterating on the main prompts used in this workflow over the past 5 months and have got it to the point where it is handling 95% of the process for writing each edition of the newsletter. It currently automatically handles:\n\n- Scraping news stories sourced all over the internet from Twitter / Reddit / HackerNews / AI Blogs / Google News Feeds\n- Loading all of those stories up and having an \"AI Editor\" pick the top 3-4 we want to feature in the newsletter\n- Taking the source material and actually writing each core newsletter segment\n- Writing all of the supplementary sections like the intro + a \"Shortlist\" section that includes other AI story links\n- Formatting all of that output as markdown so it is easy to copy into Beehiiv and schedule with a few clicks\n\nWhat started as an interesting pet project AI newsletter now has several thousand subscribers and has an open rate above 20%\n\n## Data Ingestion Workflow Breakdown\n\nThis is the foundation of the newsletter system as I wanted complete control of where the stories are getting sourced from and need the content of each story in an easy to consume format like markdown so I can easily prompt against it. I wrote a bit more about this automation on this [reddit post](https://www.reddit.com/r/n8n/comments/1kzaysv/i_built_a_workflow_to_scrape_virtually_any_news/) but will cover the key parts again here:\n\n1. 
The approach I took here involves creating a \"feed\" using RSS.app for every single news source I want to pull stories from (Twitter / Reddit / HackerNews / AI Blogs / Google News Feed / etc).\n 1. Each feed I create gives an endpoint I can simply make an HTTP request to get a list of every post / content piece that rss.app was able to extract.\n 2. With enough feeds configured, I’m confident that I’m able to detect every major story in the AI / Tech space for the day.\n2. After a feed is created in rss.app, I wire it up to the n8n workflow on a Scheduled Trigger that runs every few hours to get the latest batch of news stories.\n3. Once a new story is detected from that feed, I take that list of urls given back to me and start the process of scraping each one:\n 1. This is done by calling into a `scrape_url` sub-workflow that I built out. This uses the Firecrawl API `/scrape` endpoint to scrape the contents of the news story and returns its text content back in markdown format\n4. Finally, I take the markdown content that was scraped for each story and save it into an S3 bucket so I can later query and use this data when it is time to build the prompts that write the newsletter.\n\nSo by the end any given day with these scheduled triggers running across a dozen different feeds, I end up scraping close to 100 different AI news stories that get saved in an easy to use format that I will later prompt against.\n\n## Newsletter Generator Workflow Breakdown\n\nThis workflow is the big one that actually loads up all scraped news content, picks the top stories, and writes the full newsletter.\n\n### 1. Trigger / Inputs\n\n- I use an n8n form trigger that simply let’s me pick the date I want to generate the newsletter for\n- I can optionally pass in the previous day’s newsletter text content which gets loaded into the prompts I build to write the story so I can avoid duplicated stories on back to back days.\n\n### 2. 
Loading Scraped News Stories from the Data Lake\n\nOnce the workflow is started, the first two sections are going to load up all of the news stories that were scraped over the course of the day. I do this by:\n\n- Running a simple search operation on our S3 bucket prefixed by the date like: `2025-06-10/` (gives me all stories scraped on June 10th)\n- Filtering these results to only give me back the markdown files that end in an `.md` extension (needed because I am also scraping and saving the raw HTML as well)\n- Finally read each of these files and load the text content of each file and format it nicely so I can include that text in each prompt to later generate the newsletter.\n\n### 3. AI Editor Prompt\n\nWith all of that text content in hand, I move on to the **AI Editor** section of the automation responsible for picking out the top 3-4 stories for the day relevant to the audience. This prompt is very specific to what I’m going for with this specific content, so if you want to build something similar you should expect ***a lot*** of trial and error to get this to do what you want to. It's pretty beefy.\n\n- Once the top stories are selected, that selection is shared in a slack channel using a \"Human in the loop\" approach where it will wait for me to approve the selected stories or provide feedback.\n- For example, I may disagree with the top selected story on that day and I can type out in plain english to \"Look for another story in the top spot, I don't like it for XYZ reason\".\n- The workflow will either look for my approval or take my feedback into consideration and try selecting the top stories again before continuing on.\n\n### 4. Subject Line Prompt\n\nOnce the top stories are approved, the automation moves on to a very similar step for writing the subject line. It will give me its top selected option and 3-5 alternatives for me to review. 
Once again this get's shared to slack, and I can approve the selected subject line or tell it to use a different one in plain english.\n\n### 5. Write “Core” Newsletter Segments\n\nNext up, I move on to the part of the automation that is responsible for writing the \"core\" content of the newsletter. There's quite a bit going on here:\n\n- The action inside this section of the workflow is to split out each of the stop news stories from before and start looping over them. This allows me to write each section one by one instead of needing a prompt to one-shot the entire thing. In my testing, I found this to follow my instructions / constraints in the prompt much better.\n- For each top story selected, I have a list of \"content identifiers\" attached to it which corresponds to a file stored in the S3 bucket. Before I start writing, I go back to our S3 bucket and download each of these markdown files so the system is only looking at and passing in the relevant context when it comes time to prompt. The number of tokens used on the API calls to LLMs get very big when passing in all news stories to a prompt so this should be as focused as possible.\n- With all of this context in hand, I then make the LLM call and run a mega-prompt that is setup to generate a single core newsletter section. The core newsletter sections follow a very structured format so this was relatively easier to prompt against (compared to picking out the top stories). If that is not the case for you, you may need to get a bit creative to vary the structure / final output.\n- This process repeats until I have a newsletter section written out for each of the top selected stories for the day.\n\nYou may have also noticed there is a branch here that goes off and will conditionally try to scrape more URLs. We do this to try and scrape more “primary source” materials from any news story we have loaded into context. \n\nSay Open AI releases a new model and the story we scraped was from Tech Crunch. 
It’s unlikely that tech crunch is going to give me all details necessary to really write something really good about the new model so I look to see if there’s a url/link included on the scraped page back to the Open AI blog or some other announcement post.\n\nIn short, I just want to get as many primary sources as possible here and build up better context for the main prompt that writes the newsletter section.\n\n### 6. Final Touches (Final Nodes / Sections)\n\n- I have a prompt to generate an intro section for the newsletter based off all of the previously generated content\n - I then have a prompt to generate a newsletter section called \"The Shortlist\" which creates a list of other AI stories that were interesting but didn't quite make the cut for top selected stories\n- Lastly, I take the output from all previous node, format it as markdown, and then post it into an internal slack channel so I can copy this final output and paste it into the Beehiiv editor and schedule to send for the next morning.\n\n## Workflow Link + Other Resources\n\n- Github workflow links:\n - AI News Story / Data Ingestion Workflow: https://github.com/lucaswalter/n8n-ai-workflows/blob/main/ai_news_data_ingestion.json\n - Firecrawl Scrape Url Sub-Workflow: https://github.com/lucaswalter/n8n-ai-workflows/blob/main/firecrawl_scrape_url.json\n - AI Newsletter Generator Workflow: https://github.com/lucaswalter/n8n-ai-workflows/blob/main/ai_newsletter_generator.json\n- YouTube video that walks through this workflow step-by-step: https://www.youtube.com/watch?v=Nv5_LU0q1IY\n\nAlso wanted to share that my team and I run a free Skool community called [AI Automation Mastery](https://www.skool.com/ai-automation-mastery-group) where we build and share the automations we are working on. Would love to have you as a part of it if you are interested!","output":"Top Answer:\nDude, you are on fire! 
Thank you for this.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1l9pff8/i_built_an_ai_system_that_scrapes_stories_off_the/"}
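The data-lake read step in the newsletter entry above (search the S3 bucket with a date prefix like `2025-06-10/`, then keep only the `.md` keys because raw HTML is stored alongside) reduces to a key filter. A sketch under that assumption; the real workflow does this with n8n's S3 nodes, and the names here are illustrative:

```python
def select_markdown_keys(keys: list[str], day: str) -> list[str]:
    """Keep only same-day markdown files, mirroring the date-prefix
    search plus '.md' filter described in the post."""
    prefix = f"{day}/"
    return [k for k in keys if k.startswith(prefix) and k.endswith(".md")]


keys = [
    "2025-06-10/story-1.md",
    "2025-06-10/story-1.html",  # raw HTML snapshot, filtered out
    "2025-06-09/old-story.md",  # wrong day, filtered out
]
print(select_markdown_keys(keys, "2025-06-10"))  # → ['2025-06-10/story-1.md']
```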
{"instruction":"[Context: r/n8n] You can safely discard your 159 node n8n automation that created 500 AI shorts in 5 seconds\n\n","output":"Top Answer:\nGood","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1lvfz70/you_can_safely_discard_your_159_node_n8n/"}
{"instruction":"[Context: r/n8n] Looks ugly but it is now managing my investments...\n\nI used n8n to build an automated crypto market analyst that basically tells me what to do with my money.\n\nIt’s not a day trader but more like a mid-term investor that looks for good entry points to accumulate and smart moments to take profits, all while keeping track of the bigger macro picture and giving a sense of where we are in the cycle.\n\nI feed it tons of data: macro, meso, and micro indicators, on-chain metrics, sentiment, and live news and it spits out quick, digestible insights.\n\nIf you follow crypto, you probably know Benjamin Cowen. His cycle-based, data-driven approach inspired this system, though it’s powered by GPT-5 and built to process far more information at once.\n\nIt can produce full geek-level reports or just simple, actionable daily insights.\n\nA bunch of people asked me to share what it’s saying, so I set up an account that automatically posts its thoughts here:\n\n[x.com/InvestWithGPT](https://x.com/InvestWithGPT)\n\nI know people are both curious and skeptical about this kind of thing so feel free to roast me or ask anything.\n\n**UPDATE:** \n\nI made a lighter, prettier version available on [hunchmachine.com](http://hunchmachine.com)","output":"Top Answer:\nThat’s a lot of nodes to be told to just stack sats.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1oc0mup/looks_ugly_but_it_is_now_managing_my_investments/"}
{"instruction":"[Context: r/n8n] Your slop won’t sell\n\nGuys, 99% of posts I see here is by people with no technical knowledge. Your ai slop that makes reels or ai slop generated emails are useless. There is like a 1000 of you here making the same ai garbage slop that nobody needs. If you want easy money go do some rug pulling in crypto. Automation is an actual real business and your retarded pipeline is not unique and will only be good at one thing-Wasting tokens. Pls, just stop. There is enough ai slop out there. Learn to code, learn to actually do shit. \n\nEdit:\nMany people don’t seem to understand that I don’t have an issue with an honest businessmen out there automating something for themselves with a simple pipeline. That’s what n8n is for. My issue is with people who make a brain dead pipeline that like scrapes the web or something and then throws that shit into ai model to output a video reel. They the proceed to call themselves an automation engineer and start looking for work. It’s as if I built a hut out of mud and started calling myself a construction developer and offer my services to build skyscrapers. My mud hut will stand only as long as it doesn’t rain. And when the rain comes all these “automation” experts will be flooded with liability since they didn’t actually take time to learn about what they are doing. ","output":"Top Answer:\nYou’re probably correct, but what is the point of this post?\n\nWe, advanced users, who are deep into tech might agree with the state of this sub and n8n hype… but what is that for? Echo chamber?\n\nGatekeeping does nothing. This message is twofold: you either target snake oils salesmen (who won’t think you’re talking to them) or many many cases beginners will read this and get discouraged (yeah even soon-to-be good devs can too have discouraging beginnings)\n\nEither way nothing good really happens.\n\nI propose otherwise: what about instead of criticizing, we raise awareness? We teach the right path? 
We encourage and train the next generation of developers?\n\nIf I was starting today, that’s what I would like to have seen. Plant hope dude.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1lk9jvx/your_slop_wont_sell/"}
{"instruction":"[Context: r/n8n] i automated the entire cold outreach process in n8n (sorry to whoever's job i just replaced)\n\n","output":"Top Answer:\nUpdate - I have made the cold outreach N8N automation public\n\nhttps://www.reddit.com/r/n8n/comments/1jm842j/update_i_automated_the_entire_cold_outreach/\n\npast week I’ve been building a complete cold outreach automation system that does everything from finding prospects to sending perfectly personalized messages. here’s a sneak peek\n\nwhat my automation does:\n\nstep 1 - finding prospects - \ngenerates targeted prospect lists using apollo, basically tells apollo exactly what kind of leads we want and gets the url for them (no apollo account needed)\n\nstep 2 - processing the data - \nscrapes the apollo url using apify, cleans all the data, verifies which emails actually are deliverable, then stores everything in airtable\n\nstep 3 - deep research engine - \nthis is where it gets interesting - the system runs 5 parallel research processes:\n\n- searches the web about the lead and their company for news, reviews, anything useful\n- scrapes their linkedin profile for background\n- analyzes their linkedin posts to understand what they care about\n- scrapes and analyzes their company linkedin profile and posts\n- scrapes their company website each of these gets processed by ai to generate detailed reports, all stored back in airtable\n\nstep 4 - scoring and email generation - \nscores each lead based on whatever criteria matters for your business, then the ai drafts personalized emails using all the research\n\nstep 5 - outreach automation - \nsends personalized emails using smartlead and uses phantom buster to send linkedin connection requests (with daily limit of 15-20 connection requests to stay under linkedin’s radar)\n\n## the exact cost breakdown:\n\n- at 100 leads/month: $0.91 per lead\n- at 500 leads/month: $0.18 per lead\n- at 1,000 leads/month: $0.09 per lead\n- at 5,000 leads/month: $0.02 per lead\n- at 
10,000 leads/month: $0.01 per lead\n\nmonthly fixed costs:\n\n- phantom buster: $49/month (after free trial)\n- apify basic plan: $39/month (comes with $39 in credits)\n- apify scraping: ~$1.20 per 1000 leads for apollo, ~$1.20 per 1000 for linkedin\n- n8n: $0 (self-hosted)\n- email service: $3/month\n\nnote: cost per lead drops as you scale because the fixed monthly costs get spread across more leads\n\ntools used: n8n, chatgpt api, google gemini, tavily search, apify, firecrawl, airtable, and phantom buster.\n\ni’m planning to share the complete workflow once finished.\nwould this be something you’d use?","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1jgl5ek/i_automated_the_entire_cold_outreach_process_in/"}
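The per-lead cost table in the cold-outreach entry above is consistent with spreading roughly $91/month of fixed costs (Phantom Buster $49 + Apify $39 + email service $3) plus about $2.40 per 1,000 leads of scraping credits across the lead volume. A sketch that reproduces the quoted figures; the variable rate is an assumption inferred from the post:

```python
def cost_per_lead(leads: int, fixed: float = 91.0,
                  variable_per_1000: float = 2.40) -> float:
    """Fixed monthly costs spread over the month's leads, plus
    per-1000-lead scraping credits."""
    return (fixed + variable_per_1000 * leads / 1000) / leads


print(round(cost_per_lead(100), 2))     # → 0.91
print(round(cost_per_lead(1000), 2))    # → 0.09
print(round(cost_per_lead(10000), 2))   # → 0.01
```

This is why cost per lead falls almost linearly with volume: the variable term is tiny next to the fixed $91 until volumes get large.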
{"instruction":"[Context: r/n8n] I built an n8n workflow that automatically finds clients and made me my first $227. Here’s how it works.\n\nHey everyone,\n\nI wanted to share a small win. I recently made my first $227.74 using a client-finding machine I built in n8n. It's a simple idea but it worked surprisingly well, and I wanted to share the process.\n\nThe Problem\n\nMy goal was to find e-commerce stores with slow websites. Slow sites lose customers, so I knew the owners would be motivated to fix them. The problem was, finding these sites and reaching out to them manually is incredibly time-consuming and boring.\n\nThe Workflow (The Automation)\n\nI built an n8n workflow to do all the heavy lifting for me:\n\nThe Input: The workflow starts with a simple list of e-commerce websites that I provide.\n\nThe Speed Test: It then iterates through the list and uses a Website Speed Test API from RapidAPI to check each site's performance metrics.\n\nThe Filter: I set a simple condition in an IF node (e.g., if the load time is over 3 seconds). Only the slow websites pass this filter and continue to the next step.\n\nThe Outreach: For each slow website, the workflow automatically generates a simple performance report and sends a personalized email to the business owner, showing them the issue with their site.\n\nThe Business Model\n\nThis system started conversations for me on complete autopilot. When I first started, I had knowledge of WordPress and could identify a slow site, but I wasn't a top expert in speed optimization.\n\nSo, when a business owner would reply and we closed a deal, I hired a talented freelancer. to do the actual speed optimization work. I was essentially the automated project manager, connecting the client with the solution.\n80 percent for my self 20 percent to freelancer \n\n\nThe Result\n\nThis simple system has been a great way to start conversations and land a few clients. So far, I've made $227.74. 
It's not a huge amount, but it proves the concept of using automation to create business opportunities from scratch.\n\nJust wanted to share to show what's possible with a simple idea and a powerful tool like n8n. \n\nBelieve in God \n","output":"Top Answer:\nVery interesting, could you share more details of the flow? Some screenshot or something similar to see how you did it. \n\nAnd thanks for sharing your idea.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1ndb2tf/i_built_an_n8n_workflow_that_automatically_finds/"}
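The IF-node filter in the website-speed entry above (only sites loading slower than 3 seconds pass through to outreach) reduces to a threshold check over measured load times. A sketch with illustrative site names and figures:

```python
def slow_sites(results: dict[str, float], threshold_s: float = 3.0) -> list[str]:
    """Mirror the IF-node condition: keep only sites whose measured
    load time exceeds the threshold (3 s in the post)."""
    return [url for url, load_time in results.items() if load_time > threshold_s]


measured = {"shop-a.example": 1.9, "shop-b.example": 4.2, "shop-c.example": 6.0}
print(slow_sites(measured))  # → ['shop-b.example', 'shop-c.example']
```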
{"instruction":"[Context: r/n8n] This Automation Took Me From 0 to 175k Followers in Under 12 Months\n\nFull Video Walkthrough: [Link](https://youtu.be/m-7egcppFPk) \n \n \nThis is the automation that took me from zero to 175K followers in under 12 months across IG, TikTok, and YouTube. \n \nThe premise is simple: scrape social media to identify trending topics in your niche, spot patterns and content gaps, then generate actionable content ideas based on real data. \n \nThat final output comes in two forms: \n\nFirst, you get a daily report sent to your inbox every morning with a big-picture summary of trending topics over the last 24 hours across YouTube, Reddit, Twitter, and the web at large. This includes the top three videos in your niche, rising Reddit posts, trending tweets, and general web stories—all analyzed and synthesized. \n\n\nSecond, we push all the granular data scraped from those sources as well as the potential content scripts to Airtable. These content scripts are the real money maker as they are very detailed (hooks, general storyline, and outro are included) and I took measures to avoid AI slop-- the LLM that produces these is trained on [Kallaway's](https://www.youtube.com/@kallawaymarketing) social media storytelling principles so we have something of actual value to work with. \n \nNow, for how it actually works. \n \nThe workflow is built on the back of four scraping funnels: \n \nYouTube: grabs the top 10 videos in your niche, extracts transcripts via Apify ($0.007 per transcript), filter out shorts, and analyzes the transcript to pull out the relevant information. \n\nReddit: same logic as above, but we instead pull the top 5 rising posts from your subreddit of choice. 
\n\nTwitter: same logic again, this time look at the top 50 tweets across multiple search terms via Apify's Tweet Scraper V2 at $0.004 per thousand tweets (way cheaper than Twitter's API) \n \nLastly, we use Perplexity for general web catchall to identify larger niche related news. \n \nAll analysis gets aggregated and pushed two directions: the email report generator and the content brainstorm module LLM. Total cost per run comes out to about 32 cents daily. \n \nTruth be told, this isn't just a content tool-- the bones are a scraping template. What you do with the data is up to you. The ultimate goal is to never stare at a blank page hoping content ideas materialize. You need a starting point based on what's actually trending, not what you think is trending (or worse, what's already trending down. \n\nLess doomscrolling, more data-driven creation.","output":"Top Answer:\nOk.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1oj8q69/this_automation_took_me_from_0_to_175k_followers/"}
{"instruction":"[Context: r/n8n] I built this AI Automation to write viral TikTok/IG video scripts (got over 1.8 million views on Instagram)\n\nI run an Instagram [account](https://www.instagram.com/davi.d_roberts/reels/) that publishes short form videos each week that cover the top AI news stories. I used to monitor twitter to write these scripts by hand, but it ended up becoming a huge bottleneck and limited the number of videos that could go out each week.\n\nIn order to solve this, I decided to automate this entire process by building a system that scrapes the top AI news stories off the internet each day (from Twitter / Reddit / Hackernews / other sources), saves it in our data lake, loads up that text content to pick out the top stories and write video scripts for each.\n\nThis has saved a ton of manual work having to monitor news sources all day and let’s me plug the script into ElevenLabs / HeyGen to produce the audio + avatar portion of each video.\n\nOne of the recent videos we made this way got over **1.8 million** views on Instagram and I’m confident there will be more hits in the future. It’s pretty random on what will go viral or not, so my plan is to take enough “shots on goal” and continue tuning this prompt to increase my changes of making each video go viral.\n\n## Here’s the workflow breakdown\n\n### 1. Data Ingestion and AI News Scraping\n\nThe first part of this system is actually in a separate workflow I have setup and running in the background. I actually made another [reddit post](https://www.reddit.com/r/n8n/comments/1kzaysv/i_built_a_workflow_to_scrape_virtually_any_news/) that covers this in detail so I’d suggestion you check that out for the full breakdown + how to set it up. I’ll still touch the highlights on how it works here:\n\n1. The main approach I took here involves creating a \"feed\" using RSS.app for every single news source I want to pull stories from (Twitter / Reddit / HackerNews / AI Blogs / Google News Feed / etc).\n 1. 
Each feed I create gives an endpoint I can simply make an HTTP request to get a list of every post / content piece that rss.app was able to extract.\n 2. With enough feeds configured, I’m confident that I’m able to detect every major story in the AI / Tech space for the day. Right now, there are around ~13 news sources that I have setup to pull stories from every single day.\n2. After a feed is created in rss.app, I wire it up to the n8n workflow on a Scheduled Trigger that runs every few hours to get the latest batch of news stories.\n3. Once a new story is detected from that feed, I take that list of urls given back to me and start the process of scraping each story and returns its text content back in markdown format\n4. Finally, I take the markdown content that was scraped for each story and save it into an S3 bucket so I can later query and use this data when it is time to build the prompts that write the newsletter.\n\nSo by the end any given day with these scheduled triggers running across a dozen different feeds, I end up scraping close to 100 different AI news stories that get saved in an easy to use format that I will later prompt against.\n\n### 2. Loading up and formatting the scraped news stories\n\nOnce the data lake / news storage has plenty of scraped stories saved for the day, we are able to get into the main part of this automation. This kicks off off with a scheduled trigger that runs at 7pm each day and will:\n\n- Search S3 bucket for all markdown files and tweets that were scraped for the day by using a prefix filter\n- Download and extract text content from each markdown file\n- Bundle everything into clean text blocks wrapped in XML tags for better LLM processing - This allows us to include important metadata with each story like the source it came from, links found on the page, and include engagement stats (for tweets).\n\n### 3. 
Picking out the top stories\n\nOnce everything is loaded and transformed into text, the automation moves on to executing a prompt that is responsible for picking out the top 3-5 stories suitable for an audience of AI enthusiasts and builder’s. The prompt is pretty big here and highly customized for my use case so you will need to make changes for this if you are going forward with implementing the automation itself.\n\nAt a high level, this prompt will:\n\n- Setup the main objective\n- Provides a “curation framework” to follow over the list of news stories that we are passing int\n- Outlines a process to follow while evaluating the stories\n- Details the structured output format we are expecting in order to avoid getting bad data back\n\n```jsx\n<objective>\nAnalyze the provided daily digest of AI news and select the top 3-5 stories most suitable for short-form video content. Your primary goal is to maximize audience engagement (likes, comments, shares, saves).\n\nThe date for today's curation is `{{ new Date(new Date($('schedule_trigger').item.json.timestamp).getTime() + (12 * 60 * 60 * 1000)).format(\"yyyy-MM-dd\", \"America/Chicago\") }}`. Use this to prioritize the most recent and relevant news. You MUST avoid selecting stories that are more than 1 day in the past for this date.\n</objective>\n\n<curation_framework>\nTo identify winning stories, apply the following virality principles. A story must have a strong \"hook\" and fit into one of these categories:\n\n1. **Impactful:** A major breakthrough, industry-shifting event, or a significant new model release (e.g., \"OpenAI releases GPT-5,\" \"Google achieves AGI\").\n2. **Practical:** A new tool, technique, or application that the audience can use *now* (e.g., \"This new AI removes backgrounds from video for free\").\n3. **Provocative:** A story that sparks debate, covers industry drama, or explores an ethical controversy (e.g., \"AI art wins state fair, artists outraged\").\n4. 
**Astonishing:** A \"wow-factor\" demonstration that is highly visual and easily understood (e.g., \"Watch this robot solve a Rubik's Cube in 0.5 seconds\").\n\n**Hard Filters (Ignore stories that are):**\n* **Ad-driven:** Primarily promoting a paid course, webinar, or subscription service.\n* **Purely Political:** Lacks a strong, central AI or tech component.\n* **Substanceless:** Merely amusing without a deeper point or technological significance.\n</curation_framework>\n\n<hook_angle_framework>\nFor each selected story, create 2-3 compelling hook angles that could open a TikTok or Instagram Reel. Each hook should be designed to stop the scroll and immediately capture attention. Use these proven hook types:\n\n**Hook Types:**\n- **Question Hook:** Start with an intriguing question that makes viewers want to know the answer\n- **Shock/Surprise Hook:** Lead with the most surprising or counterintuitive element\n- **Problem/Solution Hook:** Present a common problem, then reveal the AI solution\n- **Before/After Hook:** Show the transformation or comparison\n- **Breaking News Hook:** Emphasize urgency and newsworthiness\n- **Challenge/Test Hook:** Position as something to try or challenge viewers\n- **Conspiracy/Secret Hook:** Frame as insider knowledge or hidden information\n- **Personal Impact Hook:** Connect directly to viewer's life or work\n\n**Hook Guidelines:**\n- Keep hooks under 10 words when possible\n- Use active voice and strong verbs\n- Include emotional triggers (curiosity, fear, excitement, surprise)\n- Avoid technical jargon - make it accessible\n- Consider adding numbers or specific claims for credibility\n</hook_angle_framework>\n\n<process>\n1. **Ingest:** Review the entire raw text content provided below.\n2. **Deduplicate:** Identify stories covering the same core event. Group these together, treating them as a single story. All associated links will be consolidated in the final output.\n3. 
**Select & Rank:** Apply the **Curation Framework** to select the 3-5 best stories. Rank them from most to least viral potential.\n4. **Generate Hooks:** For each selected story, create 2-3 compelling hook angles using the **Hook Angle Framework**.\n</process>\n\n<output_format>\nYour final output **must** be a single, valid JSON object and nothing else. Do not include any text, explanations, or markdown formatting like ` ```json ` before or after the JSON object.\n\nThe JSON object must have a single root key, `stories`, which contains an array of story objects. Each story object must contain the following keys:\n- `title` (string): A catchy, viral-optimized title for the story.\n- `summary` (string): A concise, 1-2 sentence summary explaining the story's hook and why it's compelling for a social media audience.\n- `hook_angles` (array of objects): 2-3 hook angles for opening the video. Each hook object contains:\n - `hook` (string): The actual hook text/opening line\n - `type` (string): The type of hook being used (from the Hook Angle Framework)\n - `rationale` (string): Brief explanation of why this hook works for this story\n- `sources` (array of strings): A list of all consolidated source URLs for the story. These MUST be extracted from the provided context. You may NOT include URLs here that were not found in the provided source context. The url you include in your output MUST be the exact verbatim url that was included in the source material. The value you output MUST be like a copy/paste operation. You MUST extract this url exactly as it appears in the source context, character for character. Treat this as a literal copy-paste operation into the designated output field. Accuracy here is paramount; the extracted value must be identical to the source value for downstream referencing to work. You are strictly forbidden from creating, guessing, modifying, shortening, or completing URLs. 
If a URL is incomplete or looks incorrect in the source, copy it exactly as it is. Users will click this URL; therefore, it must precisely match the source to potentially function as intended. You cannot make a mistake here.\n```\n\nAfter I get the top 3-5 stories picked out from this prompt, I share those results in Slack so I have an easy-to-follow trail of stories for each news day.\n\n### 4. Loop to generate each script\n\nFor each of the selected top stories, I then continue to the final part of this workflow, which is responsible for actually writing the TikTok / IG Reel video scripts. Instead of trying to 1-shot this and generate them all at once, I iterate over each selected story and write them one by one.\n\nEach of the selected stories goes through a process like this:\n\n- Pulls additional sources from the story URLs to get more context and primary source material\n- Feeds the full story context into a viral script writing prompt\n- Generates multiple different hook options for me to later pick from\n- Creates two different 50-60 second scripts optimized for talking-head style videos (so I can pick out which one is most compelling)\n- Uses examples of previously successful scripts to maintain consistent style and format\n- Shares each completed script in Slack for me to review before passing it off to the video editor.\n\n**Script Writing Prompt**\n\n```jsx\nYou are a viral short-form video scriptwriter for David Roberts, host of \"The Recap.\"\n\nFollow the workflow below **each run** to produce two 50-60-second scripts (140-160 words).\n\nBefore you write your final output, I want you to closely review each of the provided `REFERENCE_SCRIPTS` and think deeply about what makes them great. 
Each script that you output must be considered a great script.\n\n────────────────────────────────────────\n\nSTEP 1 – Ideate\n\n• Generate **five** distinct hook sentences (≤ 12 words each) drawn from the STORY_CONTEXT.\n\nSTEP 2 – Reflect & Choose\n\n• Compare hooks for stopping power, clarity, curiosity.\n\n• Select the **two strongest hooks** (label TOP HOOK 1 and TOP HOOK 2).\n\n• Do not reveal the reflection—only output the winners.\n\nSTEP 3 – Write Two Scripts\n\nFor each top hook, craft **one flowing script** ≈ 55 seconds (140-160 words).\n\nStructure (no internal labels):\n\n– Open with the chosen hook.\n\n– One-sentence explainer.\n\n– **5-7** rapid wow-facts / numbers / analogies.\n\n– **2-3** sentences on why it matters or possible risk.\n\n– **Final line = a single CTA**\n\n• Ask viewers to comment with a forward-looking question **or**\n\n• Invite them to follow The Recap for more AI updates.\n\nStyle: confident insider, plain English, light attitude; active voice, present tense; mostly ≤ 12-word sentences; explain unavoidable jargon in ≤ 3 words.\n\nOPTIONAL POWER-UPS (use when natural)\n\n• Authority bump – Cite a notable person or org early for credibility.\n\n• Hook spice – Pair an eye-opening number with a bold consequence.\n\n• Then-vs-Now snapshot – Contrast past vs present to dramatize change.\n\n• Stat escalation – List comparable figures in rising or falling order.\n\n• Real-world fallout – Include 1-3 niche impact stats to ground the story.\n\n• Zoom-out line – Add one sentence framing the story as a systemic shift.\n\n• CTA variety – If using a comment CTA, pose a provocative question tied to stakes.\n\n• Rhythm check – Sprinkle a few 3-5-word sentences for punch.\n\nOUTPUT FORMAT (return exactly this—no extra commentary, no hashtags)\n\n1. HOOK OPTIONS\n \n • Hook 1\n \n • Hook 2\n \n • Hook 3\n \n • Hook 4\n \n • Hook 5\n \n2. TOP HOOK 1 SCRIPT\n \n [finished 140-160-word script]\n \n3. 
TOP HOOK 2 SCRIPT\n \n [finished 140-160-word script]\n\nREFERENCE_SCRIPTS\n\n<Pass in example scripts that you want to follow and the news content loaded from before>\n```\n\n### 5. Extending this workflow to automate further\n\nRight now my process for creating the final video is semi-automated, with a human-in-the-loop step: we copy the output of this automation into other tools like HeyGen to generate the talking avatar from the final script, then hand that over to my video editor to add in the b-roll footage that appears on the top part of each short-form video.\n\nMy plan is to automate this further over time by adding another human-in-the-loop step at the end to pick out the script we want to go forward with → using another prompt that is responsible for coming up with good b-roll ideas at certain timestamps in the script → using a videogen model to generate that b-roll → finally stitching it all together with json2video.\n\nDepending on your workflow and other constraints, it is really up to you how far you want to automate each of these steps.\n\n## Workflow Link + Other Resources\n\n- YouTube video that walks through this workflow step-by-step: https://www.youtube.com/watch?v=7WsmUlbyjMM\n- The full n8n workflow, which you can copy and paste directly into your instance, is on GitHub here: https://github.com/lucaswalter/n8n-ai-workflows/blob/main/short_form_video_script_generator.json\n\nAlso wanted to share that my team and I run a free Skool community called [AI Automation Mastery](https://www.skool.com/ai-automation-mastery-group) where we build and share the automations we are working on. 
Would love to have you as a part of it if you are interested!","output":"Top Answer:\nHaha I wanted to come shit on your post and say \"show me your account or I call BS\" but low and behold, you provided a link in the 5th word of the post.\n\n\nCongrats!","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1loafvx/i_built_this_ai_automation_to_write_viral/"}
|
| 10 |
+
{"instruction":"[Context: r/n8n] 10 things I wish I knew before diving into AI automation (after building 100+ workflows)\n\nBeen deep in the automation game for the past year - here's what actually matters vs. what everyone talks about:\n\n**1. Start stupidly simple** Your first automation should take 10 minutes, not 10 hours. I wasted weeks on complex builds when a simple \"new email → Slack notification\" would've taught me more.\n\n**2. Document your builds publicly** Every automation you create is potential content. Screenshots, learnings, failures - it all becomes proof of expertise. I get more clients from sharing my process than from perfect demos.\n\n**3. Master the HTTP Request node first** Seriously. Half the \"limitations\" people complain about disappear when you can build custom API calls. It's your Swiss Army knife for everything the built-in nodes can't handle.\n\n**4. Stop calling yourself an \"automation expert\"** Everyone says that. Instead: \"I help \\[specific industry\\] eliminate \\[specific pain point\\].\" Specificity attracts premium clients who have that exact problem.\n\n**5. Your biggest wins come from saying no** Turned down a $500 project last month because it wasn't aligned with my positioning. Client came back two weeks later with a $3K project that was perfect fit. Boundaries create value.\n\n**6. Error handling is where amateurs get exposed** Everyone shows the happy path. Pros build for when APIs go down, data formats change, or users input garbage. Plan for chaos.\n\n**7. Share your failures, not just successes** \"Here's how I broke a client's workflow and what I learned\" gets way more engagement than \"Look at this perfect automation.\" Vulnerability builds trust.\n\n**8. The money is in ongoing optimization, not one-time builds** Clients pay once for setup, monthly for \"make it work better.\" Maintenance contracts beat project work every time.\n\n**9. 
Your network determines your net worth** Other automators become referral sources, not competition. Help people in communities, share knowledge freely. Half my clients come from automator referrals now.\n\n**10. Build your own systems first** Nothing proves automation expertise like having your own lead generation, content creation, and client onboarding automated. Practice what you preach.\n\n**Bonus insight**: The automators making real money talk about business outcomes, not technical features. \"Saved 15 hours/week\" beats \"Built a 47-node workflow\" every time.\n\nWhat's your biggest automation learning curve? Always curious what trips people up vs. what clicks immediately.","output":"Top Answer:\nThis list is solid AF, dude! Number 3 about mastering HTTP node first is the real MVP - seeing this mistake constantly. I've built scrapers that hit 1000+ websites with just that node and proper error handling.\n\nThat error handling point (#6) is where the pros separate from amateurs. Had a client workflow break at 2am because their API changed formats - now I build every system assuming APIs will fail, because they always do eventually lol.\n\nThe business outcomes focus is where the real money is. Built a system for a client that went from $5M to $15M+ annual revenue - they dgaf about my 47-node workflow, they care about the 30+ hours/week I saved their team.\n\nOne thing I'd add: \\*\\*Build measurable workflows from day one\\*\\*. Track every automation with before/after metrics - makes selling the next project 10x easier when you can say \"last time we cut processing time by 84%.\"\n\nThe \"document publicly\" advice is gold too. Half my clients come from showing my process in Discord/Reddit groups and helping people solve problems for free. Reputation > marketing every time bro.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1ldvy4n/10_things_i_wish_i_knew_before_diving_into_ai/"}
|
| 11 |
+
{"instruction":"[Context: r/n8n] N8N the best tool for small business\n\nI run a small real estate agency and I’m the company nerd, or at least I’m the only one who can touch technology without causing damage.\n3 months ago I stumbled across N8N, don’t even remember how..\nAnyways since then our productivity grew so much and we process leads so much faster now.\nPlus I have been building all sorts of agents for my team (I keep the agents simple, each serving a specific purpose), database automation, contract writers, background investigations, and more.. I also wrote a decent knowledge base and I have clients chatting with it and working with the agents until they’re qualified or ask for human contact. It basically reduced workload by 80% for the sales department.\n\nI can really automate anything, this software is amazing.\n\nJust today I set up an agent that reaches out to users on my website who leave their email to download an investing guide; the agent knows what the user has been reading, finds something in its knowledge base that adds value to the user and then crafts an email designed to spark conversation with the lead.\nI really look forward to seeing if this gimmick alone increases conversion.\nAnd nothing.. 
I’m just a happy business owner wondering if even my job will be taken over by ai agents at some point.\nNice but scary, but mostly nice.\n\nKeep building.\nLove this community.\n\n\n","output":"Top Answer:\nI’m just curious with all this “lead automation” if anyone is actually seeing new clients from these “leads” feels like as AI use increases globally, these leads will become more and more diluted/saturated with AI on the other end too (less likely to contact an actual human that will look at your info)\n\nPerhaps it’s just the future of lead generation, no more human touching it because it’s become a massive time sink of garbage in garbage out- few actual prospects.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1jwspka/n8n_the_best_tool_for_small_business/"}
|
| 12 |
+
{"instruction":"[Context: r/n8n] The only way to make $5000 per month with N8N\n\nDo your job well. \n\n\\_\\_\\_\n\nI see a lot of people **frustrated** about content in social media that tells them that everyone can make EASY money with N8N and shares their templates and courses - you start to feel like you're missing something when everyone around you is successful.\n\nThese people **are lying** to get their own benefits from AI trends - they make money on content/education, not on real projects. Most of them never **tried** to build something that actually works or acquire real clients.\n\nMost of the templates are just pieces of crap, stolen three times over.\n\nThat’s why people jump into the real world after their courses and can’t make even a penny with the knowledge they’ve acquired.\n\nMany people teach how to sell solutions, not how to build them. As a result, the market is full of crappy agencies with zero-experience people trying to trick clients and make junk that never works.\n\nI spent 5 years among such agencies and saw hundreds of thousands of dollars spent on solutions that never made it to production.\n\n\n\n**So, how do you do real stuff and make money without pushy sales techniques?**\n\nI’ve made $5k per month for the last 3 years in a country where the average salary is $800.\n\nDo I do sales? No. Cold outreach? No.\n\nUpwork? Not anymore, I was banned.\n\nSo, where do I get most of my clients? **Relationships.**\n\nPeople trust people, not ads.\n\n\n\n**How do you build relationships from scratch?**\n\nGet your first projects for free to gain experience and meet new people. Help others in communities, whether you know the answer or not. \n\nContent is a part of building relationships, because through your content people get to know you and feel closer to you as a person. 
Choose one social media platform and share your knowledge, your cases, and interesting finds from the internet.\n\nYou don’t need a lot - in reality, you just need to do 1-3 projects really well and build relationships with those clients. If you make them money with your solution, they will come back to you over and over again. Half of my clients come back even after years because they know I can provide quality solutions and really help them.\n\nOne good, proactive client can supply you with dozens of projects so you’ll never need to spend time acquiring other clients - this is how many companies work in totally different niches for decades.\n\n**How do you provide good quality?**\n\nWork hard. Spend more time learning new tech and improving your quality rather than selling. Do audits of projects you’ve completed to find bugs. Focus on the long term and ignore the hype.\n\nIf you want to make more money, you can always start transforming your freelance work into a business. Hire additional people and teach them how to do it well. Attract more clients while maintaining high quality.\n\n\n\n**Be honest, be smart, and care about people and your job -** this is the only way to make $5k per month with N8N.","output":"Top Answer:\nRelationships are everything","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1ky8gnh/the_only_way_to_make_5000_per_month_with_n8n/"}
|
| 13 |
+
{"instruction":"[Context: r/n8n] Most people are only using ChatGPT at about 10% of its potential.\n\nI came across a solid “Beginner → Pro Prompting Cheat Sheet” that breaks down: \n- The most common mistakes people make when crafting prompts \n- Useful commands like: list, summarize, elaborate, pros & cons \n- Prompt structures (TREF, SCET, ROSES, etc.) \n- Parameters like temperature, frequency penalty, and stop words \n- How different tones (professional, empathetic, inspirational) change results \n\nHonestly, just applying the “prompt structures” section made my ChatGPT outputs 10x more reliable. \n\nHere’s the sheet (image attached). Curious — what’s your **go-to prompt trick** that consistently gives you better results?","output":"Top Answer:\nlol this post has the same vibe as those \"people only use 10% of their brain\" statements which is just exaggeration","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1n6hafn/most_people_are_only_using_chatgpt_at_about_10_of/"}
|
| 14 |
+
{"instruction":"[Context: r/n8n] All of N8N workflows I could find (1000+) 😋 enjoy !\n\n# I created a script to download all the n8n workflows from the n8n website so I could use them locally, I added all the workflows I could find on git too, so I made a repo with 1000+ workflows for myself but if it benefits others why not... so have fun feel free to start and use whenever you need. I will add more in a few weeks :) meanwhile enjoy those if it helps anyone\n\ndisclaimer : I didn't create any of those workflows. use at your own risk. check them.\n\n[https://github.com/Zie619/n8n-workflows](https://github.com/Zie619/n8n-workflows)\n\n","output":"Top Answer:\nNow make n8n flow that detects when new flows are pushed, downloads and automatically pushes them to github :)","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1kx9u01/all_of_n8n_workflows_i_could_find_1000_enjoy/"}
|
| 15 |
+
{"instruction":"[Context: r/n8n] Why I Left n8n for Python (And Why It Was the Best Decision for My Projects)\n\nHey everyone,\n\nI wanted to share my experience moving away from n8n and why I decided to switch fully to Python for all my automation needs. Hopefully, this post helps anyone considering their options or running into similar frustrations!\n\n**Background: My Start With n8n**\n\nI first discovered n8n in January 2024. At that time, I already had a solid foundation in Python, which made picking up n8n’s visual workflow builder relatively easy. The initial learning curve wasn’t too steep, and I was quickly able to put together useful automations for myself and some freelance clients.\n\n**From Hobby to Business**\n\nAfter a few months, I started offering automation services to others. As I built more complex systems, I began to notice some persistent issues with n8n that started holding me back, especially as my workflows became more advanced and business-critical.\n\n**Don’t get me wrong, n8n is a great tool, and I’m not here to trash it.** \nFor many people and many use cases, it’s a fantastic way to automate repetitive tasks and integrate different apps without having to write code. It’s open-source, self-hostable, and has a vibrant community behind it. If you’re looking to automate simple workflows, connect web services, or just want a visual way to build automations, n8n does the job really well.\n\n**But here’s the thing:** \nn8n isn’t the “everything tool” that some people make it out to be. 
There’s a narrative out there that no-code tools like n8n can replace traditional programming for any task, but that just isn’t true, especially when your automations start getting complicated, need to scale, or require advanced logic.\n\n**The Limitations I Encountered With n8n**\n\nHere are some of the main challenges I faced:\n\n* **File Handling:** n8n is not great at dealing with files, especially when you need to process, move, or transform large or multiple files. The built-in nodes often fell short, and workarounds became too hacky or unreliable for my liking.\n* **Performance Issues:** As my workflows grew in size and complexity, n8n started to lag. Large workflows would slow down or fail unpredictably, and scaling was a real challenge. This is a huge issue when you're trying to deliver robust, professional-grade solutions.\n* **Debugging:** Debugging in n8n can be quite painful. The visual interface makes simple workflows easy to follow, but once things get more complicated, it’s difficult to pinpoint exactly where things are going wrong, especially with more advanced logic or error handling.\n* **Tool/Node Limitations:** Sometimes the functionality I needed just wasn’t available in n8n, or required a ton of awkward workarounds. You’re limited to the nodes and options provided by the platform, which can stifle creativity and flexibility.\n* **Reliable AI Agents:** n8n struggles when you need to build truly reliable AI agents. While you can connect to AI APIs easily, managing complex logic, persistent state, and robust error handling for AI-powered workflows is difficult. 
For anything beyond basic AI use cases, you’ll quickly run into reliability issues and limitations.\n\n**In summary:**\n\n* n8n is excellent for prototyping, MVPs, or connecting services quickly.\n* For more complex, large-scale, or mission-critical automations, I kept running into its limits—performance, debugging, and custom logic being the big ones.\n* Python (or any full programming language) opens up a whole new world of possibilities that n8n just isn’t built to handle.\n\n**Switching to Python: Game Changer**\n\nAfter hitting these walls over and over, I decided to dive back into Python and started rewriting my automation projects from scratch. Honestly, it was one of the best decisions I’ve made for my workflow and my business. Here’s why:\n\n* I was able to create far more professional, scalable, and maintainable systems.\n* There are no arbitrary limits, if I can think of it, I can probably build it.\n* Debugging is straightforward, especially with all the tools and libraries available for Python development.\n* I can handle files, APIs, data processing, and even machine learning, all in one place.\n\n**Advice: Hybrid Approach**\n\nIf you’re not ready to go “all in” with Python, there’s always the hybrid route: orchestrate the general workflow in n8n and use Python scripts for the heavy lifting. This can give you the best of both worlds and ease the transition.","output":"Top Answer:\nThis resonates a lot. I went through almost the exact same progression. n8n was incredible for fast prototyping, but once things got business-critical (especially around performance, file handling, and AI logic), I kept running into walls.\n\nSwitching to Python for the core logic was a total unlock. I still sometimes use n8n as an orchestrator, but now all the heavy lifting runs in clean, scalable Python services. 
Great write-up, more people need to hear this part of the journey too.","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1mcm9d2/why_i_left_n8n_for_python_and_why_it_was_the_best/"}
|
| 16 |
+
{"instruction":"[Context: r/n8n] I built a free browser extension that builds your n8n workflows for you, and it’s out now! (Chrome and Firefox)\n\n","output":"Top Answer:\nSame question as others have had, will you be publishing it to a guthub repo? That way we can have a look at the code to make sure it’s not doing anything suspect like taking our credentials?","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1jzpj2u/i_built_a_free_browser_extension_that_builds_your/"}
|
| 17 |
+
{"instruction":"[Context: r/n8n] I built a 24/7 AI Receptionist with n8n so our local restaurant never misses a call again.\n\nRestaurants miss a lot of calls, especially during peak hours. That's a ton of lost business. To fix this, I built a fully automated AI Receptionist using n8n that runs 24/7 and never misses a call.\n\nHere’s the simple version of how it works:\n\n* **AI Answers the Phone:** When a customer calls, a voice AI from **Vapi** picks up, ready to help.\n* **Understands the Request:** It can answer basic questions (hours, location) or handle a reservation request.\n* **Books the Table:** The AI asks for the necessary details like name, party size, date, and time.\n* **Confirms & Notifies:** Once the details are captured, the n8n workflow instantly:\n * Confirms the booking with the customer on the call.\n * Sends both an **SMS** and **Email** confirmation.\n * Adds the event to the restaurant's calendar.\n * Logs everything in Google Sheets and a database.\n\nThe entire process is hands-free for the staff. It's a simple solution to a costly problem, all powered by n8n.\n\n🔗 **Workflow (public):** [https://drive.google.com/file/d/1uSsWaUedA3\\_kSsREcAjx\\_73dmBlF05p5/view?usp=sharing](https://drive.google.com/file/d/1uSsWaUedA3_kSsREcAjx_73dmBlF05p5/view?usp=sharing) \n \n➡️ If you found this helpful, please upvote and follow for more n8n templates, happy to connect with you.","output":"Top Answer:\nVery interesting to see Groq. Why groq? What model are you using? Fascinating tbh","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1nfx1uj/i_built_a_247_ai_receptionist_with_n8n_so_our/"}
|
| 18 |
+
{"instruction":"[Context: r/n8n] I Built a Personal AI Assistant That Runs My Life Through WhatsApp, Powered by n8n and a Self-Hosted LLM\n\nHey everyone,\n\nI wanted to share a project I've been working on to finally stop switching between a dozen apps to manage my day. I've built a personal AI assistant that I interact with entirely through WhatsApp, with [n8n.io](http://n8n.io) as the backbone. \n**Here’s a quick look at what it can do (with real examples):**\n\n* **Manages My Bills:** I can forward it a message with my credit card due dates. It parses the text, totals the bill amounts, and automatically sets reminders in my calendar 2 days before each payment is due.\n* **Keeps My Schedule:** I can say, \"Remind me by eve to hit the gym,\" and it adds it to my Google Calendar and sends me a reminder notification.\n* **Summarizes My Inbox:** Instead of doomscrolling through emails, I ask, \"check do I have any important mail today?\" and it gives me a clean, bulleted list of important subjects and senders.\n* **Understands Images (OCR):** I snapped a photo of a delivery address, and it extracted all the text, identified the pincode, state, and other details. Super useful for quickly saving info without typing.\n* **Acts as a Music DJ:** It can suggest playlists for any mood or task. When I asked for Ilaiyaraaja songs for work, it gave me a curated list and then created a YouTube playlist for me on command.\n\n**The Tech Setup (The Fun Part):**\n\nThe real magic is the workflow I built in **n8n** (snapshot attached). 
It orchestrates everything:\n\n* **Entry Point:** A WhatsApp trigger node kicks everything off.\n* **Central AI Brain:** A primary AI node receives the message and figures out what I want to do (my \"intent\").\n* **Delegation to Specialized Agents:** Based on the intent, it passes the task to a specific sub-workflow.\n * **Calendar/Task Agents:** These are straightforward nodes that connect directly to Google Calendar and Tasks APIs to create, get, or update events.\n * **Research Agent:** This is my favorite part. To avoid hallucinations and get current information, this agent doesn't just rely on a generic LLM. It's configured to query **Wikipedia** and my own **self-hosted Perplexity instance (**Perplexica is an open-source AI-powered searching tool**)** running on a private VM. This gives it reliable and up-to-the-minute data for my queries.\n * **Image Analysis:** For images, it calls an external API to perform OCR, then feeds the extracted text back to the main AI for interpretation.\n\nIt's been an incredibly powerful way to create a single, conversational interface for my digital life. The fact that I can host the core logic myself with n8n and even the research LLM makes it even better.\n\nWhat do you all think? Any other cool features I should consider adding to the workflow? Happy to answer any questions about the setup","output":"Top Answer:\nCan you share the code for this setup?","source":"reddit_n8n","url":"https://www.reddit.com/r/n8n/comments/1mtv9re/i_built_a_personal_ai_assistant_that_runs_my_life/"}
|
| 19 |
+
{"instruction":"[Context: r/laravel] FilaForms - Native Filament form builder I built (visual builder, submissions, notifications, analytics)\n\nAfter years of rebuilding contact forms, newsletter signups, and application forms for every single Laravel project, I finally snapped and built a proper solution.\n\n[FilaForms](https://filaforms.app) \\- A Filament plugin that handles ALL your public-facing forms in one go.\n\n# The Problem It Solves\n\nEvery Laravel app needs forms that visitors fill out. Contact forms, job applications, surveys, newsletter signups - we build these over and over. Each time writing validation, handling file uploads, setting up email notifications, building submission dashboards, adding CSV exports...\n\n# What I Built\n\nA native Filament plugin that gives you:\n\n* **Visual form builder** with 25+ field types (including list-items, ratings, file uploads)\n* **Drag & drop interface** \\- no code needed for form creation\n* **Submission management dashboard** built into the Filament admin\n* **Built-in analytics** to see how your forms perform\n* **Conditional logic & multi-step forms** for complex workflows\n* **Auto-responders & email/in-app notifications** with customizable templates\n* **CSV/Excel exports** with bulk operations\n* **Progress saving** so users don't lose partially filled forms (coming soon)\n\n# The Technical Bits\n\n* It's pure Filament components under the hood (no iframes, no external JS libraries)\n* Self-hosted on your infrastructure - your data stays yours\n* Works with existing Filament panels and Livewire apps\n* Integrates with your current authorization\n\n# Some Background\n\nI've been contributing to the Filament ecosystem for a while (you might know Relaticle CRM, FlowForge, or Custom Fields). 
This is solving a problem I've personally faced in every Laravel project.\n\n**Link:** [filaforms.app](https://filaforms.app)\n\nHappy to answer any questions about the implementation, architecture decisions, or specific use cases. Also really interested in what types of forms you're all building most often - always looking for edge cases to handle better.","output":"Top Answer:\nOkay FilaForms day 1 was wild. \n\nYou all really stress-tested this thing. Fixed a bunch of issues, pushed updates, and it's way more stable now.\n\nThanks for being patient with the beta rough edges.\n\n(grab early bird pricing while I'm still fixing things 😅)","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1nomon2/filaforms_native_filament_form_builder_i_built/"}
|
| 20 |
+
{"instruction":"[Context: r/laravel] NativePHP finally goes truly native\n\n","output":"Top Answer:\nTruly Native as in it compiles your PHP code into Kotlin or Swift depending on the platform ? Or is it like Cordova still a webpage that uses native plugins for a few functionalities like share buttons ?\n\nIf it's the latter, it's fine but we really need to stop calling these hybrid apps \"Native\", because then how do you call an actual native app ?","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1i8p984/nativephp_finally_goes_truly_native/"}
|
| 21 |
+
{"instruction":"[Context: r/laravel] Laravel: When you're the entire dev team and still ship faster\n\nSaw this on LinkedIn, too relatable not to share.\n\n","output":"Top Answer:\nThis is true of most things. When one person is working on something, they own everything. They have the working knowledge of the entire codebase. You add more people, and they lose sight of that. They bring a different code style, flow, ideas, patterns, etc., that you may or may not agree with. They may have other repos and projects they work on, they may not be reviewing every PR that comes in, so they lose that insight. It's not necessarily a bad thing, but that's just how it goes 99% of the time.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1k1dh9h/laravel_when_youre_the_entire_dev_team_and_still/"}
|
| 22 |
+
{"instruction":"[Context: r/laravel] Custom Fields v2.0 - Major Update for Filament Apps\n\n# Just shipped: Option Colors & Conditional Visibility 🎉\n\nAfter months of development, I'm excited to share Custom Fields v2.0 - a significant update to our Filament package that lets you add dynamic custom fields without database migrations.\n\n**What's New in v2.0:**\n\n**🌈 Option Colors**\n\n* Add visual color coding to select fields and radio buttons\n* Perfect for status fields, priority levels, and categories\n* Clients love the visual clarity it brings to their data\n\n**👁️ Conditional Visibility**\n\n* Show/hide fields based on other field values\n* Create smart, adaptive forms that respond to user input\n* No more cluttered forms - only show what's relevant\n\n# Why This Matters:\n\nAs Laravel developers, we've all been there - client wants \"just a few custom fields\" and suddenly you're writing migrations, updating models, creating form components, and spending days on what should be simple changes.\n\nCustom Fields eliminates this pain entirely. Your clients can create their own fields through the admin panel, and when requirements change (they always do), you respond in minutes, not sprints.\n\n# Technical Highlights:\n\n* **Zero database changes** \\- Everything stored as JSON\n* **Type safety** \\- Full validation and casting support\n* **Seamless integration** \\- Works with existing Filament resources\n* **Performance optimized** \\- Efficient querying and caching\n\n# Field Types Supported:\n\nText, Number, Textarea, Rich Editor, Select, Multi-select, Radio, Checkbox, Date/DateTime, Color Picker, Tags, Toggle, Currency, Link, Markdown Editor, and more.\n\n# Real Developer Feedback:\n\n*\"Cut our development time by 50% and our clients love being able to create exactly what they need without waiting for us to code it.\"*\n\n\"I've tried building custom field functionality myself three times. 
This package does everything I needed and more, right out of the box.\"\n\n# Coming Soon:\n\nPlanning to open source this package - want to give back to the Laravel community that has given me so much.\n\n# Questions Welcome:\n\nHappy to answer any technical questions about implementation, performance, or use cases. Always looking for feedback from fellow Laravel developers!\n\n**Stack:** Laravel 12+, Filament 3+, PHP 8.2+ \n\n**Live Demo:** [https://relaticle.com/](https://relaticle.com/)\n\n**Documentation**: [https://custom-fields.relaticle.com/introduction](https://custom-fields.relaticle.com/introduction)\n\nWhat do you think? Anyone else working on similar solutions for dynamic fields?","output":"Top Answer:\nhttps://preview.redd.it/senobilyfubf1.png?width=3680&format=png&auto=webp&s=19fec6254754166a1f669929e0754ac4d6c073c3\n\nCustom Fields now works with ALL Laravel models, not just Filament resources! \n \nOne API, infinite possibilities! \n\n🔓 Open Source Details:\n\nWill be AGPL 3.0 + Commercial License:\n\n• Open source projects → FREE ✅\n\n• Non-profits & education → FREE ✅ \n\n• Commercial closed-source → License required\n\nSame model as MySQL, MongoDB. Community first, sustainability second! 💚","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1lvhf1y/custom_fields_v20_major_update_for_filament_apps/"}
{"instruction":"[Context: r/laravel] I hate to admit this, but Laravel Cloud is nowhere near production-ready\n\nI moved my app from DigitalOcean droplet(6$) to Laravel Cloud (~80$), a couple of weeks after it was released, and I hate to admit this but I wish I didn’t do that. I was ready to pay more money, thinking that I won’t have to care about downtimes anymore, but it’s actually the opposite.\n\n- Random outages, sometimes up to 20 minutes\n- Support replying 24 hours later, no matter the urgency of the issue\n- Requests avg. spiking from 200ms to 20 seconds for periods of hours\n\nDon’t get me wrong, Laravel team is awesome, and their products are top-tier, but I wish they’d admit that Cloud is just not prod-ready yet, so developers can make informed choices.","output":"Top Answer:\ntbh no one really needs an expensive cloud architecture unless the website has really high loads / much traffic.. go with [ploi.io](http://ploi.io), cloudflare and an appropriate vps.. we have 76,45k unique users per month that are doing 7,31M requests and we pay 50€ per month with this setup.. Laravel Cloud is nothing more than an overpriced wrapper around AWS EC2","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1khvj50/i_hate_to_admit_this_but_laravel_cloud_is_nowhere/"}
{"instruction":"[Context: r/laravel] Non-Volt Livewire starter kit now available\n\nHey all - dropped a non-Volt flavor of the Livewire starter kit for you. \n \n[https://x.com/taylorotwell/status/1895584390580957337](https://x.com/taylorotwell/status/1895584390580957337)","output":"Top Answer:\nBabe wake up, Taylor’s back on Reddit","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1j0kx3w/nonvolt_livewire_starter_kit_now_available/"}
{"instruction":"[Context: r/laravel] New Laravel website. First impressions.\n\nFirst impression ? Bad. \nAfter re-evaluation? Fu\\*king horrible. \n\nHijacked scroll, you need to scroll 5 times to move out of a section. \nPage down to navigate? Good luck, you will \"miss\" information that's only visible after you \"scroll\" a specific section of the page. \n\n \nMobile ? I am not even going to start here. \n\nDisc: This is my opinion and does not reflect the opinion of any of my peers. ","output":"Top Answer:\n[removed]","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1ix4g5k/new_laravel_website_first_impressions/"}
{"instruction":"[Context: r/laravel] The Laravel Idea Plugin is Now Free for PhpStorm Users | The PhpStorm Blog\n\n","output":"Top Answer:\nOne less subscription","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mdevj1/the_laravel_idea_plugin_is_now_free_for_phpstorm/"}
{"instruction":"[Context: r/laravel] FilaForms — public form builder plugin for FilamentPHP [Black Friday: 30% off]\n\nI got tired of rebuilding form infrastructure on every project.\n\nContact forms, feedback forms, surveys — each time writing migrations, validation, submission handling, notifications...\n\nSo I built FilaForms: a Filament plugin with a visual form builder, submission tracking, and analytics.\n\nOne-time purchase. Self-hosted. No subscriptions.\n\n\\*\\*30% off through Monday for Black Friday.\\*\\*\n\nHere's a quick demo. Happy to answer questions.\n\n[https://filaforms.app](https://filaforms.app)","output":"Top Answer:\nBought the lifetime license when you dropped this and still a big fan. \n\nAny news on the standalone components you teasered in the docs?","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1p69i41/filaforms_public_form_builder_plugin_for/"}
{"instruction":"[Context: r/laravel] Vemto 2 now is Open-Source under MIT license\n\nHi everyone! \n \nI'm delighted to announce that Vemto is now a fully open-source project, published under the MIT license.\n\nA few months ago, I [wrote a blog post explaining](https://vemto.app/blog/the-future-of-vemto) in detail why Vemto would become open-source.\n\nAt the time, I wasn't sure which license to use, but after talking to countless users and customers, I ended up opting for the MIT license.\n\nIt took me a while to prepare this repository, for personal reasons, but I finally managed to increase the number of tests to over 400, covering at least the most critical parts of the application, and I will continue adding more tests overtime.\n\nI also managed to finish writing much of the [internal development documentation](https://github.com/VemtoOrg/vemto2/blob/main/docs/index.md).\n\nI hope you enjoy it, and if you have any questions, please email me at [contact@vemto.app](mailto:contact@vemto.app).\n\nBy [Tiago Rodrigues](https://x.com/Tiago_Ferat)","output":"Top Answer:\nGlad to see this is at least going open source and not disappearing. I've been following your journey with this and while it's disappointing that you weren't able to get the support you needed, I'm hopeful that this takes off as an open source tool. I think it's great and the hard work you put into it really shows.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1noun0s/vemto_2_now_is_opensource_under_mit_license/"}
{"instruction":"[Context: r/laravel] [Open Source] Custom Fields for Filament - Add dynamic fields to any model without migrations\n\nHey r/Laravel! 👋\n\nI've just open-sourced **Custom Fields**, a Filament plugin that lets you add unlimited dynamic fields to any Eloquent model without writing migrations. After months of development and testing, I decided to give it back to the community under AGPL-3.0 + Commercial.\n\n# The Problem We've All Faced\n\nHow many times have you been asked: *\"Can we just add one more field to track employee count?\"*\n\nEach request traditionally means:\n\n* Writing a migration\n* Updating your model\n* Modifying form/table schemas\n* Testing database changes\n* Coordinating deployments\n\nWhat if your users could add their own fields instead?\n\n# The Solution\n\nCustom Fields eliminates the migration cycle entirely. Your users can add unlimited custom fields through the admin panel without any developer intervention.\n\n# Implementation (2 steps):\n\n // 1. Add to your model\n use Relaticle\\CustomFields\\Models\\Contracts\\HasCustomFields;\n use Relaticle\\CustomFields\\Models\\Concerns\\UsesCustomFields;\n \n class Company extends Model implements HasCustomFields\n {\n use UsesCustomFields;\n }\n \n // 2. Add to your Filament resource form\n use Relaticle\\CustomFields\\Filament\\Forms\\Components\\CustomFieldsComponent;\n \n public function form(Form $form): Form\n {\n return $form->schema([\n // Your existing fields...\n TextInput::make('name'),\n TextInput::make('email'),\n \n // Custom fields component\n CustomFieldsComponent::make(),\n ]);\n }\n\nThat's it. 
No migrations, no database schema changes.\n\n# Key Features\n\n* **18+ Field Types**: Text, number, select, multi-select, rich editor, date picker, color picker, tags, toggles, and more\n* **Zero Database Migrations**: All custom field data is stored in a flexible JSON structure\n* **Multi-tenancy Ready**: Complete tenant isolation and context management\n* **Full Filament Integration**: Works seamlessly with forms, tables, and infolists\n* **Validation Support**: Built-in Laravel validation rules per field type\n* **Import/Export**: CSV capabilities for data management\n* **Conditional Visibility**: Show/hide fields based on other field values (coming soon)\n\n# Technical Implementation\n\nThe package uses a polymorphic relationship pattern with JSON field storage, avoiding the need for dynamic schema changes. All field definitions and values are stored efficiently while maintaining Laravel's Eloquent relationships and query capabilities.\n\nField types are built on top of Filament's native form components, ensuring consistency with your existing admin panel design and behavior.\n\n# Requirements\n\n* **PHP 8.1+**\n* **Laravel 10+**\n* **Filament 3+**\n* Coming soon: **Filament v4 support** (next few weeks)\n\n# Installation\n\n composer require relaticle/custom-fields\n\n# Why Open Source?\n\nThe Laravel community has given me so much over the years. This felt like the right way to give back. The package is production-ready and battle-tested - we've been using it internally for months.\n\n**GitHub**: [https://github.com/Relaticle/custom-fields](https://github.com/Relaticle/custom-fields)\n\nPerfect for SaaS applications, CRM systems, or any project requiring user-configurable data models.\n\nWould love to hear your thoughts and feedback! ⭐\n\n*Built as part of* [*Relaticle*](https://github.com/Relaticle/relaticle)*, an open-source CRM platform.*","output":"Top Answer:\nDude, this is perfect timing! 
Just got another \\can we add one more field\\ request this morning 😅 Starring this for sure!","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1m1b3ep/open_source_custom_fields_for_filament_add/"}
{"instruction":"[Context: r/laravel] NativePHP with Inertia and ReactNative\n\nI managed to make the NativePHP iOS early access code work with Inertia in combination with ReactNative. \n\nThis results in (imho) the best of both worlds:\n\n- Truly native UI elements \n- Laravels powerful routing, validation and APIs\n\nJust like a traditional Inertia app, this takes a ReactNative component and passes the props to the component. 🔥","output":"Top Answer:\nDespite all the arguing about the phrasing around “Native”, this project is still really cool, and I’m excited to see where it goes.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1iamqi2/nativephp_with_inertia_and_reactnative/"}
{"instruction":"[Context: r/laravel] How I Built a Modular Laravel CRM: Architecture Insights\n\nI wanted to share some architecture insights from building [Relaticle](https://github.com/Relaticle/relaticle), an open-source CRM platform. I hope these observations are helpful if you're building complex Laravel applications.\n\n# Modular Architecture\n\nOne of the most effective decisions was organizing the codebase into modules:\n\n /app # Core application code\n /app-modules # Feature modules \n /Admin\n /config\n /resources\n /routes\n /src\n /Documentation\n /config\n /resources\n /routes\n /src\n /OnboardSeed # For seeding data\n\nEach module functions as a contained unit with its own:\n\n* Routes\n* Views and assets\n* Controllers and services\n* Configurations\n\nThis approach has significantly improved maintainability as features grow.\n\n# Framework & Package Integration\n\nRelaticle leverages several key packages:\n\n* **Filament** for admin interfaces and resource management\n* **Livewire** for interactive components\n* **AlpineJS**: Used for lightweight JavaScript interactions within Blade templates. The declarative syntax keeps our markup clean and understandable.\n* **Laravel Jetstream** for authentication scaffolding\n* **Spatie Laravel Data**: Transforms unstructured data into strongly-typed DTOs. This has been game-changing for maintaining type safety across application boundaries and ensuring reliable data contracts.\n* **Pest PHP**: The expressive syntax makes tests more readable and reduces boilerplate. The plugin ecosystem (particularly Pest Plugin Livewire) streamlines testing complex components.\n* **Laravel Horizon**: For monitoring and configuring Redis queues. Essential for understanding queue throughput and debugging job failures.\n\n# Code Quality & Type Safety\n\nWe've invested heavily in code quality tools that have dramatically improved our development workflow:\n\n* **RectorPHP**: Automates code refactoring and modernization. 
We use it with multiple rule sets (deadCode, codeQuality, typeDeclarations, privatization, earlyReturn, strictBooleans) to maintain clean, modern PHP code.\n* **PHPStan with Larastan**: Static analysis at level 3 helps catch potential bugs before they reach production.\n* **Pest Type Coverage**: We maintain strict type coverage (>99.6%) across the codebase, which has virtually eliminated type-related bugs.\n* **Laravel Pint**: Ensures consistent code style with zero developer friction.\n\nOur CI pipeline runs these tools on every commit, giving us confidence when adding features or refactoring code.\n\n# Documentation as a Module\n\nThe Documentation module is a good example of the modular approach:\n\n* Standalone module with its own routes and controllers\n* Handles markdown processing\n* Implements search functionality\n* Recently enhanced with proper SEO metadata for each document\n\n# SEO & Metadata Implementation\n\nWe've implemented a consistent approach to metadata across the application:\n\n* Shared layouts (guest.blade.php and app.blade.php) with configurable meta tags\n* Dynamic Open Graph tags that adapt to specific content\n* Page-specific descriptions and titles for better search visibility\n* Flexible fallbacks for default values\n\n# Developer Experience Enhancements\n\nBeyond architecture, we've implemented several DX improvements:\n\n* **Comprehensive Testing**: Using Pest's architecture tests to enforce module boundaries and prevent circular dependencies.\n* **Composable Scripts**: Our composer.json includes specialized scripts for different testing stages (`test:lint`, `test:refactor`, `test:types`, etc.)\n* **Type Coverage Reports**: We generate type coverage reports to identify areas needing \n\n# Challenges Worth Noting\n\n* **Module Boundaries**: Deciding what belongs in core vs. 
modules requires constant refinement\n* **Consistent Patterns**: Maintaining consistency across modules demands discipline\n* **Documentation**: Keeping documentation in sync with development is an ongoing effort\n* **Type System Edge Cases**: While PHP's type system has improved dramatically, there are still edge cases where types must be handled carefully, particularly with framework-specific types.\n\nI've learned that a well-structured, modular approach pays dividends in maintainability and developer experience, especially as the application grows.\n\nIf you're interested in exploring these patterns or contributing, check out [Relaticle on GitHub](https://github.com/Relaticle/relaticle). We'd appreciate a star ⭐ if you find it valuable!\n\nWhat modular approaches have worked well in your Laravel projects? Would love to hear about your experiences.","output":"Top Answer:\nCan't believe I read the whole post. Cool project OP, star it 👍🏻","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1kli44f/how_i_built_a_modular_laravel_crm_architecture/"}
{"instruction":"[Context: r/laravel] 🎉 r/Laravel just hit 100,000 members!\n\nFrom small snippets to deep architecture discussions, this community has grown into one of the best places to share packages, give feedback, and push Laravel further.\n\nSome stats for the past 12 months:\n\n- 1.3K Posts\n\n- 27.6K Comments\n\n- 7.5m Visits\n\nThanks to everyone who asks thoughtful questions, shares knowledge, and helps keep things welcoming. Here’s to the next 100k.","output":"Top Answer:\nCongratulations everyone. Let’s keep building the community","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mqiyjs/rlaravel_just_hit_100000_members/"}
{"instruction":"[Context: r/laravel] 300+ Laravel tips are now categorized\n\nTons of tips (+300), now categorized as you guys requested! I highly recommend checking out the \"helpers\" and \"validation\" categories, I use most of them daily in Laravel projects, and they can save you a few lines of code (or result in refactors 🤞🏽)\n\n[https://github.com/OussamaMater/Laravel-Tips](https://github.com/OussamaMater/Laravel-Tips) \n","output":"Top Answer:\nThank you!! 🙏🏾","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1i58bhl/300_laravel_tips_are_now_categorized/"}
{"instruction":"[Context: r/laravel] Got an unexpected Laravel Cloud bill :/\n\nOnly 5m requests in the last 30 days (and its an api, so just json), so I'm not even sure how this has happened.","output":"Top Answer:\nThis is always gonna happen on these sort of cloud services.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1k414p0/got_an_unexpected_laravel_cloud_bill/"}
{"instruction":"[Context: r/laravel] Filament v4 is launching on Tuesday, August 12th!\n\nThe FilamentPHP team [announced the release date](https://x.com/filamentphp/status/1953574144098127972) for Filament v4 yesterday. Lots of meaningful improvements for performance, DX, and customization. You should check out the [great overview](https://filamentphp.com/content/leandrocfe-filament-v4-beta-feature-overview) posted by Leandro Ferreira but a few highlights of v4 are:\n\n* Performance: Large tables render 2–3x faster.\n* Tailwind v4: Modernized theming with oklch colors.\n* Auth: Built-in MFA (TOTP apps + email codes).\n* Resources: Nested resources and better organization.\n* Forms: New TipTap rich editor, slider, code editor, table repeater, partial rendering, and JS hooks to reduce network requests.\n* Tables: Custom data sources (no DB required), better bulk actions, reorderable columns.\n* Actions: Unified actions across tables/forms/infolists with rate limiting and better testing.\n* Panels: Strict authorization mode, local Inter font, improved error notifications.\n* Theming: CSS improvements have made theming significantly easier. There are great themes available at [https://filamentthemes.com/](https://filamentthemes.com/) and [https://filafly.com/](https://filafly.com/) as well as many other community options being released on Discord or the [Filament Plugin](https://filamentphp.com/plugins) page.\n\nWhat feature are you most excited to try first? Are you planning to upgrade right away or wait for a while post launch?","output":"Top Answer:\n[removed]","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mkvmg7/filament_v4_is_launching_on_tuesday_august_12th/"}
{"instruction":"[Context: r/laravel] Solved my \"one more field\" client nightmare in Filament without migrations - looking for feedback\n\nAfter the fifth time a client asked me to \"just add one more field\" to their Filament admin panel, I got tired of writing migrations, tweaking Resource classes, and deploying for something so simple.\n\nSo I built a solution that's been saving me hours on every project, and I'd love some feedback from fellow Laravel devs who face the same pain.\n\nIt's a Filament plugin that lets you create custom fields through the UI instead of code:\n\n* No more migrations for new fields\n* Fields get automatically rendered in forms and tables\n* Drag-and-drop reordering (clients love this part)\n* All the usual field types (rich text, color pickers, etc.)\n* Normal validation rules still work\n\nI'm especially interested in hearing:\n\n1. What edge cases would you expect to break this approach?\n2. What field types would you need that might be missing?\n3. Any performance concerns with large datasets?\n\nI've been using this in production for several client projects now, and it's been solid so far. \n\nDocumentation is at [custom-fields.relaticle.com](http://custom-fields.relaticle.com) if you're curious about the implementation details.\n\nThanks for any thoughts or feedback!","output":"Top Answer:\nI haven’t used Filament so maybe this is a solved problem, but how would his handle sorting nullable fields? For example, if I have a nullable “budget” column, often my clients want the nulls to be treated as 0 but MySql (and maybe others) will always sort nulls last.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1kalhrh/solved_my_one_more_field_client_nightmare_in/"}
{"instruction":"[Context: r/laravel] Flowforge: A Kanban Board Plugin for Laravel Filament (Open-Source)\n\nHey Artisans! I wanted to share a Filament plugin I've been working on called Flowforge. It's a Kanban board package that let's you transform any existing Eloquent model into a beautiful, drag-and-drop board with minimal configuration.\n\n**Why I built it:** I was working on a project management app and needed a simple Kanban for tracking. Couldn't find anything lightweight that worked directly with my existing models without extra tables or complex setup. So I built this!\n\n**What it does:**\n\n* Works with your existing Eloquent models (no extra tables!)\n* Drag-and-drop cards between columns\n* Saves card order automatically when moved\n* Customizable column colors\n* Optional create/edit modals for cards\n* Fully responsive design\n\nThe coolest thing is how quick you can set it up. If you have a model with a status field, you can literally have a working board in 5 minutes. Here's an example:\n\n class TasksBoardPage extends KanbanBoardPage\n {\n public function getSubject(): Builder\n {\n return Task::query();\n }\n \n public function mount(): void\n {\n $this\n ->titleField('title');\n ->columnField('status')\n ->columns([\n 'todo' => 'To Do',\n 'in_progress' => 'In Progress',\n 'completed' => 'Completed',\n ])\n }\n }\n \n\nThat's it! You even get a generator command that scaffolds everything for you.\n\nIt's been super useful for us - our users can now visually manage workflows instead of staring at boring tables all day lol.\n\nThe package is totally open-source and available on GitHub. I'd love to get some feedback, feature ideas, or contributions if anyone's interested. I'm still actively developing it.\n\nCheck it out: [Flowforge on GitHub](https://github.com/Relaticle/flowforge)\n\nAnyone else building cool Filament plugins? Would love to see what your working on!","output":"Top Answer:\nOkay, this one is impressive! You just earned a star! 
:)","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1k7g8mm/flowforge_a_kanban_board_plugin_for_laravel/"}
{"instruction":"[Context: r/laravel] Appreciation post for Laravel\n\nIn my 9-5 I am a .NET / React developer. I run a small side gig building web apps for smaller clients where my primary tech stack is Laravel with React + Inertia.\n\nMy developer experience coming from ASP.NET to Laravel is immeasurably better. What would take multiple dev teams in a corporate environment months to build in .NET, I can build in a week or just a few days in Laravel.\n\nNeed a message queue? It’s in the box. \n\nNeed real-time communication with your frontend? In the box. \n\nDon’t want to duplicate your validation rules in your frontend and backend? Laravel has it. \n\nNeed an events system, mail service, notifications pattern? Just read the docs.\n\nI love Laravel because they champion what’s new and innovative in the open source community. The documentation is outstanding, the community has tons of resources and is generally focused on making the framework as powerful as possible for us.\n\nI hope adoption at the enterprise & startup levels increases, because this framework is doing so much more than the others. ","output":"Top Answer:\nLaravel is such an amazing framework. We're trying to pick a framework for a large project at work and we're down to Laravel - which I've been championing - and NestJS. To me, Laravel seems the clear choice. Clear as in \"there's Laravel, and then there's the wrong choice.\" It will be \\*so\\* much easier for us to build in, would have far less decision fatigue, and for our scale (hundreds of thousands of users) it would work perfectly. \n\nAnd Nest is fine. It certainly seems workable, it just looks like a lot more work for probably worse results. \n\nAdvocating for PHP is an uphill fight, though. I'll likely lose out.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1prdyb0/appreciation_post_for_laravel/"}
{"instruction":"[Context: r/laravel] NativePHP going truly native.. for real-real!\n\n","output":"Top Answer:\nI have been very skeptical of this project in the past, but I have to say that it is looking better and better! You might actually win me over soon. \n\nI also hated the name as it was not native at all before. But now it looks like you are actually getting there. Great job so far! 🚀","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1od4yf4/nativephp_going_truly_native_for_realreal/"}
{"instruction":"[Context: r/laravel] Laravel Boost has officially released!\n\n","output":"Top Answer:\nLove the fact that Filament is included (and Nova isn’t somehow?).","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mp91s8/laravel_boost_has_officially_released/"}
{"instruction":"[Context: r/laravel] I've Curated a List of 30+ Large Laravel/PHP Projects\n\nHello guys,\n\nI realized that Laravel/PHP have a brand/showcase problem (*had a few videos/tweets about it*), so decided to collect Laravel-based projects (focusing on LARGE ones) with stories of real people talking about them.\n\nSo, here's a public GitHub repository: \n[https://github.com/LaravelDaily/Large-Laravel-PHP-Project-Examples](https://github.com/LaravelDaily/Large-Laravel-PHP-Project-Examples) \n\nhttps://preview.redd.it/pe1vivtrbhzf1.png?width=1918&format=png&auto=webp&s=2e45e9199df586eec688ce1a49bc92270ad321bd\n\nI know about [BuiltWithLaravel.com](http://BuiltWithLaravel.com) by Matt Stauffer, but here I have a bit different angle: I don't want to talk about *brands*, but my goal is real **stories**, including **numbers** whenever possible.\n\nLet me know if that repo can be improved for better readability, or if you know projects that could be added to that list.","output":"Top Answer:\nAwesome to have these kinds of examples!\n\nI'm not sure what dimension of scale you're looking for, since it's quite different for an open source project, but my project ([BookStack](https://www.bookstackapp.com/)) is used across 7k sites according to [what built-with](https://trends.builtwith.com/cms/BookStack) has found. \n\nIn terms of finances I'm not making silly money, but [as I detail here](https://www.bookstackapp.com/blog/decade-of-bookstack/#financials) I'm now receiving a healthy income, which I think is relatively good going for a business model of just donations, sponsors and support packages!","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1opahh7/ive_curated_a_list_of_30_large_laravelphp_projects/"}
{"instruction":"[Context: r/laravel] Filament v4 is now stable!\n\nThe first stable version of Filament v4 was just released. It brings an enormous amount of new features and improvements. To highlight a few:\n\n* Improved table performance\n* Custom table data\n* Nested resources\n* Multi-factor authentication\n* Unified action classes\n* Schema components\n* Dedicated form and table classes\n* New form fields\n* Partial rendering\n* Tailwind CSS v4\n\nToday also marks a new chapter for my Filament Themes platform, introducing a custom theme designer. \n \nThere’s way too much to discuss in a single post, so feel free to dig deeper using the links below:\n\n* Announcement: [https://filamentphp.com/content/alexandersix-filament-v4-is-stable](https://filamentphp.com/content/alexandersix-filament-v4-is-stable)\n* Detailed changes: [https://filamentphp.com/content/leandrocfe-whats-new-in-filament-v4](https://filamentphp.com/content/leandrocfe-whats-new-in-filament-v4)\n* GitHub release: [https://github.com/filamentphp/filament/releases/tag/v4.0.0](https://github.com/filamentphp/filament/releases/tag/v4.0.0)\n* Custom themes: [https://filamentthemes.com/themes/custom?utm\\_source=reddit&utm\\_medium=social&utm\\_campaign=custom+themes+early+access](https://filamentthemes.com/themes/custom?utm_source=reddit&utm_medium=social&utm_campaign=custom+themes+early+access)\n\nIf you want to upgrade right away, check out the upgrade guide with automated upgrade script: https://filamentphp.com/docs/4.x/upgrade-guide.","output":"Top Answer:\nAwesome work, and amazing to see the Theme Designer. Huge fan of your work 👌","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mob1xf/filament_v4_is_now_stable/"}
{"instruction":"[Context: r/laravel] NativePHP for desktop v1 is finally here! 🚀\n\n","output":"Top Answer:\nCongrats. Nice to see the build service coming. \n\nI can see a day where my laravel app has a cloud, desktop and mobile version all which get 90% of their code from a common package.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1juv61v/nativephp_for_desktop_v1_is_finally_here/"}
{"instruction":"[Context: r/laravel] Pest v4 is here — now with browser testing!\n\nBrowser tests that feel like unit tests: Laravel-ready, Playwright-powered, parallel-fast, with smoke & visual regression built in.\n\nDiscover Pest v4 — and our new website: [pestphp.com](https://pestphp.com/?ref=v4)","output":"Top Answer:\nMinor feedback on the landing page - the default selection is BrowserTest.php. Then you have three other options FeatureTest.php, UnitTest.php, ArchTest.php.\n\nIntuitively, you'd think you can click between them, but you cannot. Nothing works except BrowserTest.php.","source":"reddit_laravel","url":"https://www.reddit.com/r/laravel/comments/1mw5tby/pest_v4_is_here_now_with_browser_testing/"}
datasets/training/01_conversational_sft.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34c5350f2c74ad1a9161cdd22481fc70923175abe1ff8aad6a5c5a041e258465
size 131092192
datasets/training/02_reasoning_with_thinking.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a1d7c576b160790341d445f7d5125fa37bbca6727184e0555ded1859c6f0123
size 17563648
datasets/training/03_latest_features.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6c9fa293df88ced2403a05347edc0073486863c6d04280acfbadbbc1f4b5ae6
size 49369041
datasets/training/04_advanced_workflows.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a05b92ba89fa7cf1227e1f74a653bf6274a48b9fb9fb48607658cec078066e2
size 14027843
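The four `datasets/training/*` files above are committed as Git LFS pointer files rather than raw data, so a clone without `git lfs pull` sees only the three-line pointers shown. As a minimal sketch (a hypothetical helper, not part of this repo's scripts), the pointer format can be parsed like so:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 14027843".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1a05b92ba89fa7cf1227e1f74a653bf6274a48b9fb9fb48607658cec078066e2
size 14027843"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 14027843
```

This is handy for sanity-checking that the `size` recorded in the pointer matches the file you eventually fetch.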
datasets/training/README.md
ADDED
@@ -0,0 +1,189 @@
# N8N Training Dataset Collection

A high-quality curated subset of n8n workflow examples, optimized for LLM training.

## Overview

**Total Examples:** 28,337
**Purpose:** Train LLMs for autonomous n8n workflow creation and error troubleshooting

## Dataset Files

### 01_conversational_sft.jsonl
**Examples:** 9,979
**Source:** eclaude/n8n-workflows-sft (HuggingFace)
**Format:** Supervised Fine-Tuning (SFT)

**Specialty:** Conversational workflow generation

**Structure:**
```json
{
  "instruction": "Create a workflow that sends Slack notifications when...",
  "response": "{n8n workflow JSON}"
}
```

**Use For:**
- Training conversational AI assistants
- Natural language → workflow conversion
- Chat-based workflow generation

---
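Records in this shape map directly onto a chat template. A minimal sketch (the `messages` layout is an assumption; adapt it to whatever chat format your base model's tokenizer expects):

```python
# Hedged sketch: map one SFT record onto a generic chat-messages list.
# The "instruction"/"response" keys come from the structure above; the
# messages layout itself is an assumption, not prescribed by the dataset.
def to_chat_messages(record):
    return [
        {"role": "user", "content": record["instruction"]},
        {"role": "assistant", "content": record["response"]},
    ]

example = {
    "instruction": "Create a workflow that sends Slack notifications when...",
    "response": "{n8n workflow JSON}",
}
messages = to_chat_messages(example)
```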
### 02_reasoning_with_thinking.jsonl
**Examples:** 5,361
**Source:** ruh-ai/n8n-workflow-dataset (HuggingFace)
**Format:** JSONL with thinking chains

**Specialty:** Reasoning and debugging (unique: the only dataset here with thinking chains)

**Structure:**
```json
{
  "prompt": "Build a workflow to process CSV files...",
  "thinking": "Step 1: We need a trigger... Step 2: Parse CSV... Step 3: Loop through rows...",
  "json": "{n8n workflow JSON}"
}
```

**Use For:**
- Teaching logical reasoning
- Error troubleshooting
- Workflow debugging
- Explaining design decisions

---
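One way to use the thinking chains is to fold them into the training target. A minimal sketch, assuming a `<think>…</think>` convention (the tag format is our assumption; use whatever your training framework expects):

```python
# Hedged sketch: combine the "thinking" chain and the final workflow JSON
# into a single training target. Wrapping the chain in <think> tags is an
# assumed convention, not something the dataset itself prescribes.
def format_reasoning_target(record):
    return f"<think>\n{record['thinking']}\n</think>\n{record['json']}"

example = {
    "prompt": "Build a workflow to process CSV files...",
    "thinking": "Step 1: We need a trigger...",
    "json": "{n8n workflow JSON}",
}
target = format_reasoning_target(example)
```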
### 03_latest_features.jsonl
**Examples:** 2,737
**Source:** mbakgun/n8nbuilder-n8n-workflows-dataset (HuggingFace)
**Updated:** December 26, 2024

**Specialty:** Most recent n8n features and nodes

**Use For:**
- Current n8n API patterns
- Latest node versions
- Modern integrations
- Avoiding deprecated patterns

---
### 04_advanced_workflows.json
**Examples:** 10,260
**Source:** Original validated collection
**Format:** JSONL (despite the `.json` extension)

**Specialty:** Complex multi-node workflows

**Use For:**
- Advanced integration patterns
- Sophisticated business logic
- Production workflow examples
- Complex data transformations

---
## Training Recommendations

### Quick Start
```python
import json
from pathlib import Path

def load_training_data(data_dir: Path = Path('.')) -> list:
    """Load all training datasets, handling both JSON array and JSONL files."""
    training_files = [
        '01_conversational_sft.jsonl',
        '02_reasoning_with_thinking.jsonl',
        '03_latest_features.jsonl',
        '04_advanced_workflows.json',  # JSONL content despite the .json extension
    ]

    examples = []
    for filename in training_files:
        with open(data_dir / filename, 'r', encoding='utf-8') as f:
            first_char = f.read(1)
            f.seek(0)

            if first_char == '[':
                examples.extend(json.load(f))  # JSON array
            else:
                examples.extend(json.loads(line) for line in f if line.strip())

    return examples

data = load_training_data()
print(f"Loaded {len(data):,} training examples")
```
### Train/Val/Test Split
```python
from sklearn.model_selection import train_test_split

train, temp = train_test_split(data, test_size=0.2, random_state=42)
val, test = train_test_split(temp, test_size=0.5, random_state=42)

print(f"Train: {len(train):,}")  # 22,669
print(f"Val:   {len(val):,}")    # 2,834
print(f"Test:  {len(test):,}")   # 2,834
```
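If you'd rather avoid the scikit-learn dependency, the same 80/10/10 split can be sketched with the standard library (seed and ratios match the example above):

```python
import random

def split_dataset(data, seed=42):
    """Shuffle-and-slice 80/10/10 split with a fixed seed for reproducibility."""
    items = list(data)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * 0.8)
    n_val = int(len(items) * 0.1)
    return (
        items[:n_train],
        items[n_train:n_train + n_val],
        items[n_train + n_val:],
    )

train, val, test = split_dataset(range(100))
```

Because the shuffle is seeded, the split is deterministic across runs, which keeps evaluation sets stable between experiments.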
## Why This Subset?

**Quality over Quantity:**
- ✅ Curated best examples from each category
- ✅ Complementary strengths (conversation, reasoning, latest, advanced)
- ✅ ~3x faster training than the full 81K dataset
- ✅ ~3x lower compute cost
- ✅ No duplicates

**Unique Capabilities:**
- **Conversational:** Natural language understanding (01)
- **Reasoning:** Step-by-step logic and debugging (02)
- **Current:** Latest n8n features (03)
- **Advanced:** Complex patterns (04)

## Alternative: Full Dataset

For maximum coverage, see `../n8n_master.jsonl` (81,649 unique workflows).

Use the full dataset if:
- The quality subset shows coverage gaps
- You are deploying to production
- Comprehensive service knowledge is required

## File Formats

All files use JSONL (JSON Lines) format:
- One JSON object per line
- Easy to stream
- Memory efficient
- Industry standard

**Note:** `04_advanced_workflows.json` is JSONL format despite the `.json` extension.

## Dataset Statistics

| Dataset | Examples | Size | Specialty |
|---------|----------|------|-----------|
| 01_conversational_sft | 9,979 | ~125 MB | Conversations |
| 02_reasoning_with_thinking | 5,361 | ~17 MB | Debugging |
| 03_latest_features | 2,737 | ~47 MB | Current |
| 04_advanced_workflows | 10,260 | ~13 MB | Advanced |
| **Total** | **28,337** | **~202 MB** | **Complete** |

## Related Documentation

- [Main README](../../README.md) - Repository overview
- [Datasets README](../README.md) - Full dataset collection info
- [Dataset Analysis](../../../.gemini/antigravity/brain/afbe61e0-35d6-4500-8f3e-e9431fc1db24/complete_dataset_analysis.md) - Detailed analysis

## License

These datasets are aggregated from various sources. Please check the individual source licenses:
- eclaude datasets: see the HuggingFace repository
- ruh-ai dataset: see the HuggingFace repository
- mbakgun dataset: see the HuggingFace repository
- Original datasets: part of the n8n-docs repository
datasets/validate_datasets.py
ADDED
@@ -0,0 +1,155 @@
#!/usr/bin/env python3
"""
Dataset Validation and Analysis Tool

Validates and analyzes all n8n workflow training datasets,
supporting both JSON array and JSON Lines (JSONL) formats.
"""

import json
from pathlib import Path
from typing import Dict, List, Any


def load_json_array(filepath: Path) -> List[Dict[str, Any]]:
    """Load standard JSON array format."""
    with open(filepath, 'r', encoding='utf-8') as f:
        return json.load(f)


def load_jsonl(filepath: Path) -> List[Dict[str, Any]]:
    """Load JSON Lines (JSONL) format."""
    examples = []
    with open(filepath, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if line:  # Skip empty lines
                try:
                    examples.append(json.loads(line))
                except json.JSONDecodeError as e:
                    print(f"  ⚠️ Line {line_num}: JSON decode error - {e}")
    return examples


def detect_format(filepath: Path) -> str:
    """Detect whether a file is JSON array or JSONL format."""
    with open(filepath, 'r', encoding='utf-8') as f:
        first_char = f.read(1).strip()
        if first_char == '[':
            return 'json_array'
        elif first_char == '{':
            return 'jsonl'
        else:
            return 'unknown'


def validate_dataset(filepath: Path) -> Dict[str, Any]:
    """Validate and analyze a single dataset file."""
    print(f"\n📊 Analyzing: {filepath.name}")
    print(f"   Size: {filepath.stat().st_size:,} bytes")

    # Detect format
    fmt = detect_format(filepath)
    print(f"   Format: {fmt.upper().replace('_', ' ')}")

    result = {
        'filename': filepath.name,
        'size_bytes': filepath.stat().st_size,
        'format': fmt,
        'valid': False,
        'example_count': 0,
        'errors': []
    }

    # Load based on format
    try:
        if fmt == 'json_array':
            examples = load_json_array(filepath)
        elif fmt == 'jsonl':
            examples = load_jsonl(filepath)
        else:
            result['errors'].append(f"Unknown format: {fmt}")
            return result

        result['valid'] = True
        result['example_count'] = len(examples)

        # Validate structure of first example
        if examples:
            first = examples[0]
            required_fields = {'prompt', 'json', 'thinking'}
            missing = required_fields - set(first.keys())
            if missing:
                result['errors'].append(f"Missing fields in examples: {missing}")

        print(f"   ✅ Valid: {len(examples):,} examples")

    except Exception as e:
        result['errors'].append(str(e))
        print(f"   ❌ Error: {e}")

    return result


def main():
    """Main validation and analysis."""
    print("=" * 60)
    print("N8N DATASET VALIDATION & ANALYSIS")
    print("=" * 60)

    # Find all dataset files
    datasets_dir = Path(__file__).parent
    dataset_files = sorted(datasets_dir.glob('dataset_*.json'))

    if not dataset_files:
        print("⚠️ No dataset files found!")
        return

    results = []
    for filepath in dataset_files:
        results.append(validate_dataset(filepath))

    # Summary
    print("\n" + "=" * 60)
    print("SUMMARY")
    print("=" * 60)

    total_examples = sum(r['example_count'] for r in results)
    total_size = sum(r['size_bytes'] for r in results)
    valid_count = sum(1 for r in results if r['valid'])

    print(f"\n📁 Total Datasets: {len(results)}")
    print(f"✅ Valid: {valid_count}")
    print(f"❌ Invalid: {len(results) - valid_count}")
    print(f"📝 Total Examples: {total_examples:,}")
    print(f"💾 Total Size: {total_size / (1024 * 1024):.2f} MB")

    # Detailed breakdown
    print("\n" + "-" * 60)
    print(f"{'Dataset':<20} {'Format':<12} {'Examples':>10} {'Size':>12}")
    print("-" * 60)

    for r in results:
        size_mb = r['size_bytes'] / (1024 * 1024)
        status = "✅" if r['valid'] else "❌"
        fmt = r['format'].replace('_', ' ').title()
        print(f"{status} {r['filename']:<18} {fmt:<12} {r['example_count']:>10,} {size_mb:>10.2f} MB")

    # Errors
    errors = [r for r in results if r['errors']]
    if errors:
        print("\n" + "=" * 60)
        print("ERRORS")
        print("=" * 60)
        for r in errors:
            print(f"\n❌ {r['filename']}:")
            for err in r['errors']:
                print(f"   • {err}")

    print("\n" + "=" * 60)


if __name__ == '__main__':
    main()
datasets/youtube_metadata.jsonl
ADDED
@@ -0,0 +1,18 @@
{"title":"n8n Tutorial for Beginners: How to Build AI Automations for FREE (Step-by-Step)","url":"https://www.youtube.com/watch?v=Fy1UCBcgF2o&pp=ygUabjhuIHR1dG9yaWFsIGZvciBiZWdpbm5lcnM%3D","description":"Save tons of money on n8n by self-hosting (only $4.99/mo) with Hostinger: \nhttps://bit.ly/hostinger-vps-n8n\n^Use code CHARLIECHANG for an exclusive discount!\n\n^If you want to self host n8n (save tons of money versus getting an n8n cloud plan), then I recommend using Hostinger's VPS hosting. It's under $7 a month and super easy to set up. And you get the lowest possible price using my link and code :) Make sure you use that exact link because it will get you the n8n install automatically, and you can follow along with the video. This is what I use to run my own VPS and n8n automations since it saves a significant amount of money.\n\nIn this video, I'm going to give you a complete course on how to use the AI tool n8n, and I guarantee that you'll be able to build your own AI automations by the end of this video. We'll be diving deep into the basics of n8n, building your first workflow, and how you can ultimately use this tool to automate things within business and your personal life. \n\nCheck out n8n here:\nhttps://bit.ly/n8n-trial\n\n^If you want to just get one of n8n's cloud plans, they start at $24/month billed monthly. It's quite expensive, so that's why I recommend going the VPS route and self-hosting. It's a few more steps by it saves you a couple hundred dollars per year and you'll save even more if you use my Hostinger discount above.\n\nJoin my free AI Community (templates, workflows, AI news, making money with AI, etc): https://www.skool.com/ai-os\n\nIf you watch this short 19 minute course, you will leave extremely comfortable with using n8n yourself, even if you're a complete beginner with little coding experience. 
I recommend that you not only watch this video, but also follow along as I am showing you how these workflows can work for your specific situation. In business and entrepreneurship, I've found that I learn a lot better when I actually do the thing instead of trying to watch it online, and so that's why I recommend that you play around with n8n along with this video if you're also a hands-on learner like me.\n\n► Get access to my FREE side hustle courses:\nhttps://www.sidehustlemastery.com\n\nMy favorite finance + business products:\n💳 My favorite credit cards: https://yourbestcreditcards.com/card-...\n🏦 Favorite online savings accounts: https://www.bankrate.com/landing/char...\n📈 Get up to 12 Free Stocks on WeBull: https://bit.ly/webull12stocks\n🖥️ Best AI website builder (Less than $3/month using code CHARLIECHANG): https://hostinger.com/charliechang\n🥇 Hire the top 1% of overseas talent: https://paired.so\n\n► Join my FREE newsletter:\nhttps://buildabetterbusiness.co/subsc...\n\nIf you want to learn more about AI and how you can use it to start your own business and get started with entrepreneurship, be sure to check out my other videos on this channel on those topics:\n\nChatGPT Tutorial: How to Use Chat GPT For Beginners:\n • ChatGPT Tutorial: How to Use Chat GPT For ... \n\nHow To Use ChatGPT To Learn ANY Skill Quickly (Tutorial):\n • How To Use ChatGPT To Learn ANY Skill Quic... 
\n\nI hope you guys found this video helpful, and if you did please SHARE it with a friend or family member who you think could benefit and also LIKE and subscribe for more videos like this in the future!\n\nThank you for watching, and I hope you have a wonderful rest of your day!\n\n-Charlie\n\n#n8n #ai #freecourse \n\nTimeline:\n0:00 - Intro\n0:34 - What Is n8n?\n1:11 - n8n Pricing\n1:33 - Self Hosting n8n With Hostinger's VPS Hosting\n3:22 - n8n Setup and Dashboard\n4:06 - n8n Walkthrough\n5:48 - n8n Templates\n8:08 - Adding Credentials Into Your Account\n9:06 - Workflow Example\n17:26 - The Best Way To Learn n8n\n\nDisclaimer: Some of the links above may be affiliate links, which means that if you click on them I may receive a small commission. The commission is paid by the retailers, at no cost to you, and this helps to support our channel and keep our videos free. Thank you! \n\nIn addition, I am not a financial advisor. Charlie Chang does not provide tax, legal or accounting advice. The ideas presented in this video are for entertainment purposes only. Please do your own due diligence before making any financial decisions.\n\n► My Instagram: / charlie__chang Save tons of money on n8n by self-hosting (only $4.99/mo) with Hostinger: \nhttps://bit.ly/hostinger-vps-n8n\n^Use code CHARLIECHANG for an exclusive discount! 
…...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Intro\n Intro\n 0:00\n \n\n\n \n Intro\n \n 0:00\n\n\n\n\n \n \n \n \n \n What Is n8n?\n What Is n8n?\n 0:34\n \n\n\n \n What Is n8n?\n \n 0:34\n\n\n\n\n \n \n \n \n \n n8n Pricing\n n8n Pricing\n 1:11\n \n\n\n \n n8n Pricing\n \n 1:11\n\n\n\n\n \n \n \n \n \n Self Hosting n8n With Hostinger's VPS Hosting\n Self Hosting n8n With Hostinger's VPS Hosting\n 1:33\n \n\n\n \n Self Hosting n8n With Hostinger's VPS Hosting\n \n 1:33\n\n\n\n\n \n \n \n \n \n n8n Setup and Dashboard\n n8n Setup and Dashboard\n 3:22\n \n\n\n \n n8n Setup and Dashboard\n \n 3:22\n\n\n\n\n \n \n \n \n \n n8n Walkthrough\n n8n Walkthrough\n 4:06\n \n\n\n \n n8n Walkthrough\n \n 4:06\n\n\n\n\n \n \n \n \n \n n8n Templates\n n8n Templates\n 5:48\n \n\n\n \n n8n Templates\n \n 5:48\n\n\n\n\n \n \n \n \n \n Adding Credentials Into Your Account\n Adding Credentials Into Your Account\n 8:08\n \n\n\n \n Adding Credentials Into Your Account\n \n 8:08\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Charlie Chang\n \n 1.4M subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAboutSubscribe!InstagramTikTokLinkedIn\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"n8n tutorial for beginners","source":"youtube"}
{"title":"The ONLY n8n Tutorial You Need for Beginners | How to Build AI Automations for FREE (Step-by-Step)","url":"https://www.youtube.com/watch?v=uK42llzHjmY&pp=ygUabjhuIHR1dG9yaWFsIGZvciBiZWdpbm5lcnM%3D","description":"🚀 Set up your own self-hosted n8n server with Hostinger and save ~80%. This is the exact setup I use: https://joshuamayo.com/n8n\n\nIn this beginner-friendly n8n tutorial, you’ll learn how to build powerful AI automations from scratch, even if you’ve never used automation tools before. We walk step-by-step through setting up n8n, understanding workflows, nodes, triggers, filters, and switches, and building a real automation that saves form submissions, filters leads, and sends automated emails.\n\nYou’ll also learn how to self-host n8n for FREE (or nearly free) using a private server, saving up to 80% compared to n8n’s hosted plans. This video breaks everything down in simple terms so you actually understand how n8n works — not just copy templates blindly.\n\nIf you’re coming from tools like Zapier or IFTTT and want more control, flexibility, and AI-powered workflows, this is the only n8n tutorial you need to get started.\n\nBy the end of this video, you’ll know how to:\n• Set up n8n step-by-step\n• Build your first automation from scratch\n• Use triggers, filters, switches, and credentials\n• Connect Google Sheets, Gmail, and forms\n• Create smarter AI-powered workflows\n• Self-host n8n to save money long-term\n\nPerfect for beginners, creators, entrepreneurs, and anyone looking to automate their business with AI.\n\n---\n\n📌 Watch Next\n👉 Watch this next: • The ONLY Amazon Affiliate Marketing Tutori... \n\n---\n\n🔗 Resources\n🚀 Finally start a business with Self Made Blueprint → https://selfmadeblueprint.com/ \n🎥 Grow on YouTube with Pro YouTuber Course → https://proyoutuber.com/ \n💡 Find Your Next Business Idea at Porkmoo → https://porkmoo.com \n\n---\n\n⏱️ Timestamps:\n00:00 - Introduction to n8n and AI Automations\n00:54 - What is n8n? 
(Simple Definition & Use Cases)\n02:34 - How to Host n8n for 80% Cheaper (Self-Hosting Guide)\n05:07 - Using a Coupon Code for Extra Savings\n05:39 - Setting Up n8n on Your VPS\n06:23 - Initial n8n Account Setup & License Activation\n07:22 - Understanding Key n8n Concepts (Nodes, Triggers, Workflows)\n10:50 - Navigating the n8n Dashboard\n11:50 - How to Use n8n Templates to Save Time\n13:20 - Step-by-Step Tutorial: Building Your First Workflow\n14:43 - Step 1: Creating a Form Submission Trigger\n16:15 - Understanding Data Outputs and Pinning Data\n17:20 - Step 2: Connecting and Mapping Google Sheets\n20:02 - Step 3: Adding Filters to Your Workflow\n21:46 - Step 4: Setting Up a Switch (Branching Paths)\n22:23 - Step 5: Automating Occupation-Specific Emails (Gmail Node)\n24:06 - Final Summary and How to Get Good at n8n \n\n\n---\n\n🛠️ Tools I Use \n📷 All of my equipment → https://www.amazon.com/shop/joshuamayo \n🎵 My music → https://joshuamayo.com/epidemic \n🎬 My title effects → https://joshuamayo.com/motionvfx \n🎨 My graphics + stock footage → https://joshuamayo.com/envato \n🖼️ How I make my thumbnails pop → https://joshuamayo.com/luminar \n📈 My secret tool for ranking videos → https://joshuamayo.com/vidiq \n\n---\n\n👤 Connect With Me \n📸 Instagram → / realjoshuamayo \n🐷 Second Channel (Porkmoo) → / @itsporkmoo \n🌐 Website → https://joshuamayo.com \n\n---\n\n⚠️ Disclaimer \nThis video may contain affiliate links. If you click and buy, I may earn a small commission at no extra cost to you. This helps support the channel and keeps the content free.🚀 Set up your own self-hosted n8n server with Hostinger and save ~80%. 
This is the exact setup I use: https://joshuamayo.com/n8n\n …...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Introduction to n8n and AI Automations\n Introduction to n8n and AI Automations\n 0:00\n \n\n\n \n Introduction to n8n and AI Automations\n \n 0:00\n\n\n\n\n \n \n \n \n \n What is n8n? (Simple Definition & Use Cases)\n What is n8n? (Simple Definition & Use Cases)\n 0:54\n \n\n\n \n What is n8n? (Simple Definition & Use Cases)\n \n 0:54\n\n\n\n\n \n \n \n \n \n How to Host n8n for 80% Cheaper (Self-Hosting Guide)\n How to Host n8n for 80% Cheaper (Self-Hosting Guide)\n 2:34\n \n\n\n \n How to Host n8n for 80% Cheaper (Self-Hosting Guide)\n \n 2:34\n\n\n\n\n \n \n \n \n \n Using a Coupon Code for Extra Savings\n Using a Coupon Code for Extra Savings\n 5:07\n \n\n\n \n Using a Coupon Code for Extra Savings\n \n 5:07\n\n\n\n\n \n \n \n \n \n Setting Up n8n on Your VPS\n Setting Up n8n on Your VPS\n 5:39\n \n\n\n \n Setting Up n8n on Your VPS\n \n 5:39\n\n\n\n\n \n \n \n \n \n Initial n8n Account Setup & License Activation\n Initial n8n Account Setup & License Activation\n 6:23\n \n\n\n \n Initial n8n Account Setup & License Activation\n \n 6:23\n\n\n\n\n \n \n \n \n \n Understanding Key n8n Concepts (Nodes, Triggers, Workflows)\n Understanding Key n8n Concepts (Nodes, Triggers, Workflows)\n 7:22\n \n\n\n \n Understanding Key n8n Concepts (Nodes, Triggers, Workflows)\n \n 7:22\n\n\n\n\n \n \n \n \n \n Navigating the n8n Dashboard\n Navigating the n8n Dashboard\n 10:50\n \n\n\n \n Navigating the n8n Dashboard\n \n 10:50\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Joshua Mayo\n \n 853K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAboutAdd me on Instagram\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"n8n tutorial for beginners","source":"youtube"}
{"title":"n8n Tutorial for Beginners - Build Your First Free AI Agent","url":"https://www.youtube.com/watch?v=dpoMEcXjVH8&pp=ygUabjhuIHR1dG9yaWFsIGZvciBiZWdpbm5lcnM%3D","description":"This n8n tutorial shows you how to build your first AI agent—totally free and without needing to know how to code.\n🌎 [Sponsor] Check out QuickBooks Online ➜ https://quickbooks.kevinstratvert.com/\n\nFirst, you’ll learn what makes an AI agent different from a simple workflow or chatbot, so you can start thinking like a builder, not just a user.\n\nNext, you’ll get hands-on with n8n, an open-source automation platform that combines drag-and-drop power with AI reasoning and tool access.\n\nThen, I’ll walk you step-by-step through installing n8n locally using Docker and show how to create your first automation, all for free.\n\nAfter that, we’ll build a real AI agent that connects to Gemini, reads overdue invoices from QuickBooks, and drafts personalized email reminders to customers.\n\nYou’ll also learn how to write better prompts for your agents using a framework that blends Role, Task, Tools, and Output to get the most reliable behavior.\n\nNext, you’ll see how to test, troubleshoot, and inspect your AI agent’s behavior—down to every message it sends—so you stay in control.\n\nFinally, we’ll go one level deeper by adding a “Human in the Loop” approval step via Discord, giving you final review power before anything goes out.\n\nHost: David DeWinter\nSponsor: Intuit\n \n📚 RESOURCES\nDocker Desktop ➜ https://www.docker.com/products/docke...\nGoogle AI Studio (Gemini API Key) ➜ https://aistudio.google.com\nGemini API Limits ➜ https://ai.google.dev/gemini-api/docs...\nGmail App Password ➜ http://security.google.com ➜ Search for App Password \n\n🧑💻 COMMANDS\ndocker volume create n8n_data\ndocker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n\n\nFor n8n Loop:\n{{ $('Loop Over Items').item.json.propertyName }}\n\n\n⌚ TIMESTAMPS\n00:00 - Intro\n00:32 - 
What is an AI Agent?\n01:48 - n8n Free Setup\n04:02 - Your First Agent\n15:58 - Human in the Loop\n22:51 - Wrap-up\n\n💬 PROMPTS\nHelp me write a prompt for an AI Agent using the key components: Role, Task, Input, Tools, and Outputs. I'll write a brief below, but ask me any questions you need answers to write the final prompt.\n\nTask\nTo draft an send late payment messages to customers who have not yet paid their invoices in QuickBooks Online yet and are past due.\n\nContext\nAgent should be polite when customers are under 7 days overdue, but as the weeks go by, the agent should become more and more firm. The agent should make its own subject and message for each customer.\n\nTools\nYou have access to the following tools:\n'getInvoices(query)': Gets all invoices. Include \"WHERE DueDate (insert less than symbol) ‘[Today]’\" in the query string as the \"query\" parameter to get invoices that are overdue.\n'sendPaymentReminder(customerEmail, subject, message)': Sends an email with a model-defined subject and message to the customer.\n\nOutputs\nReturn a summary of how many emails were sent in JSON. 
For each email, write the customer's name, the amount of the invoice, how many days overdue they were, the subject of the email, and the body of the email.\n----\nTo get ChatGPT to ask you questions to clarify the role of your agent:\nAsk me questions to clarify any ambiguity in how the agent needs to behave.\n----\nTo get ChatGPT to output the prompt so you can copy and paste it:\nOutput this entire prompt as a markdown file, making sure to escape any code sections so it can be copied.\n\n📩 NEWSLETTER\nGet the latest high-quality tutorial and tips and tricks videos emailed to your inbox each week: https://kevinstratvert.com/newsletter/\n \n🔽 CONNECT WITH ME\nOfficial website: http://www.kevinstratvert.com\nLinkedIn: / kevinstratvert \nDiscord: https://bit.ly/KevinStratvertDiscord\nTwitter: / kevstrat \nFacebook: / kevin-stratvert-101912218227818 \nTikTok: / kevinstratvert \nInstagram: / kevinstratvert \n \n🎁 TOOLS AND DISCOUNTS\n✅ 🤖 ElevenLabs Text-to-Speech | https://try.elevenlabs.io/taqepq60mptr\n✅ 💵 Quickbooks Online | https://bit.ly/intuitquickbooksonline\n✅ 👥 Hubspot | https://hubspot.sjv.io/DKo6jb\n✅ 📈 Semrush PRO | https://bit.ly/semrush14dayfreetrial\n✅ 📈 Semrush GURU | https://bit.ly/semrushguru14daytrial\n✅ 📈 Semrush ContentShake AI | https://bit.ly/contentshakeaisemrush\n✅ 🎥 Descript | https://get.descript.com/sf22jb63w2tx\n✅ 🏓 Smartsheet | https://bit.ly/trysmartsheet\n✅ 🚄 Miro | https://miro.kevinstratvert.com\n \n🎒 MY COURSES\nGo from Excel novice to data analysis ninja in just 2 hours: https://kevinstratvert.thinkific.com/\n \n🙏 REQUEST VIDEOS\nhttps://forms.gle/BDrTNUoxheEoMLGt5\n \n🔔 SUBSCRIBE ON YOUTUBE\nhttps://www.youtube.com/user/kevlers?...\n \n🙌 SUPPORT THE CHANNEL\nHit the THANKS button in any video!\nAmazon affiliate link: https://amzn.to/3kCP2yz\n \n⚖ DISCLOSURE\nSome links are affiliate links. Purchasing through these links gives me a small commission to support videos on this channel. 
The price to you is the same.\n \n#stratvert #n8nThis n8n tutorial shows you how to build your first AI agent—totally free and without needing to know how to code.\n🌎 [Sponsor] Check out QuickBooks Online ➜ https://quickbooks.kevinstratvert.com/\n …...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Intro\n Intro\n 0:00\n \n\n\n \n Intro\n \n 0:00\n\n\n\n\n \n \n \n \n \n What is an AI Agent?\n What is an AI Agent?\n 0:32\n \n\n\n \n What is an AI Agent?\n \n 0:32\n\n\n\n\n \n \n \n \n \n n8n Free Setup\n n8n Free Setup\n 1:48\n \n\n\n \n n8n Free Setup\n \n 1:48\n\n\n\n\n \n \n \n \n \n Your First Agent\n Your First Agent\n 4:02\n \n\n\n \n Your First Agent\n \n 4:02\n\n\n\n\n \n \n \n \n \n Human in the Loop\n Human in the Loop\n 15:58\n \n\n\n \n Human in the Loop\n \n 15:58\n\n\n\n\n \n \n \n \n \n Wrap-up\n Wrap-up\n 22:51\n \n\n\n \n Wrap-up\n \n 22:51\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Kevin Stratvert\n \n 4.06M subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAboutInstagramTikTokFacebookLinkedInTwitter\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"n8n tutorial for beginners","source":"youtube"}
{"title":"Laravel 11 Tutorial for Beginners - Laravel Crash Course (2024)","url":"https://www.youtube.com/watch?v=eUNWzJUvkCA&pp=ygUTbGFyYXZlbCAxMSB0dXRvcmlhbA%3D%3D","description":"Check my Laravel for Beginners Course: \nhttps://thecodeholic.com/p/laravel-11...\n\nThis video is the perfect starting point if you want to get started with Laravel 11. In this crash course we will learn the most fundamental topics in Laravel and create basic application with registration and login.\n\n🚀 Get 3 months of Hosting FOR FREE + FREE domain. Check https://hostinger.com/zuratc. Use coupon code \"ZURATC\" for an EXTRA 10% OFF 🚀\n\n🎬 • How to Deploy Laravel on Shared Hosting in... \n🎬 • Deploy Laravel on VPS using ChatGPT #chatgpt \n🎬 • Laravel Hosting with Github Actions - Part... \n⭐ Project Repository: https://bit.ly/3T7tDjD\n🔔 Subscribe: https://bit.ly/2xTQOI0\n\nIf you really love my content and want to support the channel:\n🛒Check my Website: https://thecodeholic.com\n🌟 Become a Patron: / thecodeholic \n🍺 Buy me Beer: https://www.buymeacoffee.com/thecodeh...\n\nTime Codes\n-------------------------------\n00:00:00 - Introduction\n00:01:48 - Setup Working Environment\n00:03:53 - Project Setup\n00:07:27 - Directory Structure\n00:12:32 - Introduction to Artisan\n00:13:33 - Laravel 11 Configuration\n00:15:56 - Create Basic Route and Controller\n00:19:09 - Generate Models and Migrations\n00:21:57 - Generate Factory and Create Seed Data\n00:24:25 - Generate Resource Controller\n00:27:21 - Generate Resource Routes\n00:31:35 - Create Blade Files for CRUD\n00:34:08 - Generate Layout\n00:37:42 - Render Notes\n00:46:11 - Include CSS and JS Files\n00:50:30 - Define Views\n00:54:15 - Note Create\n00:58:52 - Note Update\n01:01:03 - Note Delete\n01:03:07 - Implement Pagination\n01:04:36 - Add Registration and Login\n01:14:46 - Filter Notes by Authenticated User\n01:17:56 - Adjust Navigation Links\n01:20:50 - Customize Forbidden View\n01:21:46 - Conclusion\n\n🖱️Follow me on social 
media:🖱️\nhttps://x.com/thecodeholic\n / thecodeholic \n / thecodeholic ","query":"laravel 11 tutorial","source":"youtube"}
{"title":"How to Create a Laravel API: Explained in 14 Minutes","url":"https://www.youtube.com/watch?v=WVNiiov53CE&pp=ygUTbGFyYXZlbCAxMSB0dXRvcmlhbA%3D%3D","description":"A summarized version of my re-freshed course on Laravel APIs.\n\nFull course \"How to Build Laravel API\": https://laraveldaily.com/course/api-l...\nArticle \"6 Bad Practices When Building Laravel APIs\" https://laraveldaily.com/post/bad-pra...\n\nSupport the channel by checking out my products:\nMy Laravel courses: https://laraveldaily.com/courses\nFilament examples: https://filamentexamples.com\nLivewire Kit Components: https://livewirekit.com\n\nOther places to follow:\nMy weekly Laravel newsletter: https://us11.campaign-archive.com/hom...\nMy personal Twitter: / povilaskorop A summarized version of my re-freshed course on Laravel APIs.\n\n …...more\n...more\n\n\n\n \n\n\nHow this was madeAuto-dubbedAudio tracks for some languages were automatically generated. Learn more\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Laravel Daily\n \n 159K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"laravel 11 tutorial","source":"youtube"}
{"title":"PHP Fundamentals [FULL COURSE]","url":"https://www.youtube.com/watch?v=EX3qQqdm16I&pp=ygUTbGFyYXZlbCAxMSB0dXRvcmlhbA%3D%3D","description":"Learn the essentials of modern PHP in this beginner-friendly course. Whether you're new to PHP or coming from another language, this course provides a solid foundation for PHP development and prepares you for working with Laravel. In 10 concise lessons, we'll cover the core concepts you need to get started with PHP.\n\nLessons included:\n\n1️⃣ Why PHP - Discover PHP's role in modern web development\n2️⃣ Setup - Get your development environment ready\n3️⃣ Variables & Types - Learn PHP's basic building blocks\n4️⃣ Arrays - Master PHP's powerful data structures\n5️⃣ Functions - Write reusable code with modern syntax\n6️⃣ Loops - Handle data iteration effectively\n7️⃣ Classes - Understand object-oriented programming\n8️⃣ Modern PHP - Explore PHP's latest features\n9️⃣ Composer - Manage dependencies with PHP's package manager\n🔟 Your First PHP Application - Build a complete CLI app\n\n☀️ Demo app repository: https://github.com/laravel/php-fundam...\n\n\nTimeline:\n\n00:00 Why PHP?\n01:41 Setup\n03:50 Variables & Types\n08:53 Arrays\n15:48 Functions\n21:10 Loops\n31:15 Classes\n43:35 Modern PHP\n56:39 Composer\n01:02:30 Your First PHP ApplicationLearn the essentials of modern PHP in this beginner-friendly course. 
Whether you're new to PHP or comi …...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Why PHP?\n Why PHP?\n 0:00\n \n\n\n \n Why PHP?\n \n 0:00\n\n\n\n\n \n \n \n \n \n Setup\n Setup\n 1:41\n \n\n\n \n Setup\n \n 1:41\n\n\n\n\n \n \n \n \n \n Variables & Types\n Variables & Types\n 3:50\n \n\n\n \n Variables & Types\n \n 3:50\n\n\n\n\n \n \n \n \n \n Arrays\n Arrays\n 8:53\n \n\n\n \n Arrays\n \n 8:53\n\n\n\n\n \n \n \n \n \n Functions\n Functions\n 15:48\n \n\n\n \n Functions\n \n 15:48\n\n\n\n\n \n \n \n \n \n Loops\n Loops\n 21:10\n \n\n\n \n Loops\n \n 21:10\n\n\n\n\n \n \n \n \n \n Classes\n Classes\n 31:15\n \n\n\n \n Classes\n \n 31:15\n\n\n\n\n \n \n \n \n \n Modern PHP\n Modern PHP\n 43:35\n \n\n\n \n Modern PHP\n \n 43:35\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Laravel\n \n 73.7K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"laravel 11 tutorial","source":"youtube"}
{"title":"Docker Crash Course for Absolute Beginners [NEW]","url":"https://www.youtube.com/watch?v=pg19Z8LL06w&pp=ygUTZG9ja2VyIGNyYXNoIGNvdXJzZQ%3D%3D","description":"► Grab your DevOps Roadmap PDF here: https://bit.ly/3GItMY1\n\nDocker Tutorial for Beginners that teaches you everything you need to get started\n\n💙 Full Docker course ► • Docker Tutorial for Beginners [FULL COURSE... \n💚 Docker in complete DevOps process ► https://bit.ly/42ZMSiR\n\n💛 Connect on Instagram ► https://bit.ly/2F3LXYJ\n💛 Connect on LinkedIn ► https://bit.ly/3hWOLVT\n\n#docker #dockertutorial #techworldwithnana \n\n► This video is sponsored by Nethopper 🙌🏼\n► Learn more about Nethopper KAOPS here: https://www.nethopper.io/\n\nLearn the basic building blocks of Docker in an easy and understandable way.\nBy the end of this Docker tutorial, you will have a deep understanding of the concepts and a great overall big picture of how Docker is used in the whole software development process. \nThe course is a mix of animated theoretic explanation and hands-on demos to follow along, so you get your first hands-on experience with Docker.\n\n🔗 Links\n► Git Repo for this tutorial: https://gitlab.com/nanuchi/docker-in-...\n► Download and install Docker: https://docs.docker.com/get-docker/\n\n▬▬▬▬▬▬ T I M E S T A M P S ⏰ ▬▬▬▬▬▬\n0:00 - Intro and Course Overview\n02:54 - What is Docker?\n03:51 - What problems Docker solves in development and deployment process\n11:38 - Virtual Machine vs Docker\n17:19 - Install Docker\n21:36 - Docker Images vs Containers\n26:32 - Docker Registries\n29:38 - Docker Image Versions\n32:02 - Main Docker Commands - Pull and Run Docker containers\n39:06 - Port Binding\n42:50 - Start and Stop containers\n46:54 - Private Docker Registries\n48:11 - Registry vs Repository\n49:09 - Dockerfile - Dockerize Node.js app\n58:30 - Build Image\n1:02:39 - Docker UI Client\n1:03:39 - Overview: Docker in complete software development lifecycle\n1:06:38 - Where to go from here\n\n\n▬▬▬▬▬▬ Want 
to learn more? 🚀 ▬▬▬▬▬▬ \nFull Python course ► • Python Tutorial for Beginners - Learn Pyth... \nFull Docker course ► • Docker Tutorial for Beginners [FULL COURSE... \nFull K8s course ► • Kubernetes Tutorial for Beginners [FULL CO... \nDevOps Tools explained ► https://bit.ly/2W9UEq6\n\n\n▬▬▬▬▬▬ Connect with me 👋 ▬▬▬▬▬▬ \nINSTAGRAM ► https://bit.ly/2F3LXYJ\nTWITTER ► https://bit.ly/3i54PUB\nLINKEDIN ► https://bit.ly/3hWOLVT","query":"docker crash course","source":"youtube"}
{"title":"Docker Crash Course - For Absolute Beginners","url":"https://www.youtube.com/watch?v=XQNv0SRB0OM&pp=ygUTZG9ja2VyIGNyYXNoIGNvdXJzZQ%3D%3D","description":"This video today is a full Docker crash course for beginners. We cover images, containers, volumes and even Docker Compose.\n\nCode: https://github.com/NeuralNine/youtube...\n\n◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾\n📚 Programming Books & Merch 📚\n🐍 The Python Bible Book: https://www.neuralnine.com/books/\n💻 The Algorithm Bible Book: https://www.neuralnine.com/books/\n👕 Programming Merch: https://www.neuralnine.com/shop\n\n💼 Services 💼\n💻 Freelancing & Tutoring: https://www.neuralnine.com/services\n\n🌐 Social Media & Contact 🌐 \n📱 Website: https://www.neuralnine.com/\n📷 Instagram: / neuralnine \n🐦 Twitter: / neuralnine \n🤵 LinkedIn: / neuralnine \n📁 GitHub: https://github.com/NeuralNine \n🎙 Discord: / discord \n\nTimestamps:\n(0:00) Intro\n(1:23) What is Docker?\n(4:47) Containers VS Virtual Machines\n(5:42) Installation\n(8:04) Docker Hub\n(10:48) Images & Containers\n(21:18) Volumes\n(25:53) Dockerfiles & Flask Example\n(36:15) Docker Compose\n(47:28) Pushing to Docker Hub\n(48:21) OutroThis video today is a full Docker crash course for beginners. 
We cover images, containers, volumes and even Docker Compose.\n …...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Intro\n Intro\n 0:00\n \n\n\n \n Intro\n \n 0:00\n\n\n\n\n \n \n \n \n \n What is Docker?\n What is Docker?\n 1:23\n \n\n\n \n What is Docker?\n \n 1:23\n\n\n\n\n \n \n \n \n \n Containers VS Virtual Machines\n Containers VS Virtual Machines\n 4:47\n \n\n\n \n Containers VS Virtual Machines\n \n 4:47\n\n\n\n\n \n \n \n \n \n Installation\n Installation\n 5:42\n \n\n\n \n Installation\n \n 5:42\n\n\n\n\n \n \n \n \n \n Docker Hub\n Docker Hub\n 8:04\n \n\n\n \n Docker Hub\n \n 8:04\n\n\n\n\n \n \n \n \n \n Images & Containers\n Images & Containers\n 10:48\n \n\n\n \n Images & Containers\n \n 10:48\n\n\n\n\n \n \n \n \n \n Volumes\n Volumes\n 21:18\n \n\n\n \n Volumes\n \n 21:18\n\n\n\n\n \n \n \n \n \n Dockerfiles & Flask Example\n Dockerfiles & Flask Example\n 25:53\n \n\n\n \n Dockerfiles & Flask Example\n \n 25:53\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n \n \n \n \n Music\n 1 songs\n \n \n \n \n \n \n \n\n\n \n \n\n\n \n \n \n \n \n \n \n Book The Rental Wit ItRAGEBook The Rental Wit It\n \n \n \n \n \n \n \n\nMusic\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n NeuralNine\n \n 450K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAboutTwitterInstagramPatreon\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"docker crash course","source":"youtube"}
{"title":"Docker Crash Course for Developers | Hands on Examples","url":"https://www.youtube.com/watch?v=RHjXPN_h1YA&pp=ygUTZG9ja2VyIGNyYXNoIGNvdXJzZQ%3D%3D","description":"In this video, CJ shows hands on examples of when and where you can start using docker in your apps. He shows usage of the docker CLI, Docker Desktop, docker compose and shows how to create a custom image with a Dockerfile. To demonstrate these tools, he shows how to start a redis and postgres database using the official docker images. He also shows how to spin up these containers with docker compose, and finally shows how to containerize an existing application.\n\nFollow along with the first example here: https://github.com/w3cj/pokemon-cacher\nFollow along with the second example here: https://github.com/w3cj/backpack-debu...\n\n00:00 Intro\n00:13 Docker Desktop Alternatives\n00:50 How do we run a redis database locally?\n01:39 Simple Example API Setup\n02:30 Run an image with the docker CLI\n06:21 docker CLI commands\n08:50 A simple docker compose example\n11:45 More complex example API setup\n15:41 docker compose example with postgres\n17:48 Persisting data with docker volumes\n21:51 The case for custom images and containerization\n23:32 Creating a custom image with a Dockerfile\n30:02 Ignoring files with .dockerignore\n31:03 Examine container files and execute commands with Docker Desktop\n32:31 Run migrations and seeds from within the container\n33:58 Make code changes without re-building image\n34:55 Create a volume for node_modules to prevent internalBinding errors\n36:35 Better development workflow with devcontainers\n38:03 Optimize docker layer order for faster builds\n40:09 Create a multi-stage Dockerfile for dev, build and prod\n46:23 Share docker-compose.yml configurations\n48:41 Thanks!\n\nOverview of Docker Desktop: https://docs.docker.com/desktop/\n\nDocker Desktop license agreement: https://docs.docker.com/subscription/...\n\ndocker CLI reference: 
https://docs.docker.com/reference/cli...\n\ndocker container CLI reference: https://docs.docker.com/reference/cli...\n\ndocker exec reference: https://docs.docker.com/reference/cli...\n\ndocker compose CLI reference: https://docs.docker.com/compose/refer...\n\ndocker-compose.yml file reference: https://docs.docker.com/compose/compo...\n\nextending your docker compose file: https://docs.docker.com/compose/multi...\n\n.dockerignore file reference: https://docs.docker.com/build/buildin...\n\nDockerfile reference: https://docs.docker.com/reference/doc...\n\nMulti-stage builds reference: https://docs.docker.com/build/buildin...\n\nDocker layers reference: https://docs.docker.com/build/guide/l...\n\nDevcontainers: https://code.visualstudio.com/docs/de...\n\nHow to use the node.js image: https://github.com/nodejs/docker-node...\n\nDocker hub images used in this video:\n\nRedis: https://hub.docker.com/_/redis\n\nPostgres: https://hub.docker.com/_/postgres/\n\nnode: https://hub.docker.com/_/node/\n\nDocker desktop alternatives:\n\nPodman Desktop: https://podman-desktop.io/\n\nRancher Desktop: https://rancherdesktop.io/\n\nOrbstack: https://orbstack.dev/\n\nColima: https://github.com/abiosoft/colima\n\nRedis links:\n\nInstall redis: https://redis.io/docs/install/install...\n\nRedis cloud: https://redis.com/cloud/pricing/\n\n------------------------------------------------------------------------------\n\nHit us up on Socials!\n\nhttps://www.syntax.fm/links\n\nBrought to you by Sentry - Use code \"tastytreats\" to get 2 months free - https://sentry.io/syntax\n\n#docker #webdevelopment #guide","query":"docker crash course","source":"youtube"}
{"title":"Dokploy is my absolute favorite way to deploy to a VPS in 2025","url":"https://www.youtube.com/watch?v=ELkPcuO5ebo&pp=ygUYZG9rcGxveSBkZXBsb3ltZW50IGd1aWRl","description":"To get your own VPS instance for use with Dokploy - visit https://hostinger.com/dreamsofcode and make sure to use my coupon code DREAMSOFCODE for an additional 10% discount.\n\nI didn't think I'd like a PaaS as much as this...\n\nLinks:\nDokploy: https://dokploy.com\nHostinger: https://hostinger.com/dreamsofcode\nGuestbook V2: https://guestbook.zenvps.xyz\nGuestbook V2 Source Code: https://github.com/dreamsofcode-io/gu...\nTailscale Video: • Securing a VPS using the \"Belt and Braces\"... \n\nWant to learn how to build CLI apps in Go? Get my course on earlybird discount until August: https://dreamsofcode.io/courses/cli-a... 👈\n\nMy Gear:\nCamera: https://amzn.to/3E3ORuX\nMicrophone: https://amzn.to/40wHBPP\nAudio Interface: https://amzn.to/4jwbd8o\nHeadphones: https://amzn.to/4gasmla\nKeyboard: ZSA Voyager\n\nJoin this channel to get access to perks:\n / @dreamsofcode \n\nJoin Discord: / discord \nJoin Twitter: / dreamsofcode_io \n\n00:00:00 Intro\n00:04:17 VPS with Hostinger\n00:05:39 VPS setup\n00:07:14 Installing Dokploy\n00:09:17 HTTPS\n00:11:34 Creating a Project\n00:12:12 Database\n00:12:54 Database Setup\n00:14:30 Setting up our application\n00:15:13 Access Repo\n00:16:53 Application configuration\n00:19:36 Application Deployment\n00:21:03 Marker 10\n00:21:49 Review AppsTo get your own VPS instance for use with Dokploy - visit https://hostinger.com/dreamsofcode and make sure to use my coupon code DREAMSOFCODE for an additional 10% discount.\n …...more\n...more\n\n\n\n \n\n\nHow this was madeAuto-dubbedAudio tracks for some languages were automatically generated. 
Learn more\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Intro\n Intro\n 0:00\n \n\n\n \n Intro\n \n 0:00\n\n\n\n\n \n \n \n \n \n VPS with Hostinger\n VPS with Hostinger\n 4:17\n \n\n\n \n VPS with Hostinger\n \n 4:17\n\n\n\n\n \n \n \n \n \n VPS setup\n VPS setup\n 5:39\n \n\n\n \n VPS setup\n \n 5:39\n\n\n\n\n \n \n \n \n \n Installing Dokploy\n Installing Dokploy\n 7:14\n \n\n\n \n Installing Dokploy\n \n 7:14\n\n\n\n\n \n \n \n \n \n HTTPS\n HTTPS\n 9:17\n \n\n\n \n HTTPS\n \n 9:17\n\n\n\n\n \n \n \n \n \n Creating a Project\n Creating a Project\n 11:34\n \n\n\n \n Creating a Project\n \n 11:34\n\n\n\n\n \n \n \n \n \n Database\n Database\n 12:12\n \n\n\n \n Database\n \n 12:12\n\n\n\n\n \n \n \n \n \n Database Setup\n Database Setup\n 12:54\n \n\n\n \n Database Setup\n \n 12:54\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Dreams of Code\n \n 195K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAboutDiscord ServerTwitter\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"dokploy deployment guide","source":"youtube"}
{"title":"The Ultimate Guide to Dokploy: Installation & Deployment Mastered","url":"https://www.youtube.com/watch?v=wkmavq02Tk0&pp=ygUYZG9rcGxveSBkZXBsb3ltZW50IGd1aWRl","description":"Welcome to my step-by-step tutorial on Dokploy! In this video, I'll guide you through everything you need to know to get started with Dokploy, a powerful deployment tool that simplifies your application deployment process.\n\nLinks & Resources:\nOfficial Dokploy Page\nhttps://dokploy.com/\nHetzner Cloud Server (get €20 in cloud credits)\nhttps://hetzner.cloud/?ref=oKOoWuPZq6o7Welcome to my step-by-step tutorial on Dokploy! In this video, I'll guide you through everything you need to know to get started with Dokploy, a powerful deployment tool that simplifies your application deployment process.\n …...more\n...more\n\n\n\n \n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Daniel\n \n 534 subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"dokploy deployment guide","source":"youtube"}
{"title":"Host Your Own Apps Like a Pro! Dokploy Setup + VPS Security Guide","url":"https://www.youtube.com/watch?v=EaOvNN-RJgI&pp=ygUYZG9rcGxveSBkZXBsb3ltZW50IGd1aWRl","description":"In this first episode of the Host Your Own Apps with Dokploy series, we’re setting up a secure VPS from scratch and deploying Dokploy — a self-hosted PaaS alternative that makes app hosting simple and powerful.\n\nYou’ll learn how to:\n✅ Create and secure a VPS server (SSH, sudo user, disable root login)\n✅ Protect your server with CrowdSec\n✅ Install and configure Dokploy\n✅ Enable HTTPS for your hosted apps\n\nPerfect for developers who want full control, better security, and no monthly fees for managed hosting.\n\n📘 Commands & Resources: https://www.bitdoze.com/dokploy-install/\n💬 Episode 2 → Coming Soon: Deploy your first app on Dokploy!\n\nGet 20 Euros on Hetzner: \nhttps://go.bitdoze.com/hetznerIn this first episode of the Host Your Own Apps with Dokploy series, we’re setting up a secure VPS from scratch and deploying Dokploy — a self-hosted PaaS alternative that makes app hosting simple and powerful.\n …...more\n...more\n\n\n\n \n\n\nHow this was madeAuto-dubbedAudio tracks for some languages were automatically generated. Learn more\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n WEBdoze\n \n 5.5K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n \n \n 17:10\n 17:10\n \n \n \n\n\n\n \n Hetzner's NEW €3.49 VPS: Too Cheap to Be True? (Performance Test)\n by WEBdoze\n \n \n\n\n\n \n \n 23:55\n 23:55\n \n \n \n\n\n\n \n Hetzner Review: Where Reliability Meets Ridiculous Affordability!\n by WEBdoze\n \n \n\n\n\n \n \n 25:46\n 25:46\n \n \n \n\n\n\n \n CrowdSec: Your VPS's New Best Friend Against Cyber Threats\n by WEBdoze\n \n \n\n\n\n\n\n \n\n\n Show less","query":"dokploy deployment guide","source":"youtube"}
{"title":"Odoo Beginners Tutorial","url":"https://www.youtube.com/watch?v=QuC6rc2q2mg&pp=ygUQb2RvbyAxNyB0dXRvcmlhbA%3D%3D","description":"Work With Me - https://odooityourself.com/meet-with-me\nJoin One of My Classes - https://odooityourself.com/slides/odo...\nGet Weekly Tips and Tricks - https://odooityourself.com/odiy-dispatch\n\nWanting to get started with Odoo ERP and not sure where to begin? Follow me through this beginners tutorial to get your own database started up so you can check it out for yourself!\n\n00:00 - Setting Up an Odoo Demo\n02:21 - The Structure of Odoo\n03:13 - Reporting and Dashboarding\n04:13 - Adding Demo Data\n04:47 - Views\n06:53 - Filters and Group Bys\n08:05 - Adding Users\n09:11 - Odoo Editions\n09:48 - Odoo Hosting Options\n10:39 - Other ConsiderationsWork With Me - https://odooityourself.com/meet-with-me\nJoin One of My Classes - https://odooityourself.com/slides/odo...\nGet Weekly Tips and Tricks - https://odooityourself.com/odiy-dispatch …...more\n...more\n\n\n\n \n\n\nHow this was madeAuto-dubbedAudio tracks for some languages were automatically generated. 
Learn more\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Setting Up an Odoo Demo\n Setting Up an Odoo Demo\n 0:00\n \n\n\n \n Setting Up an Odoo Demo\n \n 0:00\n\n\n\n\n \n \n \n \n \n The Structure of Odoo\n The Structure of Odoo\n 2:21\n \n\n\n \n The Structure of Odoo\n \n 2:21\n\n\n\n\n \n \n \n \n \n Reporting and Dashboarding\n Reporting and Dashboarding\n 3:13\n \n\n\n \n Reporting and Dashboarding\n \n 3:13\n\n\n\n\n \n \n \n \n \n Adding Demo Data\n Adding Demo Data\n 4:13\n \n\n\n \n Adding Demo Data\n \n 4:13\n\n\n\n\n \n \n \n \n \n Views\n Views\n 4:47\n \n\n\n \n Views\n \n 4:47\n\n\n\n\n \n \n \n \n \n Filters and Group Bys\n Filters and Group Bys\n 6:53\n \n\n\n \n Filters and Group Bys\n \n 6:53\n\n\n\n\n \n \n \n \n \n Adding Users\n Adding Users\n 8:05\n \n\n\n \n Adding Users\n \n 8:05\n\n\n\n\n \n \n \n \n \n Odoo Editions\n Odoo Editions\n 9:11\n \n\n\n \n Odoo Editions\n \n 9:11\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n ODOO IT YOURSELF\n \n 13.6K subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n \n \n 18:28\n 18:28\n \n \n \n\n\n\n \n Understand Relational Databases to Understand ODOO\n by ODOO IT YOURSELF\n \n \n\n\n\n \n \n 17:00\n 17:00\n \n \n \n\n\n\n \n Create Your Own Dashboard in ODOO 17 Today!!\n by ODOO IT YOURSELF\n \n \n\n\n\n \n \n 7:57\n 7:57\n \n \n \n\n\n\n \n Mastering ODOO Form Views: A Step-by-Step Guide (Part 2)\n by ODOO IT YOURSELF\n \n \n\n\n\n \n \n \n 14\n \n14\n\n \n\n\n Evaluating ODOO as an ERP and Implementation Tutorial #odoo #erp #projectmanagement\n by ODOO IT YOURSELF\n \n\n\n\n\n\n \n\n\n Show less","query":"odoo 17 tutorial","source":"youtube"}
{"title":"Odoo 17 Development Environment Setup on Windows","url":"https://www.youtube.com/watch?v=jJXZcqiJG4Y&pp=ygUQb2RvbyAxNyB0dXRvcmlhbNIHCQlNCgGHKiGM7w%3D%3D","description":"00:00 Intro\n00:32 Installing Git and cloning Odoo 17 repository\n02:05 Installing Python\n02:48 PostgreSQL installation, environment variable and user setup\n05:00 C++ built tools installation\n05:55 Installing an IDE(Vscode)\n06:20 Virtual Environment Setup (Venv)\n08:15 Venv not activating fix (Execution policy)\n09:10 Downloading python packages\n10:35 Running Odoo\n13:30 wkhtmltopdf installation and setup\n\nTutorial on how to setup a development environment for Odoo 17 community version.\nThe tutorial is following Odoo's official documentation on Source install available at:\nhttps://www.odoo.com/documentation/17...\n\nOdoo 17 source installation\nPython installation\nPostgreSQL Installation and user creation \nVirtual environment setup using venv\nPython packages installation\nwkhtmltopdf installation to generate pdf files\nRunning the odoo instance\n\nSupport the channel:\nhttps://buymeacoffee.com/opensourcehu...\n\nMy website:\nhttps://opensourcehustle.com/00:00 Intro\n00:32 Installing Git and cloning Odoo 17 repository\n02:05 Installing Python …...more\n...more\n\n\n\n \n\n\n\n\n \n \n \n \n Chapters\n \n \n \n \n \n \n \n View all\n\n\n \n \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n Intro\n Intro\n 0:00\n \n\n\n \n Intro\n \n 0:00\n\n\n\n\n \n \n \n \n \n Installing Git and cloning Odoo 17 repository\n Installing Git and cloning Odoo 17 repository\n 0:32\n \n\n\n \n Installing Git and cloning Odoo 17 repository\n \n 0:32\n\n\n\n\n \n \n \n \n \n Installing Python\n Installing Python\n 2:05\n \n\n\n \n Installing Python\n \n 2:05\n\n\n\n\n \n \n \n \n \n PostgreSQL installation, environment variable and user setup\n PostgreSQL installation, environment variable and user setup\n 2:48\n \n\n\n \n PostgreSQL installation, environment variable and user setup\n \n 2:48\n\n\n\n\n \n 
\n \n \n \n C++ built tools installation\n C++ built tools installation\n 5:00\n \n\n\n \n C++ built tools installation\n \n 5:00\n\n\n\n\n \n \n \n \n \n Installing an IDE(Vscode)\n Installing an IDE(Vscode)\n 5:55\n \n\n\n \n Installing an IDE(Vscode)\n \n 5:55\n\n\n\n\n \n \n \n \n \n Virtual Environment Setup (Venv)\n Virtual Environment Setup (Venv)\n 6:20\n \n\n\n \n Virtual Environment Setup (Venv)\n \n 6:20\n\n\n\n\n \n \n \n \n \n Venv not activating fix (Execution policy)\n Venv not activating fix (Execution policy)\n 8:15\n \n\n\n \n Venv not activating fix (Execution policy)\n \n 8:15\n\n\n\n\n \n \n \n \n \n \n \n\n\n\n\n\n Transcript\n\n\n Follow along using the transcript.\n\n\n Show transcript\n\n\n\n \n \n Open Source Hustle\n \n 355 subscribers\n \n \n\n\n Videos\n About\n\n\n \n \n \n \n \n \n VideosAbout\n \n \n \n \n \n \n\n\n\n\n \n\n\n Show less","query":"odoo 17 tutorial","source":"youtube"}
{"title":"Create Your Own Dashboard in ODOO 17 Today!!","url":"https://www.youtube.com/watch?v=lCTay0uIPU4&pp=ygUQb2RvbyAxNyB0dXRvcmlhbA%3D%3D","description":"Work With Me - https://odooityourself.com/meet-with-me\nJoin One of My Classes - https://odooityourself.com/slides/odo...\nGet Weekly Tips and Tricks - https://odooityourself.com/odiy-dispatch\n\nWant to build a dashboard in ODOO? ODOO brings all of your data together, and one of the best ways to showcase and leverage that data is with a dashboard. The team at ODOO has built out a lot of these for you already, but if you want to create your own or customize an existing one, you've come to the right place!\n\nWe'll walk you through how to add data to your dashboard, create top-ten lists, drop in scorecards, and add charts.\n\n#odoo #odooerp #erp #erpsoftware","query":"odoo 17 tutorial","source":"youtube"}
{"title":"How to Monitor EVERYTHING in your HomeLab for free - Zabbix Overview","url":"https://www.youtube.com/watch?v=R_EQzBkz4sE&t=453s&pp=ygUaemFiYml4IG1vbml0b3JpbmcgdHV0b3JpYWw%3D","description":"This tutorial goes over the basics of how to use Zabbix for monitoring devices in a HomeLab / business workflow.\n\n#HomeLab #zabbix #selfhosted\n\nHire Me! https://www.spacerex.co/hire-me/\nSupport the Channel & Get Early Access to ALL Videos: /spacerexwill\nPost on the forums: https://forums.spacerex.co/\n\nMore HomeLab Content:\nWhat do I self host: • What is on my Home Servers? - Virtualizati...\n\nWhat to buy*:\nVery powerful Synology for Self Hosting: https://amzn.to/413DwRN\nStand-alone small server: https://amzn.to/3ZLzlsC\n\nDesk accessories (desk pad, keyboard stand, wrist rest)*: https://bit.ly/3qRKix8, discount code SPACEREX for 10% off\n\nTOC\n00:00 Introduction\n01:58 What this video will cover\n02:39 What is Zabbix and most useful features\n07:33 What can you monitor with Zabbix\n10:46 Templates overview\n16:27 Monitoring setup walkthrough using templates\n21:13 Installing Zabbix agent\n28:04 Conclusion\n\n*These are affiliate links, which means that if you purchase a product through one of them, I will receive a small commission (at no additional cost to you). Thank you for supporting my channel!","query":"zabbix monitoring tutorial","source":"youtube"}
{"title":"Zabbix Basic Concepts","url":"https://www.youtube.com/watch?v=7inJAmqyc0g&pp=ygUaemFiYml4IG1vbml0b3JpbmcgdHV0b3JpYWw%3D","description":"In this video we will take a look at key Zabbix concepts, such as Hosts, Items, Triggers and Actions, and how they can be used to start monitoring your IT infrastructure, analyze the collected data and alert you when the data reaches your problem thresholds.\n\n#zabbix series","query":"zabbix monitoring tutorial","source":"youtube"}
{"title":"How to Install Zabbix on Ubuntu | Step-by-Step Guide","url":"https://www.youtube.com/watch?v=FWw-6gi6e5o&pp=ygUaemFiYml4IG1vbml0b3JpbmcgdHV0b3JpYWw%3D","description":"🚀 How to Install Zabbix on Ubuntu | Step-by-Step Guide\n\nIn this video, I’ll show you how to install and configure Zabbix on Ubuntu from start to finish. Whether you’re new to server monitoring or looking for a reliable open-source tool, Zabbix is one of the best solutions for monitoring servers, networks, and applications.\n\n🔹 What you’ll learn in this video:\n- Installing Zabbix on Ubuntu (step by step)\n- Configuring MySQL/MariaDB and Apache for Zabbix\n- Accessing the Zabbix web interface\n- First-time setup and login\n\nBy the end of this tutorial, you’ll have a fully working Zabbix monitoring system running on your Ubuntu server.\n\n✅ Perfect for beginners and system admins who want a powerful monitoring solution!\n\n✨ Useful Links\n🔗 Official Zabbix Docs: https://www.zabbix.com/documentation\n🔗 Ubuntu Server Download: https://ubuntu.com/download/server\n\n💬 If you found this video helpful, don’t forget to Like 👍, Subscribe 🔔, and Share to support the channel!\n\nTags: zabbix, zabbix ubuntu, install zabbix, zabbix tutorial, zabbix server, ubuntu monitoring, linux monitoring, how to install zabbix on ubuntu, zabbix setup","query":"zabbix monitoring tutorial","source":"youtube"}