{"instruction": "What's wrong with this n8n workflow that keeps running forever?", "output": "**Anti-Pattern: Infinite Loop Detection**\n\nThe workflow creates an infinite loop because a webhook triggers itself:\n\n```\nWebhook → HTTP Request (calls same webhook URL) → Webhook triggers again\n```\n\n**Why It's Bad:**\n- Consumes all available executions\n- Can crash the n8n instance\n- May exceed API rate limits\n- Uses memory until system fails\n\n**How to Fix:**\n1. Add a loop counter in workflow data\n2. Use IF node to check counter before HTTP call\n3. Add unique execution ID to prevent re-triggering\n4. Use a Set node to track processed items\n\n```javascript\n// In Code node - add loop protection\nconst MAX_ITERATIONS = 10;\nconst currentCount = $input.first().json.loopCount || 0;\n\nif (currentCount >= MAX_ITERATIONS) {\n  throw new Error('Loop limit reached');\n}\n\nreturn { loopCount: currentCount + 1 };\n```", "metadata": {"topic": "anti_pattern", "type": "infinite_loop", "severity": "critical"}}
{"instruction": "Why is my n8n workflow using too much memory?", "output": "**Anti-Pattern: Memory Leak in Workflow**\n\nCommon causes of memory issues in n8n:\n\n**1. Loading Entire Datasets**\n```javascript\n// BAD: Loads everything into memory\nconst allRecords = await getAll(); // 1 million rows\n```\n\n**Solution: Use pagination**\n```javascript\n// GOOD: Process in batches\nconst batchSize = 100;\nfor (let offset = 0; ; offset += batchSize) {\n  const batch = await getPage(offset, batchSize);\n  if (batch.length === 0) break;\n  // Process batch\n}\n```\n\n**2. Not Cleaning Up Binary Data**\n```javascript\n// BAD: Retaining binary data through workflow\nreturn items; // Carries all attachments forward\n```\n\n**Solution: Drop binary when not needed**\n```javascript\n// GOOD: Remove binary data after processing\nreturn items.map(item => ({\n  json: item.json,\n  binary: {} // Clear binary\n}));\n```\n\n**3. Using Item Lists node without limits**\n- Always set a maximum limit\n- Use Split In Batches to process chunks\n- Enable 'Execute Once' when possible", "metadata": {"topic": "anti_pattern", "type": "memory_leak", "severity": "high"}}
{"instruction": "What security mistakes should I avoid in n8n workflows?", "output": "**Security Anti-Patterns in n8n**\n\n**1. Hardcoded Credentials (CRITICAL)**\n```javascript\n// NEVER DO THIS\nconst apiKey = \"sk-live-abc123def456\";\n```\n\n**Fix: Use n8n Credentials**\n- Create credential in n8n UI\n- Reference via credential node parameter\n- Credentials are encrypted at rest\n\n**2. Exposing Sensitive Data in Logs**\n```javascript\n// BAD: Logs contain secrets\nconsole.log('Request:', JSON.stringify(requestWithApiKey));\n```\n\n**Fix: Sanitize before logging**\n```javascript\nconst sanitized = { ...request, apiKey: '[REDACTED]' };\nconsole.log('Request:', JSON.stringify(sanitized));\n```\n\n**3. No Webhook Authentication**\n```javascript\n// BAD: Public webhook anyone can call\n// Webhook → Process sensitive data\n```\n\n**Fix: Add authentication**\n- Enable 'Authentication' in webhook node\n- Use Header Auth with secret key\n- Validate IP whitelist in Code node\n\n**4. SQL Injection Vulnerability**\n```javascript\n// BAD: Direct string interpolation\nconst query = `SELECT * FROM users WHERE id = ${userId}`;\n```\n\n**Fix: Use parameterized queries**\n```javascript\nconst query = 'SELECT * FROM users WHERE id = $1';\nconst params = [userId];\n```", "metadata": {"topic": "anti_pattern", "type": "security", "severity": "critical"}}
{"instruction": "How do I prevent API abuse in my n8n workflows?", "output": "**Preventing API Abuse in n8n**\n\n**1. Rate Limiting Pattern**\n```javascript\n// Add rate limiting in Code node\nconst RATE_LIMIT = 10; // requests per minute\nconst cacheKey = `rate_limit_${$input.first().json.clientId}`;\n\n// Check Redis/memory for rate count\nconst currentCount = await getFromCache(cacheKey) || 0;\nif (currentCount >= RATE_LIMIT) {\n  throw new Error('Rate limit exceeded. Try again in 1 minute.');\n}\n\nawait setInCache(cacheKey, currentCount + 1, 60); // TTL 60 seconds\n```\n\n**2. Input Validation**\n```javascript\n// Validate all webhook inputs\nconst { email, amount } = $input.first().json;\n\nif (!email || !email.includes('@')) {\n  throw new Error('Invalid email format');\n}\n\nif (typeof amount !== 'number' || amount < 0 || amount > 10000) {\n  throw new Error('Invalid amount: must be 0-10000');\n}\n```\n\n**3. Request Size Limits**\n- Set max body size in n8n config\n- Validate file sizes before processing\n- Reject oversized payloads early\n\n**4. Timeout Protection**\n- Set execution timeout in workflow settings\n- Add timeout to HTTP Request nodes\n- Use circuit breaker pattern for external APIs", "metadata": {"topic": "anti_pattern", "type": "api_abuse", "severity": "high"}}
{"instruction": "What performance mistakes slow down n8n workflows?", "output": "**Performance Anti-Patterns in n8n**\n\n**1. Synchronous Processing of Large Lists**\n```javascript\n// BAD: Processes 10,000 items sequentially\nfor (const item of items) {\n  await processItem(item); // Waits for each\n}\n```\n\n**Fix: Use batching and parallelism**\n- Split In Batches node (set batch size 50-100)\n- Enable 'Continue On Fail' to not block on errors\n\n**2. No Caching for Repeated Lookups**\n```javascript\n// BAD: Fetches same config for every item\nfor (const item of items) {\n  const config = await fetchConfig(); // Same every time!\n}\n```\n\n**Fix: Cache common data**\n```javascript\n// Fetch once, reuse\nconst config = await fetchConfig();\nreturn items.map(item => ({\n  ...item,\n  config\n}));\n```\n\n**3. Fetching Unnecessary Fields**\n```javascript\n// BAD: Gets entire object when only need ID\nconst users = await getUsers(); // Returns 50 fields each\nconst ids = users.map(u => u.id); // Only using 1 field\n```\n\n**Fix: Use API field selection**\n```javascript\nconst users = await getUsers({ fields: 'id' });\n```\n\n**4. Not Using Pagination**\n```javascript\n// BAD: Tries to load all records\nconst allOrders = await api.getOrders(); // 1 million records\n```\n\n**Fix: Stream with pagination**\n```javascript\nlet page = 1;\nwhile (true) {\n  const orders = await api.getOrders({ page, limit: 100 });\n  if (!orders.length) break;\n  await processPage(orders); // Process page (yield is only valid inside a generator)\n  page++;\n}\n```", "metadata": {"topic": "anti_pattern", "type": "performance", "severity": "medium"}}
{"instruction": "What architecture mistakes should I avoid when building n8n workflows?", "output": "**Architecture Anti-Patterns in n8n**\n\n**1. Monolithic Mega-Workflows**\n- Single workflow with 100+ nodes\n- Hard to debug, maintain, or modify\n- One failure crashes everything\n\n**Fix: Decompose into sub-workflows**\n```\nMain Workflow\n  → Execute Workflow: Fetch Data\n  → Execute Workflow: Transform Data\n  → Execute Workflow: Send Notifications\n```\n\n**2. Tight Coupling to External Services**\n```javascript\n// BAD: Direct calls everywhere\nawait sendgrid.send(email);\nawait sendgrid.send(email2);\nawait sendgrid.send(email3);\n```\n\n**Fix: Abstract through wrapper nodes**\n- Create reusable sub-workflow for email\n- Easy to swap providers later\n- Single place to add logging/retry\n\n**3. No Error Handling Strategy**\n```\nStart → Fetch → Transform → Save\n// If Transform fails, entire workflow fails\n```\n\n**Fix: Add error boundaries**\n```\nStart → Fetch → Error Trigger attached\n                  ↓\n              Transform → Error Trigger attached\n                            ↓\n                         Save → Success notification\n```\n\n**4. Hardcoded Business Logic**\n```javascript\n// BAD: Magic numbers in code\nif (order.total > 500) { // Why 500?\n  applyDiscount(0.1); // Why 10%?\n}\n```\n\n**Fix: Externalize configuration**\n- Store thresholds in database\n- Use environment variables\n- Create config sub-workflow", "metadata": {"topic": "anti_pattern", "type": "architecture", "severity": "medium"}}
{"instruction": "How do I handle 'ECONNREFUSED' errors in n8n HTTP requests?", "output": "**Troubleshooting ECONNREFUSED in n8n**\n\nThis error means the target server refused the connection.\n\n**Common Causes & Fixes:**\n\n**1. Service Not Running**\n```bash\n# Check if service is running\ncurl http://localhost:3000/health\n# If it fails, start the service\ndocker-compose up -d\n```\n\n**2. Incorrect URL/Port**\n- Verify the URL in HTTP Request node\n- Check port number matches service config\n- Use internal Docker network names if in containers\n\n**3. Firewall Blocking**\n```bash\n# Check firewall rules\nsudo ufw status\n# Allow the port\nsudo ufw allow 3000\n```\n\n**4. n8n in Docker Can't Reach localhost**\n```yaml\n# docker-compose.yml fix\nservices:\n  n8n:\n    extra_hosts:\n      - \"host.docker.internal:host-gateway\"\n```\nThen use `http://host.docker.internal:3000` instead of `localhost`\n\n**5. Retry Strategy**\n```javascript\n// Add retry logic in Code node\nconst maxRetries = 3;\nfor (let i = 0; i < maxRetries; i++) {\n  try {\n    return await makeRequest();\n  } catch (e) {\n    if (i === maxRetries - 1) throw e;\n    // Exponential backoff (there is no built-in sleep())\n    await new Promise(r => setTimeout(r, 1000 * Math.pow(2, i)));\n  }\n}\n```", "metadata": {"topic": "troubleshooting", "error_type": "connection", "severity": "high"}}
{"instruction": "How do I debug '401 Unauthorized' errors in n8n?", "output": "**Debugging 401 Unauthorized in n8n**\n\n**Step 1: Verify Credentials**\n- Go to Credentials tab\n- Edit the credential and test connection\n- Check if API key/token is expired\n\n**Step 2: Check Authentication Method**\n| API Expects | n8n Setting |\n|-------------|-------------|\n| Bearer Token | Header Auth with 'Authorization: Bearer xxx' |\n| API Key Header | Header Auth with 'X-API-Key: xxx' |\n| Basic Auth | Credential type: HTTP Basic Auth |\n| OAuth2 | Credential type: OAuth2 API |\n\n**Step 3: Verify Headers**\n```javascript\n// In HTTP Request node, check these settings\n// (store the real token in a Header Auth credential, not as a literal value):\n{\n  \"headerParameters\": {\n    \"Authorization\": \"Bearer YOUR_TOKEN\"\n  }\n}\n```\n\n**Step 4: Test Outside n8n**\n```bash\n# Verify API works with curl\ncurl -H \"Authorization: Bearer YOUR_TOKEN\" \\\n  https://api.example.com/endpoint\n```\n\n**Step 5: Common Fixes**\n- Regenerate API key if expired\n- Check if using test vs production keys\n- Verify OAuth scopes include required permissions\n- Check if IP whitelist is blocking n8n server\n\n**Step 6: Debug Mode**\n```bash\n# Enable verbose logging, then re-run the workflow\nN8N_LOG_LEVEL=debug\n```\nCode nodes cannot read the outgoing request or credentials directly, so inspect each node's input and output in the execution data view instead.", "metadata": {"topic": "troubleshooting", "error_type": "auth", "severity": "high"}}
{"instruction": "How do I handle rate limiting errors (429) in n8n?", "output": "**Handling Rate Limits (429 Too Many Requests) in n8n**\n\n**1. Enable Built-in Retry**\nIn HTTP Request node settings:\n- Enable 'Retry On Fail'\n- Set 'Max Retries': 3\n- Set 'Wait Between Retries': 1000ms\n\n**2. Implement Exponential Backoff**\n```javascript\n// Code node with smart retry\nasync function withBackoff(fn, maxRetries = 5) {\n  for (let i = 0; i < maxRetries; i++) {\n    try {\n      return await fn();\n    } catch (error) {\n      if (error.response?.status !== 429) throw error;\n      \n      // Get retry-after header or calculate\n      const retryAfter = error.response.headers['retry-after'];\n      const waitTime = retryAfter \n        ? parseInt(retryAfter) * 1000 \n        : Math.pow(2, i) * 1000;\n      \n      console.log(`Rate limited. Waiting ${waitTime}ms`);\n      await new Promise(r => setTimeout(r, waitTime));\n    }\n  }\n  throw new Error('Max retries exceeded');\n}\n```\n\n**3. Add Rate Limiting to Workflow**\n- Use 'Split In Batches' with 'Wait Between Batches'\n- Set batch time based on API limits (e.g., 1000ms for 60/min)\n\n**4. Queue-Based Processing**\n```\nWebhook → Add to Queue (Redis/Database)\n\nSeparate workflow:\nSchedule (every minute) → Get 50 from Queue → Process → Mark Complete\n```\n\n**5. Monitor and Alert**\n- Log rate limit occurrences\n- Set up alerting when rate limits spike\n- Consider upgrading API tier if consistently hitting limits", "metadata": {"topic": "troubleshooting", "error_type": "rate_limit", "severity": "medium"}}
{"instruction": "How do I debug 'undefined' or null values in n8n expressions?", "output": "**Debugging Undefined/Null Values in n8n Expressions**\n\n**1. Use Expression Editor Debug Mode**\nClick the expression field, use the built-in preview to see actual values.\n\n**2. Safe Property Access**\n```javascript\n// BAD: Crashes if customer is undefined\n{{ $json.customer.email }}\n\n// GOOD: Safe navigation\n{{ $json.customer?.email ?? 'no-email@example.com' }}\n\n// GOOD: With fallback\n{{ $json.customer?.email || 'default@email.com' }}\n```\n\n**3. Add Debug Code Node**\n```javascript\n// Insert after problematic node\nconsole.log('Full input:', JSON.stringify($input.all(), null, 2));\nconsole.log('First item json:', $input.first().json);\nconsole.log('Target field:', $input.first().json.customer);\n\nreturn $input.all();\n```\n\n**4. Check Data Flow**\n- Add 'Execution Data' view in workflow\n- Click each node to see output\n- Verify previous node is outputting expected structure\n\n**5. Common Causes**\n| Symptom | Likely Cause |\n|---------|--------------|\n| `undefined` | Field name typo or missing field |\n| `null` | API returned null value |\n| `[object Object]` | Need to access nested property |\n| Empty string | Field exists but has no value |\n\n**6. Defensive Code Pattern**\n```javascript\nconst data = $input.first().json;\n\n// Validate required fields\nconst required = ['email', 'name', 'orderId'];\nfor (const field of required) {\n  if (!data[field]) {\n    throw new Error(`Missing required field: ${field}`);\n  }\n}\n```", "metadata": {"topic": "troubleshooting", "error_type": "data_access", "severity": "medium"}}
{"instruction": "How do I handle timeout errors in n8n?", "output": "**Handling Timeout Errors in n8n**\n\n**1. Increase Node-Level Timeout**\nIn HTTP Request node:\n- Settings → Timeout: 60000 (60 seconds)\n\n**2. Increase Workflow Execution Timeout**\n```bash\n# Environment variable\nEXECUTIONS_TIMEOUT=600  # 10 minutes\nEXECUTIONS_TIMEOUT_MAX=3600  # Max 1 hour\n```\n\n**3. Handle Long Operations Asynchronously**\n```\nStart → Trigger long job (returns job ID)\n     → Wait node (30 seconds)\n     → Poll job status\n     → IF complete → Continue\n        ELSE → Loop back to Wait\n```\n\n**4. Implement Timeout Wrapper**\n```javascript\n// Code node with timeout protection\nasync function withTimeout(promise, ms) {\n  const timeout = new Promise((_, reject) => \n    setTimeout(() => reject(new Error('Operation timed out')), ms)\n  );\n  return Promise.race([promise, timeout]);\n}\n\ntry {\n  const result = await withTimeout(\n    this.helpers.httpRequest({ url: 'https://slow-api.com/data' }),\n    30000 // 30 second timeout\n  );\n  return { result };\n} catch (e) {\n  return { error: e.message, fallback: true };\n}\n```\n\n**5. Chunk Large Operations**\nInstead of processing 10,000 items at once:\n```\nSplit In Batches (100 items) → Process → Merge results\n```\n\n**6. Use Webhook Callbacks for Long Jobs**\n```\nRequest → Start external job with callback URL\n         → Return immediately\n\nWebhook listens → Job complete callback → Continue processing\n```", "metadata": {"topic": "troubleshooting", "error_type": "timeout", "severity": "medium"}}
{"instruction": "How do I build multi-tenant workflows in n8n?", "output": "**Multi-Tenant Architecture in n8n**\n\n**1. Tenant Identification**\n```javascript\n// Extract tenant from request\nconst tenantId = $input.first().json.headers['x-tenant-id']\n  || $input.first().json.subdomain\n  || 'default';\n\n// Validate tenant\nconst validTenants = await getTenantList();\nif (!validTenants.includes(tenantId)) {\n  throw new Error('Invalid tenant');\n}\n\nreturn { tenantId, ...$input.first().json };\n```\n\n**2. Tenant-Specific Credentials**\n```javascript\n// Store credentials per tenant in database\nconst tenantConfig = await db.query(\n  'SELECT api_key, webhook_url FROM tenant_config WHERE id = $1',\n  [tenantId]\n);\n\n// Use tenant-specific settings\nreturn { \n  apiKey: tenantConfig.api_key,\n  webhookUrl: tenantConfig.webhook_url\n};\n```\n\n**3. Data Isolation Pattern**\n```javascript\n// Always filter by tenant\nconst orders = await db.query(\n  'SELECT * FROM orders WHERE tenant_id = $1 AND status = $2',\n  [tenantId, 'pending']\n);\n\n// Never return data without tenant filter\n```\n\n**4. Workflow Per Tenant (Enterprise)**\n```\nMain Router Workflow\n  → Switch node on tenantId\n  → Execute Workflow: tenant_a_customizations\n  → Execute Workflow: tenant_b_customizations\n  → Common post-processing\n```\n\n**5. Audit Logging**\n```javascript\n// Log all operations with tenant context\nawait logAudit({\n  tenantId,\n  action: 'order_created',\n  userId: $input.first().json.userId,\n  timestamp: new Date().toISOString(),\n  details: { orderId }\n});\n```", "metadata": {"topic": "enterprise", "pattern": "multi_tenant", "complexity": "advanced"}}
{"instruction": "How do I set up high-availability n8n in production?", "output": "**High-Availability n8n Architecture**\n\n**1. Multi-Instance Setup**\n```yaml\n# docker-compose-ha.yml\nservices:\n  n8n-main:\n    image: n8nio/n8n\n    environment:\n      - EXECUTIONS_MODE=queue\n      - QUEUE_BULL_REDIS_HOST=redis\n    deploy:\n      replicas: 2\n      \n  n8n-worker:\n    image: n8nio/n8n\n    command: worker\n    environment:\n      - EXECUTIONS_MODE=queue\n      - QUEUE_BULL_REDIS_HOST=redis\n    deploy:\n      replicas: 4\n      \n  redis:\n    image: redis:alpine\n    \n  postgres:\n    image: postgres:14\n```\n\n**2. Load Balancer Configuration**\n```nginx\nupstream n8n_cluster {\n    least_conn;\n    server n8n-1:5678;\n    server n8n-2:5678;\n    keepalive 32;\n}\n\nserver {\n    location / {\n        proxy_pass http://n8n_cluster;\n        proxy_http_version 1.1;\n        proxy_set_header Connection \"\";\n    }\n}\n```\n\n**3. Database High Availability**\n- Use PostgreSQL with streaming replication\n- Or managed service (AWS RDS, Google Cloud SQL)\n- Enable connection pooling (PgBouncer)\n\n**4. Redis Cluster for Queue**\n```yaml\nredis:\n  image: redis:alpine\n  command: redis-server --appendonly yes\n  volumes:\n    - redis_data:/data\n  deploy:\n    replicas: 3\n```\n\n**5. Health Checks**\n```yaml\nhealthcheck:\n  test: [\"CMD\", \"wget\", \"-q\", \"--spider\", \"http://localhost:5678/healthz\"]\n  interval: 30s\n  timeout: 10s\n  retries: 3\n```\n\n**6. Auto-Scaling**\n```yaml\n# Kubernetes HPA\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nspec:\n  minReplicas: 2\n  maxReplicas: 10\n  metrics:\n    - type: Resource\n      resource:\n        name: cpu\n        target:\n          averageUtilization: 70\n```", "metadata": {"topic": "enterprise", "pattern": "high_availability", "complexity": "enterprise"}}
{"instruction": "How do I implement SSO/SAML authentication for n8n?", "output": "**SSO/SAML Authentication for n8n**\n\n**1. SAML Configuration**\n```bash\n# Environment variables for SAML\nN8N_SSO_SAML_ENABLED=true\nN8N_SSO_SAML_METADATA_URL=https://idp.company.com/metadata.xml\nN8N_SSO_SAML_ISSUER=n8n-production\nN8N_SSO_SAML_CALLBACK_URL=https://n8n.company.com/rest/sso/saml/callback\n```\n\n**2. Okta Integration**\n```xml\n<!-- SAML Attribute Mapping -->\n<Attribute Name=\"email\" NameFormat=\"urn:oasis:names:tc:SAML:2.0:attrname-format:basic\">\n  <AttributeValue>user.email</AttributeValue>\n</Attribute>\n<Attribute Name=\"firstName\" ...>user.firstName</Attribute>\n<Attribute Name=\"lastName\" ...>user.lastName</Attribute>\n```\n\n**3. Azure AD Setup**\n1. Register n8n as Enterprise Application\n2. Configure SAML SSO with:\n   - Identifier: `n8n-production`\n   - Reply URL: `https://n8n.company.com/rest/sso/saml/callback`\n3. Download Federation Metadata XML\n4. Set `N8N_SSO_SAML_METADATA_URL` to metadata URL\n\n**4. Role Mapping**\n```javascript\n// Map SAML groups to n8n roles\nconst samlGroups = user.attributes['memberOf'];\nconst roleMapping = {\n  'n8n-admins': 'owner',\n  'n8n-editors': 'admin',\n  'n8n-viewers': 'member'\n};\n\nconst n8nRole = samlGroups\n  .map(g => roleMapping[g])\n  .find(r => r) || 'member';\n```\n\n**5. Just-In-Time Provisioning**\n- Enable auto-creation of users on first login\n- Map SAML attributes to n8n user fields\n- Auto-assign to default workspace\n\n**6. Session Management**\n```bash\n# Session configuration\nN8N_USER_MANAGEMENT_JWT_SECRET=your-secure-secret\nN8N_USER_MANAGEMENT_JWT_DURATION_HOURS=8\n```", "metadata": {"topic": "enterprise", "pattern": "sso_saml", "complexity": "enterprise"}}
{"instruction": "How do I implement enterprise security compliance in n8n?", "output": "**Enterprise Security Compliance for n8n**\n\n**1. Encryption at Rest**\n```bash\n# n8n encrypts stored credentials with this key; keep it stable across restarts\nN8N_ENCRYPTION_KEY=your-32-character-encryption-key\n```\nFor workflow and execution data, use disk- or database-level encryption (e.g. encrypted volumes, RDS encryption); TLS settings protect data in transit, not at rest.\n\n**2. Encryption in Transit**\n```nginx\n# Force HTTPS\nserver {\n    listen 443 ssl;\n    ssl_certificate /etc/ssl/certs/n8n.crt;\n    ssl_certificate_key /etc/ssl/private/n8n.key;\n    ssl_protocols TLSv1.2 TLSv1.3;\n}\n```\n\n**3. Audit Logging**\n```javascript\n// Custom audit logging sub-workflow\n// Called at start/end of sensitive workflows\nconst auditLog = {\n  timestamp: new Date().toISOString(),\n  workflowId: $workflow.id,\n  workflowName: $workflow.name,\n  executionId: $execution.id,\n  userId: $execution.customData?.userId,\n  action: 'workflow_executed',\n  ipAddress: $input.first().json.headers['x-forwarded-for'],\n  userAgent: $input.first().json.headers['user-agent']\n};\n\nawait sendToSIEM(auditLog);\n```\n\n**4. Data Masking**\n```javascript\n// Mask sensitive data in logs/outputs\nfunction maskSensitive(data) {\n  const sensitiveFields = ['ssn', 'creditCard', 'password', 'apiKey'];\n  const masked = { ...data };\n  \n  for (const field of sensitiveFields) {\n    if (masked[field]) {\n      masked[field] = '***MASKED***';\n    }\n  }\n  return masked;\n}\n```\n\n**5. Access Control**\n- Enable RBAC in n8n settings\n- Create role-based workflow folders\n- Use credential sharing with minimal scope\n\n**6. Compliance Checklist**\n| Standard | n8n Configuration |\n|----------|-------------------|\n| SOC 2 | Audit logs, encryption, access control |\n| HIPAA | Data encryption, audit trails, BAA with hosting |\n| GDPR | Data retention policies, anonymization |\n| PCI-DSS | Network segmentation, encryption, logging |", "metadata": {"topic": "enterprise", "pattern": "security_compliance", "complexity": "enterprise"}}