  • Split the data you process into smaller chunks. For example, instead of fetching 10,000 rows in a single execution, process 200 rows per execution.
  • Avoid using the Code node where possible.
  • Avoid manual executions when processing larger amounts of data.
  • Split the workflow up into sub-workflows and ensure each sub-workflow returns a limited amount of data to its parent workflow.
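The chunking idea behind the first point can be sketched in plain JavaScript. This is a hypothetical illustration, not n8n code: in n8n itself, the Loop Over Items node performs this splitting for you, and the per-batch work would happen inside a sub-workflow.

```javascript
// Yield fixed-size slices of an array so only one chunk needs to be
// handled at a time, rather than the whole data set at once.
function* chunks(rows, size) {
  for (let i = 0; i < rows.length; i += size) {
    yield rows.slice(i, i + size);
  }
}

// Example: 1,000 mock rows processed 200 at a time (5 batches).
const rows = Array.from({ length: 1000 }, (_, i) => ({ id: i }));
let batches = 0;
for (const batch of chunks(rows, 200)) {
  batches += 1; // in n8n, each iteration maps to one sub-workflow call
}
console.log(batches); // 5
```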

Splitting the workflow might seem counterintuitive at first, as it usually requires adding at least two more nodes: the Loop Over Items node, to split the items into smaller batches, and the Execute Workflow node, to start the sub-workflow.

However, as long as your sub-workflow does the heavy lifting for each batch and returns only a small result set to the main workflow, this reduces memory consumption: the sub-workflow holds only the current batch's data in memory, and that memory is freed once the batch completes.
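The "small result set" pattern can be sketched as follows. This is a hedged illustration in plain JavaScript, not real n8n code: `processBatch` is a hypothetical stand-in for the sub-workflow's heavy per-batch work, and the key point is that only a compact summary flows back to the parent.

```javascript
// Hypothetical sub-workflow body: do the heavy work on each item in
// the batch, but return only a small summary object to the parent,
// so the parent never accumulates the full data set in memory.
function processBatch(batch) {
  let failed = 0;
  for (const item of batch) {
    // ...heavy per-item work would happen here...
    if (item.id < 0) failed += 1; // placeholder failure condition
  }
  return { processed: batch.length, failed }; // small result set
}

// Parent side: aggregate tiny summaries instead of full batch data.
const batch = Array.from({ length: 200 }, (_, i) => ({ id: i }));
const summary = processBatch(batch);
console.log(summary); // { processed: 200, failed: 0 }
```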