# AI Agents Framework Tutorial

Welcome to the step-by-step tutorial for building your own AI agent framework!

This tutorial teaches you to build a **lightweight, educational version of LangChain.js**, with the same core concepts and API but simpler implementations designed for learning. Instead of diving into LangChain's complex codebase, you'll rebuild its key patterns yourself with clear, educational code. By the end, you'll understand what frameworks are actually doing, making you far more effective at using them.
**What you'll implement:**

- Runnable interface (LangChain's composability pattern)
- Message types (structured conversations)
- LLM wrappers (model integration)
- Chains (composing operations)
- Agents (decision-making loops)
- Graphs (LangGraph state machines)
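To give a taste of where the tutorial is headed, here is a minimal sketch of the first item, the Runnable pattern. The names `Runnable`, `invoke`, and `pipe` mirror LangChain's API; the implementation below is a deliberately simplified illustration, not the tutorial's actual code:

```typescript
// Minimal Runnable: anything that can be invoked, and piped into the next step.
abstract class Runnable<In, Out> {
  abstract invoke(input: In): Promise<Out>;

  // pipe() feeds this runnable's output into the next one,
  // producing a new composite Runnable.
  pipe<Next>(next: Runnable<Out, Next>): Runnable<In, Next> {
    const self = this;
    return new (class extends Runnable<In, Next> {
      async invoke(input: In): Promise<Next> {
        return next.invoke(await self.invoke(input));
      }
    })();
  }
}

// A tiny concrete Runnable built from a plain function.
class RunnableLambda<In, Out> extends Runnable<In, Out> {
  constructor(private fn: (input: In) => Out | Promise<Out>) { super(); }
  async invoke(input: In): Promise<Out> { return this.fn(input); }
}

// Usage: compose two steps into a pipeline.
const upper = new RunnableLambda((s: string) => s.toUpperCase());
const exclaim = new RunnableLambda((s: string) => s + "!");
const chain = upper.pipe(exclaim);
chain.invoke("hello").then(console.log); // → "HELLO!"
```

Everything else in the tutorial (chains, agents, graphs) composes out of this one interface, which is exactly why LangChain is built around it.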
**What makes this different:**

- LangChain-compatible API (what you learn transfers directly)
- Simpler implementations (much less code, same concepts)
- Educational focus (understanding over completeness)
- Real, working code (not pseudocode or toys)

Build it yourself. Understand it deeply. Use LangChain confidently.
## Before You Start: Why This Tutorial Exists

**You've just built AI agents with node-llama-cpp.** You know how to call LLMs, format prompts, parse responses, and create agent loops. That's awesome: you understand the fundamentals!

**But you probably noticed some friction:**

- Copy-pasting prompt formatting everywhere
- Manually building message arrays each time
- Hard to test individual components
- Difficult to swap out models or reuse patterns
- Agent code that works but feels messy

**This tutorial fixes those problems.** You'll transform the script-style code you wrote into clean, composable abstractions, rebuilding LangChain's core patterns one problem at a time.

**The approach:**

1. Start with problems you've already encountered
2. Build the abstraction that solves each problem
3. See how it connects to LangChain's API
4. Understand frameworks deeply, use them confidently

## Learning Paths
### Part 1: From Scripts to Abstractions

Transform the patterns you already use into reusable components.

**What you'll solve:**

- **Agent code getting messy?** → Build the Runnable pattern for composability
- **Message formatting tedious?** → Create Message types for structure
- **Model switching hard?** → Design LLM wrappers for flexibility
- **Managing conversation state?** → Implement Context for memory

**Lessons:**
- [01-runnable](01-foundation/01-runnable/lesson.md) - The composability pattern ← Start here
- [02-messages](01-foundation/02-messages/lesson.md) - Structured conversation data
- [03-llm-wrapper](01-foundation/03-llm-wrapper/lesson.md) - Model abstraction layer
- [04-context](01-foundation/04-context/lesson.md) - Conversation state management
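As a preview of the 02-messages lesson, here is a sketch of structured message types replacing hand-built `{role, content}` arrays. The class names mirror LangChain's `SystemMessage`/`HumanMessage`/`AIMessage`; the shapes here are simplified for illustration:

```typescript
// Structured messages replace hand-built {role, content} objects,
// so every call site agrees on the shape of a conversation turn.
type Role = "system" | "human" | "ai";

class BaseMessage {
  constructor(public role: Role, public content: string) {}
}
class SystemMessage extends BaseMessage {
  constructor(content: string) { super("system", content); }
}
class HumanMessage extends BaseMessage {
  constructor(content: string) { super("human", content); }
}
class AIMessage extends BaseMessage {
  constructor(content: string) { super("ai", content); }
}

// Convert to the plain wire format a chat API expects.
function toChatFormat(messages: BaseMessage[]) {
  return messages.map((m) => ({ role: m.role, content: m.content }));
}

const convo = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is a Runnable?"),
];
console.log(toChatFormat(convo)); // prints plain {role, content} objects
```

The payoff is that prompt formatting, memory, and model wrappers can all operate on one shared message type instead of ad hoc arrays.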
### Part 2: Composition

Dive deeper into prompt engineering and chain complex operations together.

**What you'll solve:**

- **Copy-pasting prompts everywhere?** → Build reusable PromptTemplates with variables
- **Need structured LLM outputs?** → Create OutputParsers for reliable data extraction
- **Repeating prompt + LLM patterns?** → Design LLMChain to compose operations
- **Want to chain operations together?** → Use piping to connect Runnables
- **LLM forgets conversation history?** → Implement Memory for context persistence

**Lessons:**
- [01-prompts](02-composition/01-prompts/lesson.md) - Template-based prompt engineering
- [02-parsers](02-composition/02-parsers/lesson.md) - Structured output extraction
- [03-llm-chain](02-composition/03-llm-chain/lesson.md) - Composing prompts with models - Coming soon
- [04-piping](02-composition/04-piping/lesson.md) - Building data transformation pipelines - Coming soon
- [05-memory](02-composition/05-memory/lesson.md) - Persistent conversation history - Coming soon
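The first two lessons can be previewed with a sketch like this. The `{variable}` syntax mirrors LangChain's PromptTemplate, and the parser is one of the simplest output parsers; both implementations below are deliberately minimal illustrations:

```typescript
// PromptTemplate: fills {variables} into a template string.
class PromptTemplate {
  constructor(private template: string) {}
  format(vars: Record<string, string>): string {
    // Leave unknown variables untouched so missing values are visible.
    return this.template.replace(/\{(\w+)\}/g, (_, k) => vars[k] ?? `{${k}}`);
  }
}

// OutputParser: turns raw LLM text into structured data.
class CommaSeparatedListOutputParser {
  parse(text: string): string[] {
    return text.split(",").map((s) => s.trim());
  }
}

const prompt = new PromptTemplate("List three {thing}, comma-separated.");
const parser = new CommaSeparatedListOutputParser();

const rendered = prompt.format({ thing: "colors" });
console.log(rendered); // → "List three colors, comma-separated."

// Parsing a hypothetical model reply:
console.log(parser.parse("red, green, blue")); // → ["red", "green", "blue"]
```

In the 03-llm-chain and 04-piping lessons, these become Runnables, so `prompt`, model, and `parser` can be piped into a single chain.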
### Part 3: Agents [Coming Soon]

Agents and tools.

- [01-tools](03-agency/01-tools/lesson.md) - Coming soon
- [02-tool-executor](03-agency/02-tool-executor/lesson.md) - Coming soon
- [03-simple-agent](03-agency/03-simple-agent/lesson.md) - Coming soon
- [04-react-agent](03-agency/04-react-agent/lesson.md) - Coming soon
- [05-structured-agent](03-agency/05-structured-agent/lesson.md) - Coming soon
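These lessons are still in progress, but the core idea is a decision-making loop: the LLM either requests a tool call or returns a final answer. Here is a hedged sketch of that loop; the `Tool`, `AgentStep`, and `runAgent` names are hypothetical stand-ins for what the lessons will build, and `decide` stubs out the LLM call:

```typescript
// A tool pairs a name/description with a function the agent can call.
interface Tool {
  name: string;
  description: string;
  call(input: string): Promise<string>;
}

// What the "LLM" decides at each step: call a tool, or finish.
type AgentStep =
  | { type: "tool"; tool: string; input: string }
  | { type: "final"; output: string };

// The agent loop: decide, act, observe, repeat (up to maxSteps).
async function runAgent(
  decide: (scratchpad: string) => Promise<AgentStep>,
  tools: Tool[],
  question: string,
  maxSteps = 5
): Promise<string> {
  let scratchpad = `Question: ${question}`;
  for (let i = 0; i < maxSteps; i++) {
    const step = await decide(scratchpad);
    if (step.type === "final") return step.output;
    const tool = tools.find((t) => t.name === step.tool);
    const observation = tool
      ? await tool.call(step.input)
      : `Unknown tool: ${step.tool}`;
    scratchpad += `\nAction: ${step.tool}(${step.input})\nObservation: ${observation}`;
  }
  return "Gave up after too many steps.";
}
```

A ReAct-style agent (lesson 04) layers explicit Thought/Action/Observation prompt formatting on top of this same loop.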
### Part 4: Graphs [Coming Soon]

State machines and workflows.

- [01-state-basics](04-graphs/01-state-basics/lesson.md) - Coming soon
- [02-channels](04-graphs/02-channels/lesson.md) - Coming soon
- [03-conditional-edges](04-graphs/03-conditional-edges/lesson.md) - Coming soon
- [04-executor](04-graphs/04-executor/lesson.md) - Coming soon
- [05-checkpointing](04-graphs/05-checkpointing/lesson.md) - Coming soon
- [06-agent-graph](04-graphs/06-agent-graph/lesson.md) - Coming soon
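As a preview of the state-machine idea behind these lessons, here is a minimal sketch of a graph executor with conditional edges. The API loosely echoes LangGraph's `StateGraph`/`addNode`/`addEdge`/`END` names but is a simplified assumption, not LangGraph's actual implementation:

```typescript
// A minimal state graph: named nodes transform shared state, and each
// edge picks the next node (possibly conditionally) from that state.
type State = Record<string, unknown>;
type NodeFn = (state: State) => State;

const END = "__end__";

class StateGraph {
  private nodes = new Map<string, NodeFn>();
  private edges = new Map<string, (state: State) => string>();

  addNode(name: string, fn: NodeFn) { this.nodes.set(name, fn); return this; }
  addEdge(from: string, to: string | ((s: State) => string)) {
    this.edges.set(from, typeof to === "string" ? () => to : to);
    return this;
  }
  run(start: string, state: State): State {
    let current = start;
    while (current !== END) {
      const node = this.nodes.get(current);
      if (!node) throw new Error(`Unknown node: ${current}`);
      state = node(state);
      const pickNext = this.edges.get(current);
      current = pickNext ? pickNext(state) : END;
    }
    return state;
  }
}

// Usage: a one-node loop that increments until a condition is met.
const graph = new StateGraph()
  .addNode("increment", (s) => ({ ...s, count: (s.count as number) + 1 }))
  .addEdge("increment", (s) => ((s.count as number) < 3 ? "increment" : END));

console.log(graph.run("increment", { count: 0 })); // → { count: 3 }
```

The conditional edge is what turns a linear chain into a workflow: the same mechanism routes an agent between "call a tool" and "finish" in the final agent-graph lesson.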
## Capstone Projects

Complete these capstone projects to solidify your learning:

- [Smart Email Classifier](projects/01-smart-email-classifier) - After Part 1
- [Research Agent](projects/research-agent/) - Coming soon
- [Task Automation](projects/task-automation/) - Coming soon
- [Approval Workflow](projects/approval-workflow/) - Coming soon

## How to Use This Tutorial

1. Start with Part 1 and work sequentially
2. Read the markdown lessons
3. Complete the exercises
4. Check solutions when stuck
5. Build the projects

Happy learning! 🚀