title: Email Classifier Agent
emoji: 🤖
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
app_port: 7860
Read the full interactive version: 👉 https://agentsfromscratch.com
This repository is part of AI Agents From Scratch - a hands-on learning series where we build AI agents step by step, explain every design decision, and visualize what's happening under the hood.
If you prefer long-form explanations, diagrams, and conceptual deep dives, start there - then come back here to explore the code.
AI Agents From Scratch
Learn to build AI agents locally without frameworks. Understand what happens under the hood before using production frameworks.
Purpose
This repository teaches you to build AI agents from first principles using local LLMs and node-llama-cpp. By working through these examples, you'll understand:
- How LLMs work at a fundamental level
- What agents really are (LLM + tools + patterns)
- How different agent architectures function
- Why frameworks make certain design choices
Philosophy: Learn by building. Understand deeply, then use frameworks wisely.
Related Projects
AI Product from Scratch
Learn AI product development fundamentals with local LLMs. Covers prompt engineering, structured output, multi-step reasoning, API design, and frontend integration through 10 comprehensive lessons with visual diagrams.
AI Agents from Scratch in Python
Next Phase: Build LangChain & LangGraph Concepts From Scratch
After mastering the fundamentals, the next stage of this project walks you through re-implementing the core parts of LangChain and LangGraph in plain JavaScript using local models. This is not about building a new framework; it's about understanding how frameworks work.
Phase 1: Agent Fundamentals - From LLMs to ReAct
Prerequisites
- Node.js 18+
- At least 8GB RAM (16GB recommended)
- Download models and place them in the ./models/ folder (details in DOWNLOAD.md)
Installation
npm install
Run Examples
node intro/intro.js
node simple-agent/simple-agent.js
node react-agent/react-agent.js
Learning Path
Follow these examples in order to build understanding progressively:
1. Introduction - Basic LLM Interaction
intro/ | Code | Code Explanation | Concepts
What you'll learn:
- Loading and running a local LLM
- Basic prompt/response cycle
Key concepts: Model loading, context, inference pipeline, token generation
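The load-and-prompt cycle above can be sketched with node-llama-cpp's v3 API. The model filename below is a placeholder - substitute whichever GGUF file you placed in ./models/ (see DOWNLOAD.md):

```javascript
import fs from "node:fs";
import path from "node:path";

// Placeholder filename - use any GGUF model you downloaded per DOWNLOAD.md.
const modelPath = path.join("models", "llama-3.2-3b-instruct.Q4_K_M.gguf");

async function main() {
  // Dynamic import keeps this sketch harmless when the dependency
  // or the model file is missing.
  const { getLlama, LlamaChatSession } = await import("node-llama-cpp");

  const llama = await getLlama();                     // initialize the runtime
  const model = await llama.loadModel({ modelPath }); // load GGUF weights
  const context = await model.createContext();        // allocate a context window
  const session = new LlamaChatSession({
    contextSequence: context.getSequence(),           // one sequence = one conversation
  });

  // The basic prompt/response cycle: your text is tokenized,
  // the model generates tokens, and they are decoded back to text.
  const answer = await session.prompt("Explain what a token is, in one sentence.");
  console.log(answer);
}

if (fs.existsSync(modelPath)) {
  await main();
} else {
  console.log(`No model at ${modelPath} - see DOWNLOAD.md.`);
}
```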
2. (Optional) OpenAI Intro - Using Proprietary Models
openai-intro/ | Code | Code Explanation | Concepts
What you'll learn:
- How to call hosted LLMs (like GPT-4)
- Temperature control
- Token usage
Key concepts: Inference endpoints, network latency, cost vs control, data privacy, vendor dependence
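Hosted endpoints return a usage object with every response, which makes cost tracking straightforward. A small sketch - the per-million-token prices here are placeholders, not any provider's current rates:

```javascript
// Estimate the dollar cost of one request from OpenAI-style usage data.
// Prices are illustrative placeholders; check your provider's pricing page.
function estimateCostUSD(usage, pricePerMTokInput, pricePerMTokOutput) {
  const inputCost = (usage.prompt_tokens / 1_000_000) * pricePerMTokInput;
  const outputCost = (usage.completion_tokens / 1_000_000) * pricePerMTokOutput;
  return inputCost + outputCost;
}

// A usage object as it appears in a chat completion response:
const usage = { prompt_tokens: 1200, completion_tokens: 300 };
const cost = estimateCostUSD(usage, 2.5, 10); // placeholder $ per 1M tokens
console.log(`~$${cost.toFixed(6)} for this request`);
```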
3. Translation - System Prompts & Specialization
translation/ | Code | Code Explanation | Concepts
What you'll learn:
- Using system prompts to specialize agents
- Output format control
- Role-based behavior
- Chat wrappers for different models
Key concepts: System prompts, agent specialization, behavioral constraints, prompt engineering
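The specialization pattern boils down to what you put in the system message. A sketch of the standard chat-format message array (the exact chat wrapper varies per model family):

```javascript
// The system prompt pins down role, task, and output format;
// the user message carries only the data to act on.
function buildTranslationMessages(targetLanguage, text) {
  return [
    {
      role: "system",
      content:
        `You are a translation engine. Translate the user's text into ` +
        `${targetLanguage}. Output only the translation, no commentary.`,
    },
    { role: "user", content: text },
  ];
}

const messages = buildTranslationMessages("German", "Good morning!");
console.log(JSON.stringify(messages, null, 2));
```

Swapping the system prompt swaps the agent's specialty; the model weights never change.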
4. Think - Reasoning & Problem Solving
think/ | Code | Code Explanation | Concepts
What you'll learn:
- Configuring LLMs for logical reasoning
- Complex quantitative problems
- Limitations of pure LLM reasoning
- When to use external tools
Key concepts: Reasoning agents, problem decomposition, cognitive tasks, reasoning limitations
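One concrete illustration of that boundary: multi-step arithmetic. A model reasons about compound growth in approximate language, while a deterministic helper of the kind an agent would call as a tool computes it exactly (the function below is a hypothetical example, not part of the think example):

```javascript
// Exact where pure LLM arithmetic is approximate: compound growth.
function compoundInterest(principal, annualRate, years) {
  return principal * Math.pow(1 + annualRate, years);
}

// $10,000 at 7% for 30 years - the kind of quantitative question
// an LLM should delegate to a tool rather than estimate token by token.
const value = compoundInterest(10_000, 0.07, 30);
console.log(`$${value.toFixed(2)}`);
```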
5. Batch - Parallel Processing
batch/ | Code | Code Explanation | Concepts
What you'll learn:
- Processing multiple requests concurrently
- Context sequences for parallelism
- GPU batch processing
- Performance optimization
Key concepts: Parallel execution, sequences, batch size, throughput optimization
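The concurrency pattern is ordinary Promise.all; in the real example each call runs on its own context sequence. Here the model call is mocked so the overlap is visible without loading a model:

```javascript
// Mocked inference - stands in for session.prompt() on a dedicated sequence.
async function fakeGenerate(prompt) {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulated latency
  return `response to: ${prompt}`;
}

const prompts = ["Summarize A", "Summarize B", "Summarize C"];

// Promise.all starts all three before awaiting any, so the simulated
// 50ms inferences overlap (~50ms total) instead of running back-to-back (~150ms).
const results = await Promise.all(prompts.map((p) => fakeGenerate(p)));
console.log(results);
```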
6. Coding - Streaming & Response Control
coding/ | Code | Code Explanation | Concepts
What you'll learn:
- Real-time streaming responses
- Token limits and budget management
- Progressive output display
- User experience optimization
Key concepts: Streaming, token-by-token generation, response control, real-time feedback
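Streaming plus a token budget can be sketched with an async generator standing in for the model's token stream (node-llama-cpp surfaces incremental output via callbacks on session.prompt()):

```javascript
// A stand-in for a model's token-by-token output.
async function* fakeTokenStream() {
  const tokens = ["A", " stream", " of", " tokens", " arrives", " one", " by", " one", "."];
  for (const t of tokens) yield t;
}

// Display tokens as they arrive and cut generation off at a budget.
async function streamWithBudget(stream, maxTokens) {
  let used = 0;
  let output = "";
  for await (const token of stream) {
    if (used >= maxTokens) break; // enforce the budget mid-stream
    process.stdout.write(token);  // progressive display for the user
    output += token;
    used += 1;
  }
  return { output, used };
}

const { output, used } = await streamWithBudget(fakeTokenStream(), 5);
console.log(`\n(stopped after ${used} tokens)`);
```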
7. Simple Agent - Function Calling (Tools)
simple-agent/ | Code | Code Explanation | Concepts
What you'll learn:
- Function calling / tool use fundamentals
- Defining tools the LLM can use
- JSON Schema for parameters
- How LLMs decide when to use tools
Key concepts: Function calling, tool definitions, agent decision making, action-taking
This is where text generation becomes agency!
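The mechanics reduce to three parts: tool definitions the model can read, a structured call the model emits, and a dispatcher your code runs. A minimal sketch with an invented getWeather tool:

```javascript
// Tool definitions: the name, description, and JSON-Schema parameters are
// what the model sees; the handler is what your code executes.
const tools = {
  getWeather: {
    description: "Get the current temperature for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
    handler: ({ city }) => ({ city, temperatureC: 21 }), // stubbed data
  },
};

// A structured call, shaped like what a function-calling model emits:
const modelOutput = { tool: "getWeather", arguments: { city: "Berlin" } };

function executeToolCall(call) {
  const tool = tools[call.tool];
  if (!tool) throw new Error(`Unknown tool: ${call.tool}`);
  return tool.handler(call.arguments); // the result goes back to the model
}

const observation = executeToolCall(modelOutput);
console.log(observation);
```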
8. Simple Agent with Memory - Persistent State
simple-agent-with-memory/ | Code | Code Explanation | Concepts
What you'll learn:
- Persisting information across sessions
- Long-term memory management
- Facts and preferences storage
- Memory retrieval strategies
Key concepts: Persistent memory, state management, memory systems, context augmentation
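The core of persistent memory is just state that outlives the process. A stripped-down sketch (the file name and fact format are illustrative, not this repo's memory-manager.js layout):

```javascript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

// Facts live in a JSON file, not in the model's context window,
// so they survive restarts and can be injected into future prompts.
class MemoryStore {
  constructor(filePath) {
    this.filePath = filePath;
    this.facts = fs.existsSync(filePath)
      ? JSON.parse(fs.readFileSync(filePath, "utf8"))
      : {};
  }
  remember(key, value) {
    this.facts[key] = value;
    fs.writeFileSync(this.filePath, JSON.stringify(this.facts, null, 2));
  }
  recall(key) {
    return this.facts[key];
  }
}

const file = path.join(os.tmpdir(), "agent-memory-demo.json");
new MemoryStore(file).remember("userName", "Ada");

// A fresh instance - "a new session" - still sees the stored fact:
const nextSession = new MemoryStore(file);
console.log(nextSession.recall("userName")); // prints "Ada"
```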
9. ReAct Agent - Reasoning + Acting
react-agent/ | Code | Code Explanation | Concepts
What you'll learn:
- ReAct pattern (Reason → Act → Observe)
- Iterative problem solving
- Step-by-step tool use
- Self-correction loops
Key concepts: ReAct pattern, iterative reasoning, observation-action cycles, multi-step agents
This is the foundation of modern agent frameworks!
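The control loop can be shown with the model scripted, so the Reason → Act → Observe cycle itself is the focus (in the real example each step's thought and action come from the LLM):

```javascript
// Scripted "model turns": first request a tool, then answer.
const scriptedModel = [
  { thought: "I need the population of France.", action: "lookup", input: "France" },
  { thought: "I have the figure, so I can answer.", finalAnswer: "About 68 million." },
];

const toolbox = {
  lookup: (query) => (query === "France" ? "68 million (2024 est.)" : "unknown"),
};

function runReAct(modelTurns, tools, maxIterations = 5) {
  const trace = [];
  for (let i = 0; i < Math.min(modelTurns.length, maxIterations); i++) {
    const turn = modelTurns[i];
    trace.push(`Thought: ${turn.thought}`);             // Reason
    if (turn.finalAnswer) {
      trace.push(`Answer: ${turn.finalAnswer}`);
      return { answer: turn.finalAnswer, trace };
    }
    const observation = tools[turn.action](turn.input); // Act
    trace.push(`Observation: ${observation}`);          // Observe: fed back next turn
  }
  return { answer: null, trace }; // iteration cap prevents infinite loops
}

const { answer, trace } = runReAct(scriptedModel, toolbox);
console.log(trace.join("\n"));
```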
10. AoT Agent - Atom of Thought Planning
aot-agent/ | Code | Code Explanation | Concepts
What you'll learn:
- Atom of Thought methodology
- Atomic planning for multi-step computations
- Dependency management between operations
- Structured JSON output for reasoning plans
- Deterministic execution of plans
Key concepts: AoT planning, atomic operations, dependency resolution, plan validation, structured reasoning
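The plan-then-execute split can be sketched as a JSON plan of atomic operations plus a deterministic executor. The plan shape is illustrative, and atoms are assumed to arrive already sorted by dependency:

```javascript
// A plan as a model might emit it: each atom names an operation and its
// arguments, where a string argument refers to another atom's result.
const plan = [
  { id: "a", op: "add", args: [2, 3] },   // 2 + 3
  { id: "b", op: "mul", args: ["a", 4] }, // result(a) * 4
  { id: "c", op: "add", args: ["b", 1] }, // result(b) + 1
];

const ops = {
  add: (x, y) => x + y,
  mul: (x, y) => x * y,
};

// Deterministic execution: no LLM involved once the plan exists.
function executePlan(atoms) {
  const results = {};
  for (const atom of atoms) {
    const resolved = atom.args.map((arg) =>
      typeof arg === "string" ? results[arg] : arg // substitute dependency results
    );
    if (resolved.some((v) => v === undefined)) {
      throw new Error(`Unresolved dependency in atom "${atom.id}"`); // plan validation
    }
    results[atom.id] = ops[atom.op](...resolved);
  }
  return results;
}

const results = executePlan(plan);
console.log(results); // { a: 5, b: 20, c: 21 }
```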
Documentation Structure
Each example folder contains:
- <name>.js - The working code example
- CODE.md - Step-by-step code explanation
  - Line-by-line breakdowns
  - What each part does
  - How it works
- CONCEPT.md - High-level concepts
  - Why it matters for agents
  - Architectural patterns
  - Real-world applications
  - Simple diagrams
Core Concepts
What is an AI Agent?
AI Agent = LLM + System Prompt + Tools + Memory + Reasoning Pattern
            │         │           │        │            │
          Brain    Identity     Hands    State      Strategy
Evolution of Capabilities
1. intro → Basic LLM usage
2. translation → Specialized behavior (system prompts)
3. think → Reasoning ability
4. batch → Parallel processing
5. coding → Streaming & control
6. simple-agent → Tool use (function calling)
7. memory-agent → Persistent state
8. react-agent → Strategic reasoning + tool use
Architecture Patterns
Simple Agent (Steps 1-5)
User → LLM → Response
Tool-Using Agent (Step 6)
User → LLM ⇄ Tools → Response
Memory Agent (Step 7)
User → LLM ⇄ Tools → Response
         ↕
      Memory
ReAct Agent (Step 8)
User → LLM → Think → Act → Observe
        ↑                      │
        └──────────────────────┘
          Iterate until solved
Helper Utilities
PromptDebugger
helper/prompt-debugger.js
Utility for debugging prompts sent to the LLM. Shows exactly what the model sees, including:
- System prompts
- Function definitions
- Conversation history
- Context state
Usage example in simple-agent/simple-agent.js
Project Structure - Fundamentals
ai-agents/
├── README.md                      ← You are here
├── examples/
│   ├── 01_intro/
│   │   ├── intro.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 02_openai-intro/
│   │   ├── openai-intro.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 03_translation/
│   │   ├── translation.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 04_think/
│   │   ├── think.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 05_batch/
│   │   ├── batch.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 06_coding/
│   │   ├── coding.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 07_simple-agent/
│   │   ├── simple-agent.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   ├── 08_simple-agent-with-memory/
│   │   ├── simple-agent-with-memory.js
│   │   ├── memory-manager.js
│   │   ├── CODE.md
│   │   └── CONCEPT.md
│   └── 09_react-agent/
│       ├── react-agent.js
│       ├── CODE.md
│       └── CONCEPT.md
├── helper/
│   └── prompt-debugger.js
├── models/                        ← Place your GGUF models here
└── logs/                          ← Debug outputs
Phase 2: Building a Production Framework (Tutorial)
After mastering the fundamentals above, Phase 2 takes you from scratch examples to production-grade framework design. You'll rebuild core concepts from LangChain and LangGraph to understand how real frameworks work internally.
What You'll Build
A lightweight but complete agent framework with:
- Runnable Interface - the composability pattern that powers everything
- Message System - typed conversation structures (Human, AI, System, Tool)
- Chains - composing multiple operations into pipelines
- Memory - persistent state across conversations
- Tools - function calling and external integrations
- Agents - decision-making loops (ReAct, tool-calling)
- Graphs - state machines for complex workflows (LangGraph concepts)
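The Runnable idea, which the tutorial builds up properly, fits in a few lines: every component exposes the same invoke() method, so anything can pipe into anything. A toy sketch in the spirit of LangChain's interface (the class names here are invented):

```javascript
// One interface for every component: invoke(input) -> output.
class Runnable {
  async invoke(_input) {
    throw new Error("not implemented");
  }
  // pipe() returns a new Runnable that feeds this one's output into the next.
  pipe(next) {
    const self = this;
    return new (class extends Runnable {
      async invoke(input) {
        return next.invoke(await self.invoke(input));
      }
    })();
  }
}

// Two trivial components - stand-ins for a prompt template, an LLM, a parser.
class UppercaseStep extends Runnable {
  async invoke(input) {
    return input.toUpperCase();
  }
}
class ExclaimStep extends Runnable {
  async invoke(input) {
    return `${input}!`;
  }
}

const chain = new UppercaseStep().pipe(new ExclaimStep());
console.log(await chain.invoke("hello")); // prints "HELLO!"
```

Because every step speaks the same interface, a chain is itself a Runnable and can be piped into further chains.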
Learning Approach
Tutorial-first: Step-by-step lessons with exercises
Implementation-driven: Build each component yourself
Framework-compatible: Learn patterns used in LangChain.js
Structure Overview
tutorial/
├── 01-foundation/              # 1. Core Abstractions
│   ├── 01-runnable/
│   │   ├── lesson.md           # Why Runnable matters
│   │   ├── exercises/          # Hands-on practice
│   │   └── solutions/          # Reference implementations
│   ├── 02-messages/            # Structuring conversations
│   ├── 03-llm-wrapper/         # Wrapping node-llama-cpp
│   └── 04-context/             # Configuration & callbacks
│
├── 02-composition/             # 2. Building Chains
│   ├── 01-prompts/             # Template system
│   ├── 02-parsers/             # Structured outputs
│   ├── 03-llm-chain/           # Your first chain
│   ├── 04-piping/              # Composition patterns
│   └── 05-memory/              # Conversation state
│
├── 03-agency/                  # 3. Tools & Agents
│   ├── 01-tools/               # Function definitions
│   ├── 02-tool-executor/       # Safe execution
│   ├── 03-simple-agent/        # Basic agent loop
│   ├── 04-react-agent/         # Reasoning + Acting
│   └── 05-structured-agent/    # JSON mode
│
└── 04-graphs/                  # 4. State Machines
    ├── 01-state-basics/        # Nodes & edges
    ├── 02-channels/            # State management
    ├── 03-conditional-edges/   # Dynamic routing
    ├── 04-executor/            # Running workflows
    ├── 05-checkpointing/       # Persistence
    └── 06-agent-graph/         # Agents as graphs
src/
├── core/      # Runnable, Messages, Context
├── llm/       # LlamaCppLLM wrapper
├── prompts/   # Template system
├── chains/    # LLMChain, SequentialChain
├── tools/     # BaseTool, built-in tools
├── agents/    # AgentExecutor, ReActAgent
├── memory/    # BufferMemory, WindowMemory
└── graph/     # StateGraph, CompiledGraph
Why This Matters
Understanding beats using: When you know how frameworks work internally, you can:
- Debug issues faster
- Customize behavior confidently
- Make architectural decisions wisely
- Build your own extensions
- Read framework source code fluently
Learn once, use everywhere: The patterns you'll learn (Runnable, composition, state machines) apply to:
- LangChain.js - You'll understand their abstractions
- LangGraph.js - You'll grasp state management
- Any agent framework - Same core concepts
- Your own projects - Build custom solutions
Getting Started with Phase 2
After completing the fundamentals (intro → react-agent), start the tutorial:
# Start with the foundation
cd tutorial/01-foundation/01-runnable
lesson.md # Read the lesson
node exercises/01-*.js # Complete exercises
node solutions/01-*-solution.js # Check your work
Each lesson includes:
- Conceptual explanation - why it matters
- Code walkthrough - how to build it
- Exercises - practice implementing
- Solutions - reference code
- Real-world examples - practical usage
Time commitment: ~8 weeks, 3-5 hours/week
What You'll Achieve
By the end, you'll have:
- Built a working agent framework from scratch
- Understood how LangChain/LangGraph work internally
- Mastered composability patterns
- Created reusable components (tools, chains, agents)
- Implemented state machines for complex workflows
- Gained confidence to use or extend any framework
Then: Use LangChain.js in production, knowing exactly what happens under the hood.
Key Takeaways
After Phase 1 (Fundamentals), you'll understand:
- LLMs are stateless: Context must be managed explicitly
- System prompts shape behavior: Same model, different roles
- Function calling enables agency: Tools transform text generators into agents
- Memory is essential: Agents need to remember across sessions
- Reasoning patterns matter: ReAct > simple prompting for complex tasks
- Performance matters: Parallel processing, streaming, token limits
- Debugging is crucial: See exactly what the model receives
After Phase 2 (Framework Tutorial), you'll master:
- The Runnable pattern: Why everything in frameworks uses one interface
- Composition over configuration: Building complex systems from simple parts
- Message-driven architecture: How frameworks structure conversations
- Chain abstraction: Connecting prompts, LLMs, and parsers seamlessly
- Tool orchestration: Safe execution with timeouts and error handling
- Agent execution loops: The mechanics of decision-making agents
- State machines: Managing complex workflows with graphs
- Production patterns: Error handling, retries, streaming, and debugging
What frameworks give you:
Now that you understand the fundamentals, frameworks like LangChain, CrewAI, or AutoGPT provide:
- Pre-built reasoning patterns and agent templates
- Extensive tool libraries and integrations
- Production-ready error handling and retries
- Multi-agent orchestration
- Observability and monitoring
- Community extensions and plugins
You'll use them better because you know what they're doing under the hood.
Additional Resources
- node-llama-cpp: GitHub
- Model Hub: Hugging Face
- GGUF Format: Quantized models for local inference
Contributing
This is a learning resource. Feel free to:
- Suggest improvements to documentation
- Add more example patterns
- Fix bugs or unclear explanations
- Share what you built!
License
Educational resource - use and modify as needed for learning.
Built with ❤️ for people who want to truly understand AI agents
Start with intro/ and work your way through. Each example builds on the previous one. Read both CODE.md and CONCEPT.md for full understanding.
Happy learning!