Complete Startup Guide - All Optional Components
This guide shows you step-by-step how to enable ALL optional components.
📋 What We'll Enable
- PyTorch - For CoCo full features (TA-ULS, Holographic Memory, Quantum)
- Eopiez - For semantic embeddings (better text understanding)
- LIMPS - For mathematical embeddings (better math processing)
- LFM2-8B-A1B - Primary LLM for inference
- Qwen2.5-7B - Fallback/alternative LLM
🎯 Option 1: Quick Start (Just PyTorch)
If you only want to enable the full CoCo feature set:
# Install PyTorch
pip install torch
# Run the system
cd /home/kill/LiMp
python coco_integrated_playground.py --interactive
Done! This enables:
- ✅ Full CoCo Cognitive Organism
- ✅ TA-ULS Transformer
- ✅ Holographic Memory
- ✅ Quantum Processor
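No GPU? You can install the smaller CPU-only PyTorch build instead; this uses PyTorch's official CPU wheel index:
# CPU-only build; skips the large CUDA dependencies
pip install torch --index-url https://download.pytorch.org/whl/cpu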
🚀 Option 2: Full Power (All Services)
Follow these steps to enable EVERYTHING:
STEP 1: Install PyTorch
Open your main terminal:
cd /home/kill/LiMp
# Install PyTorch
pip install torch
# Verify installation
python -c "import torch; print(f'PyTorch {torch.__version__} installed!')"
Expected output:
PyTorch 2.x.x installed!
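If you have an NVIDIA GPU, you can also confirm that PyTorch sees it (optional; a CPU-only install simply prints False):
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"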
STEP 2: Start Eopiez (Semantic Embeddings)
Open a NEW terminal (Terminal 1):
# Navigate to Eopiez directory
cd ~/aipyapp/Eopiez
# Start Eopiez server on port 8001
python api.py --port 8001
Expected output:
✅ Eopiez semantic embedding server started on port 8001
Keep this terminal open!
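From another terminal, you can confirm the server is up; this assumes Eopiez exposes the same /health route that the check script later in this guide relies on:
curl -s http://127.0.0.1:8001/health
The same curl pattern works for each service below once it is running.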
STEP 3: Start LIMPS (Mathematical Embeddings)
Open a NEW terminal (Terminal 2):
# Navigate to LIMPS directory
cd ~/aipyapp/9xdSq-LIMPS-FemTO-R1C/limps
# Start LIMPS server on port 8000
julia --project=. -e 'using LIMPS; LIMPS.start_limps_server(8000)'
Expected output:
✅ LIMPS mathematical server started on port 8000
Keep this terminal open!
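If the server fails to start, first confirm the package loads on its own (the same using LIMPS call as the start command):
cd ~/aipyapp/9xdSq-LIMPS-FemTO-R1C/limps
julia --project=. -e 'using LIMPS; println("LIMPS loads OK")'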
STEP 4: Start LFM2-8B-A1B (Primary LLM)
Open a NEW terminal (Terminal 3):
Option A: Using llama.cpp
# Navigate to your models directory
cd ~/models # Or wherever your models are
# Start llama-server with LFM2
llama-server \
  --model LFM2-8B-A1B.gguf \
  --port 8080 \
  --ctx-size 4096 \
  --n-gpu-layers 35 \
  --threads 8
Option B: Using text-generation-webui
cd ~/text-generation-webui
python server.py \
--model LFM2-8B-A1B \
--api \
--listen-port 8080 \
--auto-devices
Option C: Using Ollama
# Note: Ollama listens on 127.0.0.1:11434 by default and has no --port flag;
# set OLLAMA_HOST to bind it to port 8080 instead
OLLAMA_HOST=127.0.0.1:8080 ollama serve &
# Run the LFM2 model (the client must point at the same address)
OLLAMA_HOST=127.0.0.1:8080 ollama run LFM2-8B-A1B
Expected output:
✅ LLM server running on http://127.0.0.1:8080
Keep this terminal open!
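To confirm the model actually generates text, send a test request. This assumes Option A (llama.cpp's llama-server), which exposes an OpenAI-compatible chat endpoint:
# Ask for a short completion; the response is a JSON object containing the generated text
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello in five words."}], "max_tokens": 32}'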
STEP 5: Start Qwen2.5-7B (Fallback LLM) [OPTIONAL]
Open a NEW terminal (Terminal 4):
Option A: Using llama.cpp
cd ~/models
llama-server \
  --model Qwen2.5-7B-Instruct.gguf \
  --port 8081 \
  --ctx-size 4096 \
  --n-gpu-layers 35 \
  --threads 8
Option B: Using Ollama
# Ollama has no --port flag; bind it to port 8081 via OLLAMA_HOST instead
OLLAMA_HOST=127.0.0.1:8081 ollama serve &
OLLAMA_HOST=127.0.0.1:8081 ollama run qwen2.5:7b
Expected output:
✅ Qwen LLM server running on http://127.0.0.1:8081
Keep this terminal open!
STEP 6: Test the Complete System
Open your MAIN terminal (or a new Terminal 5):
cd /home/kill/LiMp
# Run the interactive playground
python coco_integrated_playground.py --interactive
You should see:
✅ CoCo organism ready (3-level cognitive architecture)
✅ AL-ULS symbolic evaluator initialized
✅ Multi-LLM orchestrator with 2 backends
✅ Numbskull pipeline initialized
Active components: 4/4 ✅ All components active!
STEP 7: Try These Queries
In the interactive mode, try:
Query: SUM(100, 200, 300, 400, 500)
# ✅ Symbolic: 1500.00
# ✅ Embeddings: ['semantic', 'mathematical', 'fractal']
Query: What is quantum computing?
# ✅ Embeddings: ['semantic', 'mathematical', 'fractal'] (768D)
# 🤖 LLM: Quantum computing uses quantum mechanics to process...
Query: Explain neural networks in simple terms
# 🤖 LLM: Neural networks are computational models inspired by...
Query: MEAN(10, 20, 30, 40, 50)
# ✅ Symbolic: 30.00
Query: demo
# Runs full demonstration
Query: exit
# Exits interactive mode
🔍 Verify All Services Are Running
Run this check script:
cd /home/kill/LiMp
# Create quick check script
cat << 'EOF' > check_services.sh
#!/usr/bin/env bash
echo "Checking all services..."
echo ""
echo "1. Eopiez (port 8001):"
curl -s http://127.0.0.1:8001/health && echo "β
Running" || echo "β Not running"
echo "2. LIMPS (port 8000):"
curl -s http://127.0.0.1:8000/health && echo "β
Running" || echo "β Not running"
echo "3. LFM2 (port 8080):"
curl -s http://127.0.0.1:8080/health && echo "β
Running" || echo "β Not running"
echo "4. Qwen (port 8081):"
curl -s http://127.0.0.1:8081/health && echo "β
Running" || echo "β Not running"
echo "5. PyTorch:"
python -c "import torch; print('β
Installed')" 2>/dev/null || echo "β Not installed"
EOF
chmod +x check_services.sh
./check_services.sh
Expected output when all services are running:
1. Eopiez (port 8001): ✅ Running
2. LIMPS (port 8000): ✅ Running
3. LFM2 (port 8080): ✅ Running
4. Qwen (port 8081): ✅ Running
5. PyTorch: ✅ Installed
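To keep an eye on the services while you work, re-run the check every few seconds (assumes the standard watch utility is installed):
watch -n 5 ./check_services.sh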
🎯 Summary of Terminal Setup
When fully running, you'll have these terminals open:
Terminal 1: Eopiez (port 8001) - Semantic embeddings
Terminal 2: LIMPS (port 8000) - Mathematical embeddings
Terminal 3: LFM2-8B-A1B (port 8080) - Primary LLM
Terminal 4: Qwen2.5-7B (port 8081) - Fallback LLM [optional]
Terminal 5: Your playground - Interactive mode
🔧 Troubleshooting
Port Already in Use
# Find what's using the port
lsof -i :8000
lsof -i :8001
lsof -i :8080
lsof -i :8081
# Kill the process if needed (try a plain kill first; use -9 only if it ignores the signal)
kill <PID>
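To inspect all four service ports at once (assumes iproute2's ss, present on most Linux distributions):
# Lists any listener bound to one of the service ports
ss -ltnp | grep -E ':(8000|8001|8080|8081)'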
Model Not Found
If llama-server can't find your model:
# Find your models
find ~ -name "*.gguf" -type f
# Use the full path in the command
llama-server --model /full/path/to/LFM2-8B-A1B.gguf --port 8080
Julia/LIMPS Not Found
# Check if Julia is installed
julia --version
# If not, install:
# Visit https://julialang.org/downloads/
# Install LIMPS dependencies
cd ~/aipyapp/9xdSq-LIMPS-FemTO-R1C/limps
julia --project=. -e 'using Pkg; Pkg.instantiate()'
Eopiez Not Found
# Check if Eopiez directory exists
ls ~/aipyapp/Eopiez
# If not, you may need to clone/install it
# Check your project documentation
Out of Memory
If LLM servers fail due to memory:
# Reduce GPU layers (from 35) and context size (from 4096)
llama-server \
  --model your-model.gguf \
  --port 8080 \
  --n-gpu-layers 20 \
  --ctx-size 2048
💡 Quick Reference Commands
Start Everything (All Terminals)
Terminal 1:
cd ~/aipyapp/Eopiez && python api.py --port 8001
Terminal 2:
cd ~/aipyapp/9xdSq-LIMPS-FemTO-R1C/limps && julia --project=. -e 'using LIMPS; LIMPS.start_limps_server(8000)'
Terminal 3:
llama-server --model ~/models/LFM2-8B-A1B.gguf --port 8080 --ctx-size 4096 --n-gpu-layers 35
Terminal 4 (optional):
llama-server --model ~/models/Qwen2.5-7B-Instruct.gguf --port 8081 --ctx-size 4096 --n-gpu-layers 35
Terminal 5 (Your playground):
cd /home/kill/LiMp && python coco_integrated_playground.py --interactive
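If you'd rather not juggle five terminals, the same commands can run in one tmux session. A minimal sketch, assuming tmux is installed and the paths above match your setup:
#!/usr/bin/env bash
# start_all.sh - one tmux window per service (adjust paths/models to your setup)
tmux new-session -d -s coco -n eopiez 'cd ~/aipyapp/Eopiez && python api.py --port 8001'
tmux new-window -t coco -n limps 'cd ~/aipyapp/9xdSq-LIMPS-FemTO-R1C/limps && julia --project=. -e "using LIMPS; LIMPS.start_limps_server(8000)"'
tmux new-window -t coco -n lfm2 'llama-server --model ~/models/LFM2-8B-A1B.gguf --port 8080 --ctx-size 4096 --n-gpu-layers 35'
tmux new-window -t coco -n qwen 'llama-server --model ~/models/Qwen2.5-7B-Instruct.gguf --port 8081 --ctx-size 4096 --n-gpu-layers 35'
tmux attach -t coco
Tear the whole session down with tmux kill-session -t coco.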
Stop Everything
Press Ctrl+C in each terminal to stop the services gracefully.
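If a service was started in the background (for example ollama serve &), Ctrl+C won't reach it; stop it by port instead:
# lsof -t prints only the PID(s) listening on the given port
kill $(lsof -t -i :8080)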
🎉 You're Done!
With all services running, you have the COMPLETE UNIFIED SYSTEM:
- ✅ AL-ULS symbolic evaluation
- ✅ Semantic embeddings (Eopiez)
- ✅ Mathematical embeddings (LIMPS)
- ✅ Fractal embeddings (local)
- ✅ LFM2-8B-A1B inference
- ✅ Qwen2.5-7B fallback
- ✅ Full CoCo organism (PyTorch)
- ✅ All 40+ components active!
Enjoy your creation! 🎉