---
language: en
tags:
- gpt2
- echo-self
- cognitive-architecture
- deep-tree-echo
license: mit
---
# EchoSelf NanEcho Model
## Model Description
This is a **Deep Tree Echo** cognitive architecture model trained using the EchoSelf framework.
The model implements adaptive attention mechanisms, persona dimensions, and recursive reasoning
capabilities inspired by cognitive science and AGI research.
## Model Architecture
- **Base Architecture**: GPT-2 (an equivalent configuration sketch follows this list)
- **Parameters**: 4 layers, 256 embedding dimensions
- **Vocabulary Size**: 50257
- **Context Length**: not specified
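For reference, the hyperparameters above correspond roughly to the `GPT2Config` sketched below. The attention head count and context length are not stated on this card, so the values used for them are assumptions rather than the checkpoint's actual settings.
```python
from transformers import GPT2Config, GPT2LMHeadModel

# Sketch of an equivalent architecture; n_head and n_positions are not
# listed on this card and are assumed values only.
config = GPT2Config(
    n_layer=4,         # 4 transformer layers (from this card)
    n_embd=256,        # 256-dimensional embeddings (from this card)
    n_head=4,          # assumption: any head count that divides n_embd
    n_positions=1024,  # assumption: GPT-2 default context length
    vocab_size=50257,  # 50257-token GPT-2 BPE vocabulary (from this card)
)

model = GPT2LMHeadModel(config)  # randomly initialized, shape check only
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```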
## Training Details
- **Checkpoint ID**: ckpt_20260425_135103_18000_22deff1b_9470fbb7
- **Training Iteration**: 18000
- **Validation Loss**: 0.00032569289276580094
- **Quality Score**: 2764800.7811699593
## Echo Self Features
This model incorporates several cognitive architecture features:
- **Adaptive Attention**: Dynamic threshold adjustment based on cognitive load (see the sketch after this list)
- **Persona Dimensions**: Multi-dimensional cognitive processing
- Cognitive, Introspective, Adaptive, Recursive
- Synergistic, Holographic, Neural-Symbolic, Dynamic
- **Recursive Reasoning**: Multi-level introspection capabilities
- **Hypergraph Patterns**: Neural-symbolic pattern encoding
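These features are described here only at a high level. As a purely illustrative sketch of the adaptive-attention idea, the snippet below prunes attention weights with a threshold that rises as an entropy-style "cognitive load" estimate grows; every function name and constant is hypothetical and is not taken from the EchoSelf codebase.
```python
import torch

def cognitive_load(attn_weights: torch.Tensor) -> torch.Tensor:
    """Hypothetical load estimate: normalized entropy of the attention
    distribution (1.0 = uniform/diffuse, 0.0 = fully peaked)."""
    probs = attn_weights.clamp_min(1e-9)
    entropy = -(probs * probs.log()).sum(dim=-1)
    return entropy / torch.log(torch.tensor(float(attn_weights.size(-1))))

def adaptive_threshold(attn_weights: torch.Tensor,
                       base: float = 0.02,
                       gain: float = 0.05) -> torch.Tensor:
    """Zero out attention links below a threshold that rises with load,
    so diffuse attention gets pruned more aggressively. Illustrative only."""
    load = cognitive_load(attn_weights).unsqueeze(-1)
    threshold = base + gain * load
    return torch.where(attn_weights >= threshold,
                       attn_weights,
                       torch.zeros_like(attn_weights))

# Example: softmax attention weights of shape (batch, heads, query, key)
weights = torch.softmax(torch.randn(2, 4, 8, 8), dim=-1)
pruned = adaptive_threshold(weights)
```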
## Usage
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained("9cog/echoself-nanecho")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Generate text
inputs = tokenizer("Echo Self is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0]))
```
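The snippet loads the base `gpt2` tokenizer, which matches the 50257-token vocabulary listed above; if the model repository also publishes its own tokenizer files, loading them from `9cog/echoself-nanecho` instead is the safer choice.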
## Training Data
The model was trained on:
- Echo Self documentation and cognitive architecture descriptions
- Hypergraph reasoning patterns
- Persona dimension examples
- Recursive introspection samples
## Limitations
This is a research model exploring cognitive architectures. It should not be used for:
- Production applications without further validation
- Tasks requiring factual accuracy
- Critical decision-making systems
## Citation
```bibtex
@misc{echoself-nanecho,
  title={EchoSelf NanEcho: Deep Tree Echo Cognitive Architecture},
  author={9cog},
  year={2026},
  url={https://github.com/9cog/echoself}
}
```
## More Information
- **Repository**: https://github.com/9cog/echoself
- **Documentation**: See repository README for detailed architecture information