Reinforcement Learning
Transformers
PyTorch
English
brain-inspired
spiking-neural-network
biologically-plausible
modular-architecture
vision-language
curriculum-learning
cognitive-architecture
artificial-general-intelligence
Instructions to use Almusawee/ModularBrainAgent with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Almusawee/ModularBrainAgent with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Almusawee/ModularBrainAgent", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
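As a minimal sketch of the loading step above, the snippet below wraps the `AutoModel.from_pretrained` call in a small helper. The repo id `Almusawee/ModularBrainAgent` comes from this card; the helper name and the assumption that the checkpoint resolves through `AutoModel` are illustrative, not confirmed by the card.

```python
# Hedged sketch: helper for loading Almusawee/ModularBrainAgent with the
# Hugging Face Transformers library (pip install transformers).

REPO_ID = "Almusawee/ModularBrainAgent"  # repo id taken from the model card


def load_model(repo_id: str = REPO_ID, dtype: str = "auto"):
    """Download and return the model weights from the Hugging Face Hub.

    `dtype="auto"` lets Transformers pick the dtype stored in the checkpoint.
    The import is done lazily so this module loads even without transformers
    installed; the first call downloads the weights.
    """
    from transformers import AutoModel

    return AutoModel.from_pretrained(repo_id, dtype=dtype)


# Usage (downloads the checkpoint on first call):
# model = load_model()
```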
Rename file_00000000a28062469687ed81677ca448.png to SynCo_architecture.png

.gitattributes
CHANGED

```
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 file_00000000a28062469687ed81677ca448.png filter=lfs diff=lfs merge=lfs -text
+SynCo_architecture.png filter=lfs diff=lfs merge=lfs -text
```

file_00000000a28062469687ed81677ca448.png → SynCo_architecture.png
RENAMED (file without changes)