turtle170 committed
Commit f6f0755 · verified · 1 parent: febbf65

Update README.md

Files changed (1): README.md (+62 −3)
---
language: en
license: apache-2.0
library_name: jax
tags:
- deep-reinforcement-learning
- resnet
- multi-agent-systems
- tpu
- swarm-intelligence
datasets:
- competitive-foraging-sim
metrics:
- multi-agent-survival
- resnet-efficiency
---

# Large-DeepRL (ResNet Edition)

**Large-DeepRL** is a high-capacity, multi-agent reinforcement learning model. It represents the "Predator" tier of the DeepRL evolution series, utilizing a **Residual Network (ResNet)** architecture to navigate a 128x128 high-resolution arena.

## 📊 Model Profile

| Feature | Specification |
| :--- | :--- |
| **Architecture** | Deep ResNet (Residual Skip Connections) |
| **Grid Resolution** | 128x128 (16,384 spatial cells) |
| **Parameters** | ~185,000 (~740 KB) |
| **Agents** | 10 Competing Seeds per Environment |
| **Input Channels** | 8 (Life, Food, Lava, 5x Signaling/Memory) |
| **Training Steps** | Overnight Evolution (Gen 50k+) |
| **Compute** | 16x Google Cloud TPU v5e (TRC Program) |

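As a sanity check for the input pipeline, the eight channels from the table above can be stacked into a single observation tensor. Only the channel count and grid size come from this card; the channel order and channels-last layout used here are assumptions for illustration:

```python
import numpy as np

H = W = 128  # grid resolution from the Model Profile

# One plane per channel; a real observation would be filled in by the simulator.
life = np.zeros((H, W), dtype=np.float32)
food = np.zeros((H, W), dtype=np.float32)
lava = np.zeros((H, W), dtype=np.float32)
signals = [np.zeros((H, W), dtype=np.float32) for _ in range(5)]  # signaling/memory

# Stack channels-last (assumed layout): shape (128, 128, 8)
obs = np.stack([life, food, lava, *signals], axis=-1)
```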
## 🧬 Architectural Breakthroughs

This model moves beyond simple convolutions by implementing **Skip Connections**, allowing the gradient to flow through deeper layers without vanishing.

- **Global Spatial Reasoning:** The 128x128 grid provides 4x the territory of the Standard model, requiring the agent to plan long-distance paths.
- **Multi-Agent Competition:** Trained in a "scarcity" environment where 10 agents compete for limited food patches. This forces the emergence of aggressive, high-speed foraging behaviors.
- **8-Channel Alignment:** Optimized for TPU HBM alignment, ensuring maximum hardware utilization and zero memory padding bloat.

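The skip-connection pattern can be sketched in a few lines of NumPy. This is a minimal illustration of the residual idea (a per-cell 1x1 transform with ReLU); the model's actual kernel sizes and activation function are not specified in this card:

```python
import numpy as np

def resnet_block(x, w):
    """One residual block: per-cell linear transform, skip connection, ReLU.

    x: (H, W, C) feature map; w: (C, C) mixing matrix acting as a 1x1 conv.
    """
    h = x @ w                  # transform each spatial cell independently
    h = h + x                  # skip connection: the input bypasses the transform
    return np.maximum(h, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128, 64))   # feature map at the card's grid size
w = rng.standard_normal((64, 64)) * 0.01  # small weights keep the residual dominant
y = resnet_block(x, w)
```

Because the input is added back unchanged, setting `w` to all zeros reduces the block to `ReLU(x)`, which is exactly what lets gradients flow through deep stacks without vanishing.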
## 🚀 Deployment (Inference)

While technically runnable on high-end CPUs, this model is specifically targeted at **Low-End GPUs** to maintain real-time performance.

### Hardware Target: "GPU Tier"

- **Minimum GPU:** NVIDIA T4, RTX 3050, or equivalent.
- **Alternative:** High-end multi-core CPUs (AMD Ryzen 9 / Intel i9).
- **RAM:** 16GB minimum recommended.

## 🛠️ Loading the DNA

The model is saved as a structured NumPy object array. Note the 8-channel input requirement when setting up your inference environment.

```python
import numpy as np

# Load the Large-DeepRL DNA (Apache 2.0)
dna = np.load("Large-DeepRL.npy", allow_pickle=True)

# Architecture Structure:
# - Entry Convolution (64 filters)
# - ResNet Block 1 (Add + Activation)
# - ResNet Block 2 (Add + Activation)
# - 1x1 Strategy Head
# - 1x1 Decision Output
```
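Once loaded, the object array can be inspected before wiring up inference. The helper below is a sketch: the stand-in shapes mirror the five-stage layout listed above, but every individual shape is an assumption, so run the helper against the real `Large-DeepRL.npy` to see the true layout:

```python
import numpy as np

def summarize_dna(dna):
    """List (index, shape, parameter count) for each array in the checkpoint."""
    layers = []
    for i, w in enumerate(dna):
        arr = np.asarray(w, dtype=np.float32)
        layers.append((i, arr.shape, arr.size))
    total = sum(size for _, _, size in layers)
    return layers, total

# Synthetic stand-in with the five-stage layout; all shapes here are illustrative.
fake = np.empty(5, dtype=object)
fake[0] = np.zeros((3, 3, 8, 64))    # entry convolution: 8 input channels -> 64 filters
fake[1] = np.zeros((3, 3, 64, 64))   # ResNet block 1
fake[2] = np.zeros((3, 3, 64, 64))   # ResNet block 2
fake[3] = np.zeros((1, 1, 64, 32))   # 1x1 strategy head (width assumed)
fake[4] = np.zeros((1, 1, 32, 5))    # 1x1 decision output (action count assumed)

layers, total = summarize_dna(fake)
for i, shape, size in layers:
    print(f"layer {i}: shape={shape}, params={size}")
```

Swap `fake` for `np.load("Large-DeepRL.npy", allow_pickle=True)` to audit the real checkpoint and confirm the parameter count against the ~185,000 figure in the Model Profile.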