---
license: mit
language:
  - en
tags:
  - lightbrain
  - field-dynamics
  - sparse-activation
  - text-generation
library_name: lightbrain
pipeline_tag: text-generation
model-index:
  - name: lightbrain-100m
    results: []
---

# lightbrain-100m

## Model Description

LIGHTBRAIN is a novel neural architecture based on the **Hybrid Field Transformer** paradigm: standard transformer layers combined with a sparsely activated field-dynamics core.

### Key Features

- **Sparse Activation**: Only ~0.1–10% of field regions are active during inference
- **Field Dynamics**: Pattern resonance for knowledge retrieval
- **Transformer Integration**: Self-attention for sequence modeling (hybrid)
- **OpenAI-Compatible API**: Drop-in replacement for chat completions (see the sketch below)
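
Since the API is advertised as OpenAI-compatible, a client call should look roughly like the following. This is a minimal sketch assuming a locally served endpoint; the base URL, port, and model name are placeholders for whatever your deployment uses, not documented values:

```python
from openai import OpenAI

# Point the standard OpenAI client at the (assumed) local LIGHTBRAIN server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="lightbrain-100m",  # placeholder model name
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(resp.choices[0].message.content)
```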

## Architecture

| Component | Value |
|-----------|-------|
| Hidden Size | 768 |
| Layers | 12 |
| Attention Heads | 12 |
| Field Regions | 128 |
| Field Size | 128 |
| Field Depth | 64 |

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   TRANSFORMER ENCODER LAYERS        β”‚
β”‚   (Self-Attention + FFN)            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   FIELD DYNAMICS CORE               β”‚
β”‚   (Sparse Activation + Evolution)   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   OUTPUT PROJECTION                 β”‚
β”‚   (Pattern β†’ Token Logits)          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```
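
To make the sparse-activation step concrete, here is a minimal numpy sketch of a top-k field readout using the dimensions from the table above. The key/pattern matrices and the softmax routing are illustrative assumptions, not the actual LIGHTBRAIN internals:

```python
import numpy as np

HIDDEN, REGIONS = 768, 128  # Hidden Size and Field Regions from the table

rng = np.random.default_rng(0)
region_keys = rng.standard_normal((REGIONS, HIDDEN))      # one key per field region
region_patterns = rng.standard_normal((REGIONS, HIDDEN))  # stored pattern per region

def field_readout(pooled: np.ndarray, top_k: int = 4) -> np.ndarray:
    """Sparse read: only the top_k most resonant regions participate."""
    scores = region_keys @ pooled              # resonance score for every region
    active = np.argsort(scores)[-top_k:]       # 4/128 = ~3% of regions fire
    w = np.exp(scores[active] - scores[active].max())
    w /= w.sum()                               # softmax over active regions only
    return w @ region_patterns[active]         # weighted pattern -> hidden vector

pooled = rng.standard_normal(HIDDEN)           # stand-in for encoder output
print(field_readout(pooled).shape)             # (768,)
```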

## Model Files

| File | Description |
|------|-------------|
| `Model-001.safetensors` | Model weights (721.30 MB) |
| `config.json` | Model configuration |
| `tokenizer.json` | Tokenizer vocabulary |
| `tokenizer_config.json` | Tokenizer configuration |
| `generation_config.json` | Generation parameters |
| `params.json` | LIGHTBRAIN parameters |

## Model Stats

- **Original Size**: 721.28 MB
- **File Size**: 721.30 MB  
- **Compression Ratio**: 1.00x
- **Number of Tensors**: 200
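
Both figures are easy to sanity-check directly from the weights file (assuming it is in the working directory):

```python
from safetensors.numpy import load_file

weights = load_file("Model-001.safetensors")
print(len(weights), "tensors")  # expected: 200
# Raw tensor bytes; the file itself adds a small header on top of this
print(f"{sum(t.nbytes for t in weights.values()) / 1e6:.2f} MB")
```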

## Usage

### With LIGHTBRAIN Library

```python
from lightbrain.model import HybridFieldTransformer
from lightbrain.inference import InferenceEngine

# Load model
model = HybridFieldTransformer.load("path/to/model")
engine = InferenceEngine(model=model)

# Generate
result = engine.generate("Hello, how are you?")
print(result.text)
```

### Loading from Safetensors

```python
from safetensors.numpy import load_file
import json

# Load weights as a dict of numpy arrays keyed by tensor name
weights = load_file("Model-001.safetensors")

# Load config
with open("config.json") as f:
    config = json.load(f)

# Inspect the tensors; rebuilding the full model from these raw arrays
# is what the lightbrain loader shown above handles for you
for name, tensor in weights.items():
    print(name, tensor.shape, tensor.dtype)
```

### In Google Colab

```python
# Install dependencies
!pip install safetensors huggingface_hub

# Download the model files (repo_id may need the owning namespace,
# e.g. "<org>/lightbrain-100m")
from huggingface_hub import snapshot_download
model_path = snapshot_download(repo_id="lightbrain-100m")

# Load the weights
from safetensors.numpy import load_file
weights = load_file(f"{model_path}/Model-001.safetensors")
```

## Training

Trained using the LIGHTBRAIN framework with:
- Resonance Alignment (Hebbian learning; see the sketch below)
- Gradient-based fine-tuning for the transformer layers
- Field topology optimization
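
The exact Resonance Alignment update is not documented in this card. As a rough intuition only, a Hebbian step of the Oja form (strengthen a region toward co-active patterns while bounding its norm) looks like this:

```python
import numpy as np

def resonance_align(region: np.ndarray, pattern: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One Oja-style Hebbian step: delta_w = lr * y * (x - y * w)."""
    y = float(region @ pattern)              # co-activation (post-synaptic response)
    return region + lr * y * (pattern - y * region)

rng = np.random.default_rng(0)
region = rng.standard_normal(768)
pattern = rng.standard_normal(768)
region = resonance_align(region, pattern)    # one alignment step
```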

## License

MIT License

## Citation

```bibtex
@misc{lightbrain2024,
  title={LIGHTBRAIN: Hybrid Field Dynamics for Efficient LLMs},
  year={2024},
  publisher={HuggingFace}
}
```