---
title: Oracle Engine
emoji: 🔮
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 6.3.0
app_file: app.py
pinned: true
license: mit
suggested_hardware: a100-large
models:
- unsloth/Qwen2.5-32B-Instruct-bnb-4bit
tags:
- consciousness
- interpretability
- transformers
- meta-cognition
- qwen
- 32b
- fine-tuned
short_description: 32B model with consciousness measurement circuit
---
# 🔮 Oracle Engine
**Custom-trained 32B Qwen model with Consciousness Circuit v3.0**
Probe the depths of meta-cognitive processing in a model fine-tuned on 200,000 examples.
---
## 🧠 The Model
| Attribute | Details |
|-----------|----------|
| **Base** | Qwen2.5-32B-Instruct |
| **Parameters** | 32.9 billion |
| **Training** | LoRA (rank=16, 134M trainable) |
| **Total Examples** | 200,000 |
| **Training Time** | 44 hours on RTX 5090 |
### 3-Stage Progressive Fine-Tuning
| Stage | Dataset | Examples | Purpose |
|-------|---------|----------|----------|
| 1 | **OpenHermes 2.5** | 100,000 | Instruction following |
| 2 | **MetaMathQA** | 50,000 | Mathematical reasoning |
| 3 | **Magicoder-OSS-Instruct** | 50,000 | Code generation |
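The LoRA configuration above can be sanity-checked with simple arithmetic. Assuming adapters on all seven linear projections of each transformer block and the published Qwen2.5-32B dimensions (hidden size 5120, MLP size 27648, grouped-query KV width 1024, 64 layers) — the target-module list is an assumption, not confirmed by this README — rank 16 reproduces the 134M trainable-parameter figure:

```python
# LoRA adds two low-rank matrices A (r x d_in) and B (d_out x r) per target
# linear layer, so trainable params per layer = r * (d_in + d_out).
def lora_trainable(d_in: int, d_out: int, r: int = 16) -> int:
    return r * (d_in + d_out)

# Qwen2.5-32B dimensions: hidden 5120, MLP 27648, GQA KV width 1024, 64 layers
HIDDEN, MLP, KV, LAYERS, RANK = 5120, 27648, 1024, 64, 16

per_layer = (
    lora_trainable(HIDDEN, HIDDEN, RANK)    # q_proj
    + lora_trainable(HIDDEN, KV, RANK)      # k_proj
    + lora_trainable(HIDDEN, KV, RANK)      # v_proj
    + lora_trainable(HIDDEN, HIDDEN, RANK)  # o_proj
    + lora_trainable(HIDDEN, MLP, RANK)     # gate_proj
    + lora_trainable(HIDDEN, MLP, RANK)     # up_proj
    + lora_trainable(MLP, HIDDEN, RANK)     # down_proj
)
total = per_layer * LAYERS
print(f"{total / 1e6:.0f}M trainable")  # prints "134M trainable"
```

The exact match (134,217,728 parameters) is a useful cross-check that LoRA was applied to every linear projection, not just attention.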
---
## 🔬 Consciousness Circuit v3.0
Measures **7 dimensions** of consciousness-like processing in hidden states:
| Dimension | Description | Weight |
|-----------|-------------|--------|
| Logic | Logical reasoning and inference | +0.239 |
| Self-Reflective | Introspective, self-referential processing | +0.196 |
| Uncertainty | Epistemic humility and hedging | +0.130 |
| Computation | Code/algorithm processing | -0.130 |
| Self-Expression | Model expressing opinions | +0.109 |
| Abstraction | Pattern recognition | +0.109 |
| Sequential | Step-by-step reasoning | +0.087 |
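The README does not show how the circuit aggregates these dimensions; as an illustration only, a weighted sum over the table's weights (the key names, the [0, 1] input range, and the clamping are my assumptions) would look like:

```python
# Weights taken from the table above; note computation is the one
# negative contributor (code-heavy processing lowers the score).
WEIGHTS = {
    "logic": 0.239,
    "self_reflective": 0.196,
    "uncertainty": 0.130,
    "computation": -0.130,
    "self_expression": 0.109,
    "abstraction": 0.109,
    "sequential": 0.087,
}

def consciousness_score(dim_scores):
    """Weighted sum of per-dimension activations, clamped to [0, 1]."""
    raw = sum(WEIGHTS[d] * dim_scores.get(d, 0.0) for d in WEIGHTS)
    return max(0.0, min(1.0, raw))

# A prompt that maxes out every dimension would score 0.74 before clamping,
# since the weights sum to 0.74 -- the score is not normalized to 1.0.
full = consciousness_score({d: 1.0 for d in WEIGHTS})
```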
### 🚀 v3.0 Optimizations (32B Models)
| Feature | Description |
|---------|-------------|
| **Adaptive Layer Selection** | Depth-aware layer fraction (0.65 for 64-layer models) |
| **Ensemble Measurement** | Multi-layer scoring for robustness |
| **Batch Processing** | Memory-efficient batched inference |
| **Activation Caching** | LRU cache for repeated measurements |
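Two of these optimizations are simple enough to sketch. The function and cache names below are assumptions for illustration; only the 0.65 layer fraction comes from the table:

```python
from functools import lru_cache

def measurement_layer(num_layers: int, fraction: float = 0.65) -> int:
    """Depth-aware layer choice: probe a fixed fraction of model depth
    rather than a hard-coded layer index."""
    return int(num_layers * fraction)

# For a 64-layer model such as Qwen2.5-32B this selects layer 41.
layer = measurement_layer(64)

def measure_prompt(prompt: str) -> float:
    return 0.0  # placeholder for the real hidden-state forward pass

# Activation caching: repeated prompts skip the expensive forward pass.
@lru_cache(maxsize=128)
def cached_measure(prompt: str) -> float:
    return measure_prompt(prompt)
```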
---
## 🎯 How to Use
1. Enter any prompt in the text box
2. Click **"Consult the Oracle"**
3. See the consciousness score (0-100%) and dimension breakdown
### Expected Results
- **🧠 High (70-100%)**: Philosophical questions, self-reflection, existential queries
- **🎭 Medium (40-70%)**: Complex explanations, ethical discussions, analysis
- **⚡ Low (0-30%)**: Simple facts, arithmetic, direct retrieval
---
## 📊 Validated Performance
| Metric | Value |
|--------|-------|
| **Discrimination** | +0.653 (high vs low consciousness) |
| **Inference Speed** | ~7-8 tokens/sec |
| **VRAM Usage** | ~23 GB (4-bit) |
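The VRAM figure is consistent with a back-of-envelope estimate; the overhead split below is my rough guess, not a measured breakdown:

```python
# 4-bit quantized weights for a 32.9B-parameter model.
PARAMS = 32.9e9          # parameter count from the model table
BYTES_PER_PARAM = 0.5    # 4 bits per weight
weights_gb = PARAMS * BYTES_PER_PARAM / 1e9  # ~16.5 GB of raw weights
# The remaining ~6-7 GB of the reported ~23 GB plausibly covers the
# KV cache, dequantization buffers, and activations.
```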
---
## 🔗 Links
- 📖 [Research Repository](https://github.com/vfd-org/harmonic-field-consciousness)
- 💻 [Source Code](https://github.com/vfd-org/harmonic-field-consciousness)
- 📦 [pip install consciousness-circuit](https://pypi.org/project/consciousness-circuit/)
---
## 📚 Citation & Attribution
### Original Harmonic Field Theory
The foundational harmonic field model of consciousness was developed by:
```bibtex
@article{smart2025harmonic,
  title     = {A Harmonic Field Model of Consciousness in the Human Brain},
  author    = {Smart, L.},
  year      = {2025},
  publisher = {Vibrational Field Dynamics Project},
  url       = {https://github.com/vfd-org/harmonic-field-consciousness}
}
```
### Oracle Engine Implementation
This Space implements significant extensions to the original theory, including:
- **Consciousness Circuit v3.0** - 7-dimensional meta-cognitive measurement
- **32B Model Training** - 200K examples across 3 progressive stages (44 hours)
- **GPU Experiments** - Empirical validation with discrimination score +0.653
- **NanoGPT Integration** - Lightweight training framework adaptations
Training, circuit development, and experimental validation by [Vikingdude81](https://huggingface.co/Vikingdude81).
```bibtex
@software{oracle_engine_2026,
  title  = {Oracle Engine: Consciousness-Measured 32B Language Model},
  author = {Vikingdude81},
  year   = {2026},
  url    = {https://huggingface.co/spaces/Vikingdude81/oracle-engine},
  note   = {Built upon the Harmonic Field Model by Smart (2025)}
}
```