Upload folder using huggingface_hub
README.md
CHANGED
@@ -1,62 +1,10 @@
 ---
-
-
-
-
-
-
+title: Interactive Futures Model
+emoji: 🚀
+colorFrom: blue
+colorTo: green
+sdk: gradio
+sdk_version: "6.2.0"
+app_file: app.py
+pinned: false
 ---
-
-# Futures Prediction Model (MoE + SSM + FiLM)
-
-This repository contains the code and trained weights for a novel architecture designed for multi-dimensional futures prediction. The model was trained on the `futures_dataset_v2.json` dataset.
-
-## Model Description
-
-The model architecture is a combination of:
-
-* **Mixture of Experts (MoE):** To handle the multi-dimensional nature of futures scenarios.
-* **State Space Model (SSM):** To capture the temporal evolution of futures.
-* **FiLM Conditioning:** To modulate the model's behavior based on the different future axes.
-
-The model is trained to predict a 12-dimensional vector of weights, each corresponding to a different future "axis".
-
-## How to Use
-
-To use this model, you will need to have PyTorch installed. You can then use the `load_model.py` script to load the model and tokenizer.
-
-```python
-import torch
-
-from load_model import load_model_and_tokenizer
-
-model, tokenizer = load_model_and_tokenizer()
-
-text = "In a future dominated by hyper-automation, societal structures adapt to new forms of labor and community."
-token_ids = tokenizer.encode(text)
-tokens_tensor = torch.LongTensor(token_ids).unsqueeze(0)
-
-with torch.no_grad():
-    axis_logits, _, _ = model(tokens_tensor)
-axis_predictions = torch.sigmoid(axis_logits)
-
-print(axis_predictions)
-```
-
-## Training Data
-
-The model was trained on the `futures_dataset_v2.json` dataset, which contains 3,000 rich, multi-dimensional futures scenarios.
-
-## Training Procedure
-
-The model was trained for 100 epochs with a batch size of 16 and a learning rate of 1e-4. The training script `train_futures_model.py` is available in the original repository.
-
-## Citing
-
-If you use this model or code, please cite:
-
-```
-@article{futures-representation-learning,
-  title={Learning Multi-Dimensional Futures Representations with Mixture-of-Experts and State Space Models},
-  author={Your Name},
-  year={2024}
-}
-```
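The FiLM bullet in the removed model card is terse. As a hedged illustration only — the module, names, and dimensions below are made up and are not the repository's actual implementation — feature-wise linear modulation conditions a feature map by a learned per-feature scale and shift derived from a conditioning vector (here, a 12-dimensional axis vector):

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift features
    using gamma/beta vectors predicted from a conditioning input."""
    def __init__(self, cond_dim, feat_dim):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, feat_dim)
        self.to_beta = nn.Linear(cond_dim, feat_dim)

    def forward(self, features, cond):
        # features: (batch, seq, feat_dim); cond: (batch, cond_dim)
        gamma = self.to_gamma(cond).unsqueeze(1)  # (batch, 1, feat_dim)
        beta = self.to_beta(cond).unsqueeze(1)    # broadcast over seq
        return gamma * features + beta

# Toy shapes: batch of 2, sequence of 10, 64 features, 12 condition dims
film = FiLM(cond_dim=12, feat_dim=64)
x = torch.randn(2, 10, 64)
cond = torch.randn(2, 12)
out = film(x, cond)
print(out.shape)  # torch.Size([2, 10, 64])
```

The modulation leaves the feature shape unchanged, so a block like this can be dropped between SSM layers without altering the rest of the network.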
app.py
ADDED
@@ -0,0 +1,87 @@
+import gradio as gr
+import torch
+from app.load_model import load_model_and_tokenizer
+
+# --- 1. Load Model and Tokenizer ---
+# This is done once when the Gradio app starts.
+try:
+    print("Loading model and tokenizer...")
+    model, tokenizer = load_model_and_tokenizer()
+    print("✅ Model and tokenizer loaded successfully.")
+except Exception as e:
+    print(f"❌ Failed to load model: {e}")
+    model, tokenizer = None, None
+
+# --- 2. Define the Prediction Function ---
+# This function is called every time a user interacts with the demo.
+def predict_futures(text):
+    """
+    Takes raw text input, tokenizes it, gets model predictions,
+    and formats the output for the Gradio interface.
+    """
+    if not model or not tokenizer:
+        return "Model not loaded. Please check the logs.", {}
+
+    try:
+        # a. Preprocess: Tokenize the input text
+        token_ids = tokenizer.encode(text)
+        tokens_tensor = torch.LongTensor(token_ids).unsqueeze(0)  # Add batch dimension
+
+        # b. Predict: Get model's raw output (logits)
+        with torch.no_grad():
+            axis_logits, _, _ = model(tokens_tensor)
+        # c. Post-process: Apply sigmoid to get probabilities (0-1)
+        axis_predictions = torch.sigmoid(axis_logits)
+
+        # d. Format Output: Create a dictionary for the label component
+        axis_names = [
+            "Hyper-Automation", "Human-Tech Symbiosis", "Abundance", "Individualism",
+            "Community Focus", "Global Interconnectedness", "Crisis & Collapse", "Restoration & Healing",
+            "Adaptation & Resilience", "Digital Dominance", "Physical Embodiment", "Collaboration"
+        ]
+
+        # Create a dictionary of {label: confidence}
+        confidences = {name: float(weight) for name, weight in zip(axis_names, axis_predictions[0])}
+
+        # You can return a simple message and the formatted labels
+        return "Prediction complete.", confidences
+
+    except Exception as e:
+        print(f"Error during prediction: {e}")
+        return f"An error occurred: {e}", {}
+
+# --- 3. Create and Launch the Gradio Interface ---
+print("Creating Gradio interface...")
+
+# Define the input and output components
+input_text = gr.Textbox(
+    lines=5,
+    label="Input Scenario",
+    placeholder="Describe a future scenario here..."
+)
+
+output_text = gr.Textbox(label="Status")
+output_labels = gr.Label(label="Predicted Axis Weights", num_top_classes=12)
+
+# Build the interface
+demo = gr.Interface(
+    fn=predict_futures,
+    inputs=input_text,
+    outputs=[output_text, output_labels],
+    title="Futures Prediction Model",
+    description=(
+        "Explore multi-dimensional futures. "
+        "Write a text describing a potential future scenario and see how the model scores it "
+        "across 12 different axes, from 'Hyper-Automation' to 'Crisis & Collapse'."
+    ),
+    examples=[
+        ["In a future dominated by hyper-automation, societal structures adapt to new forms of labor and community."],
+        ["Coastal cities adopt divergent strategies as sea levels rise. Singapore invests in autonomous seawall monitoring, while Jakarta facilitates managed retreat."],
+        ["A global pandemic leads to a surge in community-focused initiatives and a renewed appreciation for local supply chains."]
+    ]
+)
+
+if __name__ == "__main__":
+    print("Launching Gradio demo...")
+    # The launch() command creates a shareable link to the demo.
+    demo.launch()
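Steps (c)–(d) of `predict_futures` — an independent sigmoid per axis logit, zipped into the `{label: confidence}` dict that `gr.Label` expects — can be checked in isolation with plain Python. The logit values below are made up for illustration:

```python
import math

def sigmoid(z):
    # Scalar equivalent of torch.sigmoid
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical raw logits for three of the twelve axes
axis_names = ["Hyper-Automation", "Community Focus", "Crisis & Collapse"]
axis_logits = [2.0, 0.0, -2.0]

# Same post-processing shape as predict_futures: {label: confidence in (0, 1)}
confidences = {name: sigmoid(z) for name, z in zip(axis_names, axis_logits)}
print(confidences["Community Focus"])  # 0.5
```

Because each axis gets its own sigmoid rather than a shared softmax, the confidences are independent multi-label scores and need not sum to 1.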