---
base_model:
- Qwen/Qwen3-8B
library_name: transformers
license: mit
pipeline_tag: text-generation
---
|
|
|
|
|
# Model Card for SubconsciousDev/TIM-8b-preview

TIM is a model that reasons over recursive task trees represented as JSON structures.
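This card does not specify the task-tree schema; as a purely illustrative sketch (the field names `task` and `subtasks` are assumptions made here for illustration, not TIM's actual format — see the TIMRUN repository for the real schema), a recursive JSON task tree and a simple traversal might look like:

```python
import json

# Hypothetical schema: "task" holds a description, "subtasks" holds child
# nodes of the same shape. These names are illustrative assumptions only.
tree_json = """
{
  "task": "Plan a trip to Paris",
  "subtasks": [
    {"task": "Book flights", "subtasks": []},
    {"task": "Reserve hotel",
     "subtasks": [{"task": "Compare prices", "subtasks": []}]}
  ]
}
"""

def count_tasks(node):
    """Recursively count this task and every task below it."""
    return 1 + sum(count_tasks(child) for child in node["subtasks"])

tree = json.loads(tree_json)
print(count_tasks(tree))  # → 4
```

The recursion mirrors the tree itself: each node contributes one task plus whatever its subtrees contribute, which is the kind of structure TIM is trained to reason over.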
|
|
|
|
|
## Model Details

### Model Description

- **Developed by:** MIT and Subconscious
- **Model type:** Structural reasoning model
- **License:** MIT License
- **Finetuned from model:** Qwen/Qwen3-8B
|
|
|
|
|
### Model Sources

- **Repository:** [TIMRUN](https://github.com/subconscious-systems/TIMRUN)
- **Paper:** [Beyond Context Limits: Subconscious Threads for Long-Horizon Reasoning](https://arxiv.org/pdf/2507.16784)
- **Demo:** [Subconscious API platform](https://www.subconscious.dev/)
|
|
|
|
|
## Sample Usage

You can use this model with the `transformers` library; pass `trust_remote_code=True` when loading, since the model's custom code lives in its repository.
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "SubconsciousDev/TIM-8b-preview"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # use torch.float16 on GPUs without bfloat16 support
    device_map="auto",
    trust_remote_code=True,
)

# Example: simple text generation
prompt_text = "What is the capital of France?"
input_ids = tokenizer(prompt_text, return_tensors="pt").input_ids.to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=50, do_sample=True, temperature=0.7)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```