---
license: apache-2.0
tags:
- biology
- genomics
- single-cell
library_name: transformers
---

# TXModel - Hub-Ready Version

**Zero-hassle deployment!** Requires ONLY:

```bash
pip install transformers torch safetensors
```

## 🚀 Quick Start

```python
from transformers import AutoModel
import torch

# Load from Hub (one command!)
model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Use immediately
genes = torch.randint(0, 100, (2, 10))
values = torch.rand(2, 10)
masks = torch.ones(2, 10).bool()

model.eval()
with torch.no_grad():
    output = model(genes=genes, values=values, gen_masks=masks)

print(output.last_hidden_state.shape)  # [2, 10, d_model]
```

## ✨ Features

- ✅ **Single file** - all code in `modeling.py`
- ✅ **Minimal dependencies** - just `transformers`, `torch`, and `safetensors`
- ✅ **Works with AutoModel** out of the box
- ✅ **No import errors** - everything self-contained

## 📦 Installation

```bash
pip install transformers torch safetensors
```

That's it!
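
To confirm the environment is ready before loading the model, a quick sanity check:

```python
# Sanity check: all three requirements import cleanly
import transformers
import torch
import safetensors  # needed at load time for .safetensors weights

print("transformers", transformers.__version__)
print("torch", torch.__version__)
```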

## 🎯 Usage

### Basic Inference

```python
from transformers import AutoModel
import torch

model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```
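
Once the model is on a device, the inputs have to live there too. A minimal sketch reusing the toy inputs from the Quick Start above (shapes are illustrative):

```python
# Create inputs directly on the model's device
genes = torch.randint(0, 100, (2, 10), device=device)
values = torch.rand(2, 10, device=device)
masks = torch.ones(2, 10, dtype=torch.bool, device=device)

model.eval()
with torch.no_grad():
    output = model(genes=genes, values=values, gen_masks=masks)

embeddings = output.last_hidden_state.cpu()  # move results back for downstream use
```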

### Batch Processing

```python
# Your data (keys must match the forward signature: genes, values, gen_masks)
batch = {
    'genes': torch.randint(0, 1000, (32, 100)),
    'values': torch.rand(32, 100),
    'gen_masks': torch.ones(32, 100).bool()
}

# Process
model.eval()
with torch.no_grad():
    output = model(**batch)
```
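
If you need one embedding per cell rather than per gene position, mask-aware mean pooling over the hidden states is a common pattern. This is a sketch, not part of the model's API; it assumes `gen_masks` marks valid positions with `True`:

```python
# Average hidden states over valid (unmasked) positions only
hidden = output.last_hidden_state                  # [32, 100, d_model]
mask = batch['gen_masks'].unsqueeze(-1).float()    # [32, 100, 1]
cell_embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
print(cell_embeddings.shape)                       # [32, d_model]
```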

## 📊 Model Details

- **Parameters**: ~70M
- **Architecture**: Transformer encoder
- **Hidden size**: 512
- **Layers**: 12
- **Heads**: 8
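
You can verify these hyperparameters from the Hub without downloading the weights; `AutoConfig` fetches only the config file. The exact attribute names depend on this model's custom config class, so treat the printed fields as the source of truth:

```python
from transformers import AutoConfig

# Fetch just the config (no weights) to inspect the architecture
config = AutoConfig.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)
print(config)
```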

## 📚 Citation

```bibtex
@article{tahoe2024,
  title={Tahoe-x1},
  author={...},
  year={2024}
}
```