---
license: apache-2.0
tags:
- biology
- genomics
- single-cell
library_name: transformers
---
# TXModel - Hub-Ready Version
**Zero-hassle deployment!** Requires ONLY:
```bash
pip install transformers torch safetensors
```
## 🚀 Quick Start
```python
from transformers import AutoModel
import torch

# Load from Hub (one command!)
model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Use immediately
genes = torch.randint(0, 100, (2, 10))
values = torch.rand(2, 10)
masks = torch.ones(2, 10).bool()

model.eval()
with torch.no_grad():
    output = model(genes=genes, values=values, gen_masks=masks)

print(output.last_hidden_state.shape)  # [2, 10, 512] (d_model = 512)
```
## ✨ Features
- ✅ **Single file** - all code in `modeling.py`
- ✅ **Zero dependencies** (except `transformers`, `torch`, and `safetensors`)
- ✅ **Works with AutoModel** out of the box
- ✅ **No import errors** - everything is self-contained
## 📦 Installation
```bash
pip install transformers torch safetensors
```
That's it!
## 🎯 Usage
### Basic Inference
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```
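When the model sits on the GPU, the input tensors must be moved to the same device before calling the forward pass. A minimal sketch (dummy tensors stand in for a real tokenized batch; the key names follow the forward signature shown in the Quick Start):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Dummy inputs standing in for a real tokenized batch
inputs = {
    "genes": torch.randint(0, 100, (2, 10)),
    "values": torch.rand(2, 10),
    "gen_masks": torch.ones(2, 10).bool(),
}

# Move every tensor to the model's device before inference
inputs = {k: v.to(device) for k, v in inputs.items()}
```

Forgetting this step is the most common cause of a `RuntimeError` complaining that tensors are on different devices.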
### Batch Processing
```python
# Your data
batch = {
    'genes': torch.randint(0, 1000, (32, 100)),
    'values': torch.rand(32, 100),
    'gen_masks': torch.ones(32, 100).bool()  # key must match the forward signature
}

# Process
model.eval()
with torch.no_grad():
    output = model(**batch)
```
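A common next step after batch inference is to pool the per-gene hidden states into one embedding per cell. This is a sketch, not part of the model's documented API: it assumes `output.last_hidden_state` has shape `[batch, seq_len, d_model]` as shown in the Quick Start, and uses the boolean mask to average only over valid positions (a dummy tensor stands in for the model output here):

```python
import torch

# Stand-in for output.last_hidden_state: [batch=32, seq_len=100, d_model=512]
hidden = torch.rand(32, 100, 512)
mask = torch.ones(32, 100).bool()  # True where a gene token is valid

# Masked mean pooling: zero out padded positions, then divide by the
# number of valid positions in each row
mask_f = mask.unsqueeze(-1).float()                      # [32, 100, 1]
cell_emb = (hidden * mask_f).sum(dim=1) / mask_f.sum(dim=1)

print(cell_emb.shape)  # torch.Size([32, 512])
```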
## 📊 Model Details
- **Parameters**: ~70M
- **Architecture**: Transformer Encoder
- **Hidden Size**: 512
- **Layers**: 12
- **Heads**: 8
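As a rough sanity check on the parameter count, the 12 stated encoder layers can be sized from the hyperparameters above. This back-of-the-envelope sketch assumes a standard Transformer encoder layer with a feed-forward width of 2048 (an assumption; the card does not state it) and ignores embeddings, which account for the remainder of the ~70M total:

```python
d_model, n_layers = 512, 12
d_ff = 2048  # assumed feed-forward width, not stated on this card

# Per-layer parameters of a standard encoder layer (weights + biases):
attn = 3 * d_model * d_model + 3 * d_model   # QKV projections
attn += d_model * d_model + d_model          # attention output projection
ffn = d_model * d_ff + d_ff                  # first feed-forward linear
ffn += d_ff * d_model + d_model              # second feed-forward linear
norms = 2 * 2 * d_model                      # two LayerNorms

per_layer = attn + ffn + norms
print(f"{n_layers * per_layer:,}")  # 37,828,608 -> ~38M in the encoder stack
```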
## 📚 Citation
```bibtex
@article{tahoe2024,
  title={Tahoe-x1},
  author={...},
  year={2024}
}
```