---
license: apache-2.0
tags:
- biology
- genomics
- single-cell
library_name: transformers
---

# TXModel - Hub-Ready Version

**Zero-hassle deployment!** Requires only:

```bash
pip install transformers torch safetensors
```

## 🚀 Quick Start

```python
from transformers import AutoModel
import torch

# Load from the Hub (one command!)
model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Use immediately
genes = torch.randint(0, 100, (2, 10))
values = torch.rand(2, 10)
masks = torch.ones(2, 10).bool()

model.eval()
with torch.no_grad():
    output = model(genes=genes, values=values, gen_masks=masks)

print(output.last_hidden_state.shape)  # [2, 10, d_model]
```

## ✨ Features

- ✅ **Single file** - all code in `modeling.py`
- ✅ **Minimal dependencies** - only transformers, torch, and safetensors
- ✅ **Works with AutoModel** out of the box
- ✅ **No import errors** - everything self-contained

## 📦 Installation

```bash
pip install transformers torch safetensors
```

That's it!

## 🎯 Usage

### Basic Inference

```python
from transformers import AutoModel
import torch

model = AutoModel.from_pretrained(
    "your-username/model-name",
    trust_remote_code=True
)

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```

### Batch Processing

```python
# Your data (keys must match the model's forward signature)
batch = {
    'genes': torch.randint(0, 1000, (32, 100)),
    'values': torch.rand(32, 100),
    'gen_masks': torch.ones(32, 100).bool()
}

# Process
model.eval()
with torch.no_grad():
    output = model(**batch)
```

## 📊 Model Details

- **Parameters**: ~70M
- **Architecture**: Transformer Encoder
- **Hidden Size**: 512
- **Layers**: 12
- **Heads**: 8

## 📝 Citation

```bibtex
@article{tahoe2024,
  title={Tahoe-x1},
  author={...},
  year={2024}
}
```
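## 🧬 Getting Per-Cell Embeddings

The model's `last_hidden_state` is a per-gene embedding tensor of shape `[batch, seq, d_model]`. A common way to reduce it to one vector per cell is masked mean pooling over the gene axis. The `masked_mean_pool` helper below is an illustrative sketch, not part of the released API; dummy tensors stand in for real model output.

```python
import torch

def masked_mean_pool(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # hidden: [batch, seq, d_model]; mask: [batch, seq] bool (True = keep)
    mask = mask.unsqueeze(-1).float()        # [batch, seq, 1]
    summed = (hidden * mask).sum(dim=1)      # [batch, d_model]
    counts = mask.sum(dim=1).clamp(min=1.0)  # avoid division by zero
    return summed / counts

# Stand-ins for output.last_hidden_state and the gene mask
hidden = torch.rand(2, 10, 512)
mask = torch.ones(2, 10).bool()

cell_emb = masked_mean_pool(hidden, mask)
print(cell_emb.shape)  # torch.Size([2, 512])
```

With real model output, replace `hidden` with `output.last_hidden_state` and `mask` with the same boolean mask passed to the forward call.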
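## 🔢 Parameter Count Sanity Check

A back-of-the-envelope check on the ~70M figure in Model Details: a standard encoder layer with four attention projection matrices and a two-matrix FFN at 4× expansion has roughly 12·d² weights, so 12 layers at d=512 account for about 37.7M parameters, with embeddings and output heads presumably making up the remainder. The 4× FFN expansion is an assumption for this sketch, not stated by the model card.

```python
def encoder_layer_params(d_model: int) -> int:
    # Q, K, V, and output projections: 4 * d^2
    attn = 4 * d_model * d_model
    # Two FFN matrices at assumed 4x expansion: 2 * (d * 4d) = 8 * d^2
    ffn = 8 * d_model * d_model
    # Total per layer: 12 * d^2 (ignoring biases and LayerNorm params)
    return attn + ffn

total = 12 * encoder_layer_params(512)
print(f"{total / 1e6:.1f}M")  # 37.7M for the 12 encoder layers alone
```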