Tags: Feature Extraction · Transformers · Safetensors · sdar · llama-factory · full · Generated from Trainer · custom_code
How to use autoprogrammer/sdar_4b_random_mask-final with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "feature-extraction",
    model="autoprogrammer/sdar_4b_random_mask-final",
    trust_remote_code=True,
)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "autoprogrammer/sdar_4b_random_mask-final",
    trust_remote_code=True,
    dtype="auto",
)
```
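The feature-extraction pipeline returns one vector per input token, so a common follow-up step is to pool those vectors into a single sentence embedding. A minimal sketch of mean-pooling, with `token_vectors` standing in for `pipe(text)[0]` (the shape here is hypothetical; the real hidden size depends on the model config):

```python
import numpy as np

# Stand-in for pipe("some text")[0]: one vector per token.
# 5 tokens x 8 hidden dims is an illustrative shape only.
token_vectors = np.random.rand(5, 8)

# Mean-pool across the token axis to get one fixed-size embedding.
sentence_embedding = token_vectors.mean(axis=0)

print(sentence_embedding.shape)
```

Mean-pooling is only one choice; taking the first token's vector or max-pooling are common alternatives depending on how the model was trained.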
Training summary reported by the Trainer:

```json
{
  "effective_tokens_per_sec": 2253.3017374501496,
  "epoch": 3.0,
  "total_flos": 2.7205837452948275e+17,
  "train_loss": 0.11475353346251355,
  "train_runtime": 474.0913,
  "train_samples_per_second": 47.288,
  "train_steps_per_second": 0.74
}
```