---
base_model: microsoft/rho-math-1b-interpreter-v0.1
library_name: transformers
license: mit
tags:
- mistral
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- nlp
- math
language:
- en
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# microsoft/rho-math-1b-interpreter-v0.1 AWQ
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [rho-math-1b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1)

## Model summary
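
## How to use

Below is a minimal, illustrative loading sketch using the AutoAWQ library (`pip install autoawq`). The repo id is a placeholder for this quantized checkpoint, and the ChatML prompt format is assumed from the `chatml` tag in the metadata; adjust both for your setup.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder: substitute the actual repo id of this AWQ quant.
quant_path = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(quant_path)
# fuse_layers fuses attention/MLP modules for faster inference.
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)

# ChatML prompt format, per the card's tags.
prompt = (
    "<|im_start|>user\n"
    "Compute 15 * 17.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```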
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining: rather than computing the loss over every token, they selectively train on the clean, useful tokens that are aligned with the desired distribution.
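
As a rough illustration of the idea (a sketch, not the authors' implementation), SLM scores each token with a frozen reference model and keeps only the tokens with the highest excess loss, i.e. tokens the training model still handles worse than the reference. The function shape and the `keep_ratio` knob below are assumptions.

```python
import torch
import torch.nn.functional as F

def selective_lm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Illustrative Selective Language Modeling (SLM) loss.

    Assumes logits/labels are already shifted for next-token prediction
    and ref_logits come from a frozen reference model.
    """
    vocab = logits.size(-1)
    # Per-token cross-entropy for the training and reference models.
    tok_loss = F.cross_entropy(
        logits.view(-1, vocab), labels.view(-1), reduction="none"
    )
    ref_loss = F.cross_entropy(
        ref_logits.detach().view(-1, vocab), labels.view(-1), reduction="none"
    )

    # Excess loss: high values mark clean, useful tokens to train on;
    # low values mark noisy tokens that are dropped from the loss.
    excess = tok_loss.detach() - ref_loss
    k = max(1, int(keep_ratio * excess.numel()))
    _, idx = torch.topk(excess, k)

    # Average cross-entropy over the selected tokens only.
    return tok_loss[idx].mean()
```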