---
license: mit
language:
- en
base_model:
- PrimeIntellect/INTELLECT-2
pipeline_tag: text-generation
---
### Outlook
We have quantised the model to 2-bit so that it can be served at scale on low-end GPUs. The quantisation was performed with the llama.cpp library.
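A quantised model in llama.cpp's GGUF format can typically be run with the `llama-cli` binary that ships with llama.cpp. A minimal sketch, assuming the quantised weights were exported to a file named `INTELLECT-2-Q2_K.gguf` (the filename is an assumption, not stated in this card):

```shell
# Sketch: run the 2-bit GGUF with llama.cpp's CLI.
# "INTELLECT-2-Q2_K.gguf" is a hypothetical filename; use the actual file from this repo.
# -ngl offloads layers to the GPU; lower it (or set 0) on cards with little VRAM.
./llama-cli \
  -m INTELLECT-2-Q2_K.gguf \
  -p "Explain reinforcement learning in one paragraph." \
  -n 256 \
  -ngl 20
```

On very constrained cards, reducing `-ngl` keeps more layers on the CPU at the cost of speed; the 2-bit quantisation is what makes the weights small enough to fit in low-end VRAM in the first place.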