---
base_model: Locutusque/Selocan-2x7B-v1
inference: false
library_name: transformers
license: apache-2.0
merged_models:
- TURKCELL/Turkcell-LLM-7b-v1
- NovusResearch/Novus-7b-tr_v1
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- TURKCELL/Turkcell-LLM-7b-v1
- NovusResearch/Novus-7b-tr_v1
---
# ozayezerceli/Selocan-2x7B-v1 AWQ

- Model creator: [ozayezerceli](https://huggingface.co/ozayezerceli)
- Original model: [Selocan-2x7B-v1](https://huggingface.co/Locutusque/Selocan-2x7B-v1)
## Model Summary

Selocan-2x7B-v1 is a Mixture of Experts (MoE) model built from the following Turkish 7B models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [TURKCELL/Turkcell-LLM-7b-v1](https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1)
* [NovusResearch/Novus-7b-tr_v1](https://huggingface.co/NovusResearch/Novus-7b-tr_v1)

This repository contains a 4-bit AWQ quantization of that model.
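Below is a minimal usage sketch. It assumes the `autoawq` package is installed alongside `transformers`, and uses the placeholder repo id `<your-org>/Selocan-2x7B-v1-AWQ` (substitute this repository's actual id):

```python
# Minimal sketch for loading the AWQ-quantized MoE and generating text.
# Assumptions: `pip install transformers autoawq` and a CUDA-capable GPU;
# the repo id below is a placeholder, not a confirmed path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<your-org>/Selocan-2x7B-v1-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit weights on available GPUs
)

# Turkish prompt, since both expert models are Turkish-language 7B models.
prompt = "Türkiye'nin başkenti neresidir?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because AWQ stores the weights in 4-bit precision, the 2x7B MoE needs substantially less GPU memory than the full-precision merge.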