MakeGemma3
This is a merge of pre-trained language models created using mergekit. This merged model significantly improves multilingual support and linguistic consistency, addressing the shortcomings of my previously submitted merge. It combines general knowledge with medical expertise to provide diverse information, and the multi-stage merge preserves the uncensored nature of the original model. When interpreting visual content, use system prompts that encourage careful analysis.

I have created a GGUF, so please check the GGUF folder. Performance evaluation of this model's mmproj is still being verified, so it may be best to perform the conversion yourself.

Note: After testing, I found it best to use the mmproj from the general-purpose Gemma3, and the mmproj file has been updated accordingly. I recommend replacing the mmproj, or using the one from Gemma-3-27b-it.
Merge Details
Merge Method
This model was merged using the NuSLERP merge method.
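For reference, spherical linear interpolation (SLERP) is the operation underlying SLERP-family merge methods such as NuSLERP: parameters are interpolated along the arc between two weight vectors rather than along the straight line. Below is a minimal NumPy sketch of the idea, not mergekit's actual implementation; the `slerp` function name and signature are illustrative only.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Normalize copies of the two weight vectors to measure the angle between them
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two directions
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    sin_theta = np.sin(theta)
    # Interpolate along the arc; t=0 returns a, t=1 returns b
    return (np.sin((1 - t) * theta) / sin_theta) * a + \
           (np.sin(t * theta) / sin_theta) * b
```

Compared with a plain weighted average, this keeps the interpolated tensor's magnitude closer to the endpoints, which is the usual motivation for SLERP-based merging.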
Models Merged
The following models were included in the merge:
- MakeGemma3, itself a merge of:
  * drwlf/medgemma-27b-it-abliterated
  * test_base (summykai/gemma3-27b-abliterated-dpo with additional layers added)
- Gemma3 (a merge of the unsloth gemma3 and MedGemma models)
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Gemma3
    parameters:
      weight: 1.618033988749
  - model: MakeGemma3
    parameters:
      weight: 1.0
merge_method: nuslerp
tokenizer_source: unsloth/gemma-3-27b-it
dtype: bfloat16
```
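If the per-model weights are normalized into a single interpolation fraction, as is a common reading of two-model weighted SLERP merges (this is an assumption, not confirmed from the mergekit source), the golden-ratio weight above gives the MakeGemma3 branch roughly a 38% share of the blend:

```python
# Assumed normalization of the relative weights from the YAML config above
w_gemma3 = 1.618033988749   # golden-ratio weight for the Gemma3 branch
w_makegemma3 = 1.0          # weight for the MakeGemma3 branch

# Hypothetical interpolation fraction attributed to MakeGemma3
t = w_makegemma3 / (w_gemma3 + w_makegemma3)
print(round(t, 6))
```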