---
base_model:
- Novaciano/qp-3.2-1B
- syvai/emotion-reasoning-1b
library_name: transformers
tags:
- mergekit
- merge
---

# Pyromancer 3.2 1B
![image/png](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcT8wjOTgy89I2I_S490bC1upZAPcqnMl3lObVDOF788vSmbk6NdWjuRWQY&s=10)
## Merge Details

### Merge Method

This model was merged using the [Arcee Fusion](https://arcee.ai) merge method, with [Novaciano/qp-3.2-1B](https://huggingface.co/Novaciano/qp-3.2-1B) as the base model.

### Models Merged

The following model was included in the merge:

* [syvai/emotion-reasoning-1b](https://huggingface.co/syvai/emotion-reasoning-1b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Author: Dr. Novaciano
# Objective: Uncensored TEST Emotional 3.2 AI Model
# PROJECT: Pyromancer-3.2-1B

dtype: float32
out_dtype: bfloat16

merge_method: arcee_fusion
base_model: Novaciano/qp-3.2-1B

models:
  - model: Novaciano/qp-3.2-1B
    parameters:
      weight:
        - filter: attention
          value: 1.2
        - filter: mlp
          value: 1.3
        - value: 1.0
  - model: syvai/emotion-reasoning-1b
    parameters:
      weight:
        - filter: lm_head
          value: 0.2
        - filter: attention
          value: 0.5
        - value: 0.4

tie_word_embeddings: true
tie_output_embeddings: true
```
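As a rough illustration of how the filtered `weight` lists in the config are interpreted, the sketch below resolves a per-tensor weight from a list of `filter`/`value` entries. This is assumed semantics only (first entry whose filter string matches the tensor name wins, and the entry without a filter acts as the default); mergekit's actual matching goes through its per-architecture tensor-name definitions, and the tensor names used here are hypothetical.

```python
# Minimal sketch: resolve a merge weight for a tensor from a filtered
# weight list like the one in the YAML config above.
# Assumption: a filter matches when its string occurs in the tensor name;
# entries are checked in order; a filter-less entry is the catch-all.

def resolve_weight(tensor_name, weight_spec):
    for entry in weight_spec:
        flt = entry.get("filter")
        if flt is None or flt in tensor_name:
            return entry["value"]
    return None

# Weight list for the base model, transcribed from the config.
base_weights = [
    {"filter": "attention", "value": 1.2},
    {"filter": "mlp", "value": 1.3},
    {"value": 1.0},
]

# Hypothetical tensor names, chosen so each filter fires once.
print(resolve_weight("layers.0.attention.q_proj", base_weights))  # 1.2
print(resolve_weight("layers.0.mlp.up_proj", base_weights))       # 1.3
print(resolve_weight("model.embed_tokens.weight", base_weights))  # 1.0 (default)
```

Under this reading, attention tensors from the base model enter the fusion upweighted at 1.2, MLP tensors at 1.3, and everything else at 1.0, while the emotion-reasoning model contributes more lightly (0.2 for `lm_head`, 0.5 for attention, 0.4 elsewhere).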