---
base_model:
- netcat420/MFANN3bv0.3
- netcat420/MFANN3bv0.7
- liminerity/Phigments12
- netcat420/MFANN3bv0.6
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

# MFANN3bv0.7.10

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as the base model.
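In brief, TIES forms a task vector for each fine-tune (its delta from the base), trims each vector to its highest-magnitude entries (the `density` parameter), elects a per-parameter sign by weighted majority, and averages only the values that agree with the elected sign. Below is a minimal single-tensor sketch of that procedure in PyTorch; it is illustrative only, not mergekit's implementation, which also handles normalization, dtypes, and the per-layer gradients used in the configuration further down.

```python
# Illustrative single-tensor TIES merge: trim -> elect sign -> disjoint mean.
# A sketch of the algorithm, not mergekit's actual implementation.
import torch

def ties_merge(base, finetuned, densities, weights):
    trimmed = []
    for theta, density in zip(finetuned, densities):
        tau = theta - base                       # task vector
        k = max(1, int(density * tau.numel()))   # number of entries to keep
        thresh = tau.abs().flatten().kthvalue(tau.numel() - k + 1).values
        trimmed.append(torch.where(tau.abs() >= thresh, tau, torch.zeros_like(tau)))
    stacked = torch.stack([w * t for w, t in zip(weights, trimmed)])
    elected = stacked.sum(dim=0).sign()          # majority sign per parameter
    agree = (stacked.sign() == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```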

### Models Merged

The following models were included in the merge:

* [netcat420/MFANN3bv0.3](https://huggingface.co/netcat420/MFANN3bv0.3)
* [netcat420/MFANN3bv0.7](https://huggingface.co/netcat420/MFANN3bv0.7)
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: netcat420/MFANN3bv0.6
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: netcat420/MFANN3bv0.3
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: netcat420/MFANN3bv0.7
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
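The bracketed `density` and `weight` lists are mergekit gradients, interpolated across the model's layers. The configuration can be fed back to mergekit via its `mergekit-yaml` command-line entry point, or through its Python API as sketched below; the API names (`MergeConfiguration`, `run_merge`, `MergeOptions`) follow mergekit's README and may shift between versions, and the config filename here is hypothetical.

```python
# Sketch: run the merge config above through mergekit's Python API.
# Signatures are version-dependent; "mfann-config.yml" is a hypothetical
# file holding the YAML block from this card.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mfann-config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./MFANN3bv0.7.10",  # output directory for the merged weights
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```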

## System prompt

```
### System:
### thought-process:
```
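A minimal generation sketch with transformers follows. The hub id is assumed to be `netcat420/MFANN3bv0.7.10`, and since the card only specifies the two headers above, everything after them in the prompt is an assumption rather than a documented chat template.

```python
# Sketch: load the merged model and prepend the card's system-prompt headers.
# The hub id and the prompt layout after the headers are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANN3bv0.7.10"  # assumed hub id for the merged weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

prompt = "### System:\n### thought-process:\n" + "Explain TIES merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```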