Predonia-24B-V2.1

This version uses a different merge method (TIES) and seems to perform better.

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: E:\AI\Precog
    parameters:
      density: [1.0, 0.75, 0.50] # density gradient
      weight: 1.0
  - model: E:\AI\Cydonia 4.3
    parameters:
      density: 0.35
      weight: [0, 0.3, 0.4, 0.5] # weight gradient
merge_method: ties
base_model: E:\AI\Precog
parameters:
  normalize: true
  int8_mask: true
dtype: float16
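The `ties` method referenced above trims each model's task vector (its delta from the base model) to the densest fraction given by `density`, elects a per-parameter sign from the weighted deltas, and averages only the values that agree with that sign. A minimal NumPy sketch of that idea for a single tensor follows; the function name and exact tie-breaking are illustrative and not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, deltas, densities, weights, normalize=True):
    """Illustrative TIES merge for one tensor.

    base:      base model tensor
    deltas:    per-model task vectors (model weights minus base)
    densities: fraction of entries to keep per delta
    weights:   per-model merge weights
    """
    # Trim step: keep only the top-k magnitude entries of each delta.
    trimmed = []
    for d, density in zip(deltas, densities):
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # Sign election: majority sign of the weighted, trimmed deltas.
    sign = np.sign(sum(w * t for w, t in zip(weights, trimmed)))

    # Merge step: average only entries whose sign agrees with the elected sign.
    merged = np.zeros_like(base)
    norm = np.zeros_like(base)
    for w, t in zip(weights, trimmed):
        mask = (np.sign(t) == sign) & (t != 0)
        merged += np.where(mask, w * t, 0.0)
        norm += np.where(mask, w, 0.0)

    if normalize:
        merged = np.where(norm > 0, merged / np.maximum(norm, 1e-8), 0.0)
    return base + merged
```

With a single model at `density: 1.0` and `weight: 1.0` this reduces to simply applying that model's delta; with conflicting models, entries whose signs disagree with the elected majority are dropped rather than averaged away.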