# prototype-0.4x298

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SCE merge method, with /workspace/prototype-0.4x295 as the base model.
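SCE (Select, Calculate, Erase) fuses donor models into a base by keeping only the fraction of delta parameters with the highest variance across donors (the `select_topk` knob, set to 0.24 in the configuration below), then weighting each donor by the magnitude of its surviving deltas before summing them into the base. The following is a rough NumPy sketch of that selection-and-fusion idea, not mergekit's actual implementation; `sce_select` and its energy-based weighting are illustrative simplifications, and the sign-consensus "Erase" step is omitted:

```python
import numpy as np

def sce_select(base, donors, select_topk):
    """Simplified SCE-style fusion: keep only the top-variance delta
    positions across donor models, then add a weighted sum of the
    surviving deltas back onto the base weights."""
    # Task vectors: each donor's difference from the base model.
    deltas = np.stack([d - base for d in donors])      # (n_donors, *base.shape)
    # Select: mask in the `select_topk` fraction of positions with the
    # highest variance across donors.
    var = deltas.var(axis=0)
    k = max(1, int(select_topk * var.size))
    thresh = np.sort(var.ravel())[-k]
    mask = var >= thresh
    masked = deltas * mask
    # Calculate: weight each donor by the energy of its surviving deltas.
    energy = (masked ** 2).sum(axis=tuple(range(1, masked.ndim)))
    weights = energy / energy.sum()
    # Fuse the weighted deltas into the base (sign-consensus erase omitted).
    fused = np.tensordot(weights, masked, axes=1)
    return base + fused
```

With `select_topk=0.24`, roughly a quarter of each tensor's positions would be updated and the rest left identical to the base, which is why a strong base checkpoint (here, prototype-0.4x295) dominates the result.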

### Models Merged

The following models were included in the merge:

* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
* /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35
* /workspace/cache/models--ReadyArt--L3.3-The-Omega-Directive-70B-Unslop-v2.1/snapshots/61e03f3fe59b3b22b7e9e17e9bbe807f434da16d
* /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
* /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v2/snapshots/3a47eabeb5861db09dad26fcf0fb0d57114e40d3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--Sao10K--70B-L3.3-Cirrus-x1/snapshots/31d7ca33f3098d1eabe6f87a2c5b5bde85b20f35
  - model: /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v2/snapshots/3a47eabeb5861db09dad26fcf0fb0d57114e40d3
  - model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
  - model: /workspace/cache/models--ReadyArt--L3.3-The-Omega-Directive-70B-Unslop-v2.1/snapshots/61e03f3fe59b3b22b7e9e17e9bbe807f434da16d
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
base_model: /workspace/prototype-0.4x295
select_topk: 0.24
merge_method: sce
tokenizer:
  source: base
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
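With mergekit installed, a configuration like the one above is typically applied with the `mergekit-yaml` CLI. A minimal sketch, assuming the config has been saved to `sce-config.yaml` (the file name and output directory here are illustrative):

```shell
# Install mergekit (assumed from PyPI; a source checkout also works).
pip install mergekit

# Run the merge: config file in, merged model directory out.
mergekit-yaml ./sce-config.yaml ./prototype-0.4x298 --cuda
```

Note that merging five 70B-parameter models at `dtype: float32` is memory-intensive; expect to need substantial disk and RAM headroom.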