
The following 4 models are up for testing:

- UNNAMED-MODEL-A: SCE merge
- UNNAMED-MODEL-B: DARE_TIES merge (see the sketch at the end of this card)
- UNNAMED-MODEL-C: DARE_TIES merge (alternative model order)
- UNNAMED-MODEL-D: DELLA (TIES) merge

These candidates are built from the following component models:

SMART MODEL: TareksLab/Erudite-V1-Unleashed-LLaMA-70B

To make this model, I started with the following multilingual models, which were NEARSWAPPED in the order below to create TareksLab/Polyglot-V2-LLaMa-70B:

VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct > 
CYFRAGOVPL/Llama-PLLuM-70B-chat >
ensec/Llama3-70B-EnSecAI-Ru-Chat >
tokyotech-llm/Llama-3.3-Swallow-70B-v0.4 >
OpenBuddy/openbuddy-llama3.3-70b-v24.1-131k
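
As far as I understand it, NEARSWAP in mergekit works on exactly two models at a time, so chaining five models means feeding each step's output back in as the base of the next. A minimal sketch of the first link, assuming mergekit's nearswap method; the threshold t is an illustrative placeholder, not the value actually used:

```yaml
# Step 1 of the chain: SauerkrautLM as base, PLLuM swapped in.
# Each later step would use the previous output as base_model.
merge_method: nearswap
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct
models:
  - model: CYFRAGOVPL/Llama-PLLuM-70B-chat
parameters:
  t: 0.0001  # similarity threshold; placeholder value
dtype: bfloat16
```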

The reasoning behind this comes from something I read about multilingual models being smarter and scoring higher on leaderboards: because they are trained on varied linguistic patterns, they capture deeper semantic structures that aren't tied to any one language's idiosyncrasies. This diversity proves beneficial when merging, as the merged model can align knowledge from different sources more coherently.

I then used Polyglot as the base for a MODEL_STOCK merge with the following models to make TareksLab/Erudite-V1-Leashed-LLaMA-70B:

- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- NousResearch/Hermes-3-Llama-3.1-70B
- pankajmathur/orca_mini_v8_1_70b
- allenai/Llama-3.1-Tulu-3-70B

My thinking this time was to combine these smarter models, each of which had the bonus of a slightly divergent style from base LLaMa, reducing LLaMa-isms and some of the more common LLaMa slop.
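
For reference, MODEL_STOCK needs no per-model weights, since mergekit derives the interpolation weights geometrically from the models' distances to the base. A minimal sketch of the config, with dtype as an assumption:

```yaml
merge_method: model_stock
base_model: TareksLab/Polyglot-V2-LLaMa-70B
models:
  - model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - model: NousResearch/Hermes-3-Llama-3.1-70B
  - model: pankajmathur/orca_mini_v8_1_70b
  - model: allenai/Llama-3.1-Tulu-3-70B
dtype: bfloat16
```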

Once that was done, I used the TASK_ARITHMETIC merge method to lorablate it with mlabonne/Llama-3-70B-Instruct-abliterated-LORA, ensuring it retained most of its intelligence but lost its rather heavy censorship. The result:

TareksLab/Erudite-V1-Unleashed-LLaMA-70B
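
Lorablation with TASK_ARITHMETIC is usually done through mergekit's model+LoRA syntax, which applies the LoRA to a model before merging. A hedged sketch of what this step may have looked like; the weight and normalize settings are assumptions, not the actual recipe:

```yaml
merge_method: task_arithmetic
base_model: TareksLab/Erudite-V1-Leashed-LLaMA-70B
models:
  # "+" applies the abliterated LoRA on top of the model
  - model: TareksLab/Erudite-V1-Leashed-LLaMA-70B+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0  # assumption: full-strength application
parameters:
  normalize: false
dtype: bfloat16
```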

ROLE-PLAY MODEL: TareksLab/RolePlayer-V4-LLaMa-70B

For this model I started with a DELLA_LINEAR merge of the following models to create TareksLab/Doppleganger-V3-LLaMa-70B:

- SicariusSicariiStuff/Negative_LLAMA_70B (BASE)
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B
- flammenai/Mahou-1.5-llama3.1-70B
- flammenai/Llama3.1-Flammades-70B

The models above, with the exception of Negative_LLAMA, were all designed to be conversational assistants, embodying the roles they are given and producing realistic dialogue. My hope was that this would carry over into the RP model.
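
A minimal sketch of the DELLA_LINEAR recipe, assuming mergekit's della_linear method; the weight, density, epsilon, and lambda values are illustrative placeholders, not the actual recipe. The RolePlayer merge below follows the same shape with its own model list:

```yaml
merge_method: della_linear
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
models:
  - model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
    parameters: {weight: 0.33, density: 0.7, epsilon: 0.2}
  - model: flammenai/Mahou-1.5-llama3.1-70B
    parameters: {weight: 0.33, density: 0.7, epsilon: 0.2}
  - model: flammenai/Llama3.1-Flammades-70B
    parameters: {weight: 0.34, density: 0.7, epsilon: 0.2}
parameters:
  lambda: 1.0  # placeholder rescaling factor
dtype: bfloat16
```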

I then made TareksLab/RolePlayer-V4-LLaMa-70B with a DELLA_LINEAR merge of the following:

- LatitudeGames/Wayfarer-Large-70B-Llama-3.3 (BASE)
- TareksLab/Doppleganger-V3-LLaMa-70B
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4

CREATIVE WRITING MODEL: TareksLab/Wordsmith-V2.0-LLaMa-70B

My goal here was to have great prose with good creativity. To that end, I did an SCE merge of the following models:

- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-mhnnn-x1
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B (BASE)
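
A sketch of this SCE merge, assuming mergekit's sce method; select_topk is an illustrative placeholder, not the value actually used. The Anathema and Scrivener SCE merges below follow the same pattern:

```yaml
merge_method: sce
base_model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - model: Sao10K/L3.1-70B-Hanami-x1
  - model: Sao10K/70B-L3.3-mhnnn-x1
parameters:
  select_topk: 0.15  # placeholder: fraction of highest-variance elements kept
dtype: bfloat16
```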

UNHINGED MODEL: TareksLab/Anathema-V2-LLaMA-70B

This should come as no surprise, but this model was the hardest to balance out. I did an SCE merge of the following models:

- nbeerbower/Llama-3.1-Nemotron-lorablated-70B (BASE)
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- ReadyArt/Forgotten-Safeword-70B-3.6
- ReadyArt/Fallen-Safeword-70B-R1-v4.1
- allura-org/Bigger-Body-70b
- ReadyArt/Fallen-Abomination-70B-R1-v4.1

BASE MODEL: TareksLab/Scrivener-Base-V4-LLaMA-70B

For the base, I made a model that reinforces the creativity, prose, and intelligence of the other models. I did an SCE merge of the following models:

- Sao10K/L3-70B-Euryale-v2.1
- SicariusSicariiStuff/Negative_LLAMA_70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B (BASE)
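
Putting it all together, here is a hedged sketch of what the UNNAMED-MODEL-B DARE_TIES recipe might look like, combining the four component models on the Scrivener base; the weights and densities are illustrative placeholders, not the actual values:

```yaml
merge_method: dare_ties
base_model: TareksLab/Scrivener-Base-V4-LLaMA-70B
models:
  - model: TareksLab/Erudite-V1-Unleashed-LLaMA-70B
    parameters: {weight: 0.25, density: 0.5}
  - model: TareksLab/RolePlayer-V4-LLaMa-70B
    parameters: {weight: 0.25, density: 0.5}
  - model: TareksLab/Wordsmith-V2.0-LLaMa-70B
    parameters: {weight: 0.25, density: 0.5}
  - model: TareksLab/Anathema-V2-LLaMA-70B
    parameters: {weight: 0.25, density: 0.5}
dtype: bfloat16
```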