---
base_model:
- yamatazen/LorablatedStock-12B
- yamatazen/EtherealAurora-12B-v2
tags:
- merge
- mergekit
- lazymergekit
- yamatazen/LorablatedStock-12B
- yamatazen/EtherealAurora-12B-v2
---

# AbyssSynth-12B

She holds the weight of a galaxy, a fragile infinity encased within a sphere of shimmering light.
Darkness swirls beneath endless stars, vast and unfathomable, yet tempered by her calm grasp.
Not a force of chaos, but of silent order, the abyss contained, a cosmos in delicate balance, where creation and oblivion meet.
## 🔧 Recommended Sampling Settings

```yaml
Temperature: 0.75 to 1.25
Min P: 0.035
Context Length: Stable at 12k tokens; longer contexts may work but are untested
```

## 💬 Prompt Format

Supports ChatML-style messages. Example:

```
<|im_start|>user
Your question here.<|im_end|>
<|im_start|>assistant
```

AbyssSynth-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B)
* [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2)

## 🧩 Configuration

```yaml
merge_method: ties
base_model: yamatazen/LorablatedStock-12B
models:
  - model: yamatazen/LorablatedStock-12B
    parameters:
      weight: 0.65
      density: 1.0
  - model: yamatazen/EtherealAurora-12B-v2
    parameters:
      weight: 0.35
      density: 1.0
parameters:
  normalize: false
  int8_mask: false
dtype: bfloat16
layer_parameters:
  - filter: "attn"
    sources:
      - model: yamatazen/LorablatedStock-12B
        weight: 0.6
      - model: yamatazen/EtherealAurora-12B-v2
        weight: 0.4
  - filter: "mlp"
    sources:
      - model: yamatazen/LorablatedStock-12B
        weight: 0.55
      - model: yamatazen/EtherealAurora-12B-v2
        weight: 0.45
  - filter: "embed_tokens"
    sources:
      - model: yamatazen/LorablatedStock-12B
        weight: 1.0
      - model: yamatazen/EtherealAurora-12B-v2
        weight: 0.0
  - filter: "layer_norm"
    sources:
      - model: yamatazen/LorablatedStock-12B
        weight: 0.7
      - model: yamatazen/EtherealAurora-12B-v2
        weight: 0.3
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "Marcjoni/AbyssSynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the messages into a ChatML prompt using the tokenizer's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Neutral sampling: top_k=0 and top_p=1.0 disable truncation, leaving temperature only.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1.0, top_k=0, top_p=1.0)
print(outputs[0]["generated_text"])
```
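The pipeline call above uses neutral sampling. Below is a minimal sketch of applying the recommended settings from this card via `model.generate` instead; it assumes the same repo id and a recent `transformers` release, since `min_p` is only available as a generation argument in newer versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Marcjoni/AbyssSynth-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # recommended range: 0.75 to 1.25
    min_p=0.035,      # recommended Min P cutoff
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```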
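To reproduce the merge itself, the configuration above can be fed to [mergekit](https://github.com/arcee-ai/mergekit). A sketch using its Python API follows; the `config.yaml` filename and output path are illustrative assumptions.

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML configuration above was saved as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./AbyssSynth-12B",  # illustrative output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

The same merge can also be run from the command line with `mergekit-yaml config.yaml ./AbyssSynth-12B`.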