How to use from vLLM
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "FallenMerick/Bionic-Cetacean-20B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "FallenMerick/Bionic-Cetacean-20B",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
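You can also call the server programmatically. Below is a minimal sketch using the openai Python client (pip install openai); the base_url and placeholder api_key assume the default vLLM server settings shown above:

from openai import OpenAI

# Point the client at the local vLLM server. The key is a placeholder:
# vLLM does not require authentication unless started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="FallenMerick/Bionic-Cetacean-20B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)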
Use Docker
docker model run hf.co/FallenMerick/Bionic-Cetacean-20B
Bionic-Cetacean-20B

In the same vein as the legendary Psyonic-Cetacean-20B, I have attempted to create a 20B model that is equal parts creative and chaotic, while still remaining coherent enough for roleplaying purposes.
The three components used to create Bionic-Vaquita-13B were also used to create this stack.
Creativity and coherence were the primary criteria in the late-stage manual testing that led to selecting this particular stack.

This is a merge of pre-trained language models created using mergekit.

GGUF Quants:
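(The quant links were not captured here.) As a rough sketch, a downloaded quant can be run locally with llama-cpp-python (pip install llama-cpp-python); the filename below is hypothetical and should be replaced with the actual GGUF file you download:

from llama_cpp import Llama

# Hypothetical filename; substitute the GGUF quant you actually downloaded.
llm = Llama(
    model_path="./Bionic-Cetacean-20B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Once upon a time,", max_tokens=512, temperature=0.5)
print(out["choices"][0]["text"])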

Merge Details

Merge Method

This model was merged using the passthrough merge method, which stacks layer ranges from the source models directly rather than averaging their weights. The four slices in the configuration below total 13 + 18 + 18 + 13 = 62 layers, which is how three 13B-class parents yield a roughly 20B model.

Models Merged

The following models were included in the merge:

- FallenMerick/Psyfighter2-Orca2-Erebus3
- FallenMerick/XNoroChronos-Orca2-Noromaid
- FallenMerick/EstopianMaid-Orca2-MlewdBoros

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: FallenMerick/Psyfighter2-Orca2-Erebus3
      layer_range: [0, 13]
  - sources:
    - model: FallenMerick/XNoroChronos-Orca2-Noromaid
      layer_range: [8, 26]
  - sources:
    - model: FallenMerick/EstopianMaid-Orca2-MlewdBoros
      layer_range: [14, 32]
  - sources:
    - model: FallenMerick/Psyfighter2-Orca2-Erebus3
      layer_range: [27, 40]
merge_method: passthrough
dtype: bfloat16
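
To reproduce the merge, this configuration can be fed to mergekit. Below is a minimal sketch following mergekit's documented Python API, assuming the YAML above is saved as bionic-cetacean.yaml (the equivalent CLI invocation would be mergekit-yaml bionic-cetacean.yaml ./Bionic-Cetacean-20B):

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("bionic-cetacean.yaml") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the passthrough merge and write the stacked model to disk.
run_merge(
    config,
    out_path="./Bionic-Cetacean-20B",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)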