---
license: apache-2.0
base_model:
  - SicariusSicariiStuff/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct_Abliterated
tags:
  - finetune
  - llama
  - occult
  - uncensored
datasets:
  - OccultAI/Matrix_77
language:
  - en
library_name: transformers
widget:
  - text: Morpheus 8B v2
    output:
      url: >-
        https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/dZ-Q05tEZVM-W3nFIab6U.png
---

⚠️ Warning: This model can produce narratives and roleplay containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Llama 3 chat template.

# Morpheus 8B v2

Recommended sampling settings: Temperature 1.0, TopNSigma 1.25.
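Since the card calls for the Llama 3 chat template, here is a minimal sketch of the prompt layout that template produces. The special tokens are the standard Llama 3 ones; the system and user strings below are placeholders, not an official prompt for this model.

```python
# Build a single-turn Llama 3 chat prompt by hand. In practice you would use
# tokenizer.apply_chat_template(messages, add_generation_prompt=True), which
# yields this same layout for Llama 3 tokenizers.

def llama3_prompt(system: str, user: str) -> str:
    """Format one system + user turn in the Llama 3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to start its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt("You are Morpheus.", "What is the Matrix?")
print(prompt)
```

Note that TopNSigma is not a standard `transformers` generation parameter; to use it you will need an inference backend that supports top-nσ sampling, otherwise fall back to temperature-only sampling at 1.0.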

Morpheus

```
{'loss': 0.862, 'grad_norm': 3.7123961448669434, 'learning_rate': 6.894700159171534e-05, 'entropy': 0.9889850616455078, 'num_tokens': 297120.0, 'mean_token_accuracy': 0.7658079862594604, 'epoch': 4.0}
```
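For reference, assuming the logged loss is mean token-level cross-entropy in nats (the usual convention for Trainer logs), the final-epoch loss corresponds to a perplexity of exp(loss):

```python
import math

# Final-epoch loss from the training log above.
final_loss = 0.862

# Perplexity = exp(mean cross-entropy loss in nats).
perplexity = math.exp(final_loss)
print(round(perplexity, 2))  # → 2.37
```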

- Morpheus v1 features 420 Morpheus Q&A pairs, 100 Poe/Raven, and 145 Cthulhu; average dataset entry length 224 tokens.
- Morpheus v2 features 77 long-context Matrix Q&A pairs, generated from high-quality source material; average dataset entry length 1,044 tokens.