# Morbid v0.2.0 - Enterprise Insurance AI
A 22B parameter LLM fine-tuned for health and life insurance applications, built on Mistral Small Instruct.
## Model Details
- Base Model: mistralai/Mistral-Small-Instruct-2409
- Parameters: 22B
- Training: Supervised Fine-Tuning (SFT) with LoRA on insurance/actuarial dataset
- License: Apache 2.0
- Developed by: MorbidCorp
## Capabilities

### Insurance & Actuarial
- Life insurance products (term, whole, universal, variable)
- Health insurance (medical, dental, disability, LTC)
- Premium calculations and rate setting
- Underwriting and risk classification
- Claims analysis and management
- Regulatory compliance (NAIC, state/federal)
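Premium calculation and rate setting of the kind listed above can be illustrated with a standard expense-loading formula, G = (P + e) / (1 - L). This is a generic actuarial identity, not the model's actual rating method, and the figures below are made up for illustration:

```python
# Sketch: gross premium from a net premium with per-policy expenses and a
# percent-of-premium loading, G = (P + e) / (1 - L). Illustrative figures only.

def gross_premium(net_premium, per_policy_expense, loading_rate):
    """Gross annual premium covering benefits plus expenses."""
    if not 0 <= loading_rate < 1:
        raise ValueError("loading_rate must be in [0, 1)")
    return (net_premium + per_policy_expense) / (1.0 - loading_rate)

# $480 net premium, $30 per-policy expense, 15% premium loading
print(gross_premium(480.0, 30.0, 0.15))  # -> 600.0
```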
### Actuarial Mathematics
- Mortality tables and life expectancy calculations
- Present value and annuity calculations
- Reserve valuation
- Risk assessment and modeling
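The mortality-table and annuity calculations above can be sketched directly: curtate life expectancy is the sum of survival probabilities, and the actuarial present value of a life annuity-due discounts each survival probability. The `qx` rates below are toy values, not a published mortality table:

```python
# Sketch: curtate life expectancy and life-annuity-due APV from a toy
# mortality table. The qx values are illustrative, not real mortality data.

def survival_probs(qx):
    """kpx: probability of surviving k more years, for k = 0..len(qx)."""
    probs = [1.0]
    for q in qx:
        probs.append(probs[-1] * (1.0 - q))
    return probs

def curtate_life_expectancy(qx):
    """e_x = sum over k >= 1 of kpx (truncated at the end of the table)."""
    return sum(survival_probs(qx)[1:])

def annuity_due_apv(qx, i):
    """a''_x = sum over k >= 0 of v^k * kpx, with v = 1 / (1 + i)."""
    v = 1.0 / (1.0 + i)
    return sum((v ** k) * p for k, p in enumerate(survival_probs(qx)[:-1]))

qx = [0.01 * (1.05 ** k) for k in range(60)]  # toy increasing mortality rates
print(round(curtate_life_expectancy(qx), 2))
print(round(annuity_due_apv(qx, 0.05), 2))
```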
### Medical Classification
- ICD-10 code lookup and explanation
- Cause-of-death classification
- Medical terminology
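ICD-10 lookup of the kind described above reduces to mapping a full code (e.g. `E11.9`) back to its three-character category. The sketch below hard-codes a handful of real ICD-10 category codes; a production system would load the full WHO 2019 classification instead:

```python
# Sketch: minimal ICD-10 category lookup. Only a few real category codes
# are included here for illustration; not a complete classification.

ICD10_CATEGORIES = {
    "E11": "Type 2 diabetes mellitus",
    "I21": "Acute myocardial infarction",
    "C34": "Malignant neoplasm of bronchus and lung",
    "J44": "Other chronic obstructive pulmonary disease",
}

def lookup_icd10(code):
    """Return the category description for a full code like 'E11.9'."""
    category = code.strip().upper().split(".")[0]
    return ICD10_CATEGORIES.get(category, "Unknown code")

print(lookup_icd10("e11.9"))  # -> Type 2 diabetes mellitus
```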
## Usage

### Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "MorbidCorp/Morbid-22B-Insurance-v020"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Mistral Instruct format. The tokenizer prepends the <s> BOS token itself,
# so it is omitted from the prompt string to avoid doubling it.
system = "You are Morbi, an expert AI assistant specializing in health and life insurance, actuarial science, and risk analysis."
user_msg = "What is the life expectancy for a 50-year-old male in the US?"
prompt = f"[INST] {system}\n\n{user_msg} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### With the Transformers Pipeline
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MorbidCorp/Morbid-22B-Insurance-v020",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Explain the difference between term and whole life insurance."},
]
output = pipe(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
The model was fine-tuned on a curated dataset including:
- Insurance product documentation and explanations
- Actuarial exam questions and solutions (SOA Exams P, FM, and IFM)
- Life expectancy and mortality data
- ICD-10 WHO 2019 medical classification codes
- Underwriting guidelines and risk assessment scenarios
- Regulatory compliance documentation
## Limitations
- This model is for informational purposes only
- Not a substitute for licensed professional advice
- Should not be used for final underwriting decisions
- May not reflect the most current regulatory requirements
- Life expectancy estimates are population averages, not individual predictions
## Hardware Requirements
- Minimum: 40GB VRAM (A100 40GB, A6000)
- Recommended: 80GB VRAM (A100 80GB, H100)
- Quantized (AWQ/GPTQ 4-bit): 24GB VRAM (RTX 4090, A10G)
## Citation

```bibtex
@misc{morbid2026,
  title={Morbid v0.2.0: Enterprise Insurance AI},
  author={MorbidCorp},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/MorbidCorp/Morbid-22B-Insurance-v020}
}
```
## Contact
For enterprise support and customization: MORBID.AI