---
license: mit
datasets:
- Taylor658/titan-hohmann-transfer-orbit
language:
- en
base_model:
- mistralai/Pixtral-12B-Base-2409
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- mistral
- pixtral
- vlm
- multimodal
- image-text-to-text
- orbital-mechanics
- hohmann-transfer-orbits
---
# Pixtral 12B Fine-Tuned on Titan-Hohmann-Transfer-Orbit

Updated to the latest suitable Mistral multimodal base: `mistralai/Pixtral-12B-Base-2409`.
## Overview

A fine-tuned variant of Pixtral 12B for orbital mechanics, with an emphasis on Hohmann transfer orbits. It accepts multimodal (image + text) inputs and produces text outputs.
## Model Details

- Base: `mistralai/Pixtral-12B-Base-2409`
- Type: Multimodal (vision + text)
- Parameters: ~12B (decoder) + vision encoder
- Languages: English
- License: MIT
## Intended Use

- Hohmann transfer Δv estimation
- Transfer-time approximations
- Orbit-analysis aids and reasoning
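As a sanity check on the model's outputs, the underlying two-body relations are easy to compute directly. The sketch below is illustrative and not part of the model or dataset: it assumes coplanar circular orbits at 1 AU and at Saturn's mean distance (~9.58 AU, standing in for Titan's neighborhood), and it ignores planetary escape, capture, and Titan's own orbital motion.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann(r1: float, r2: float, mu: float = MU_SUN):
    """Total delta-v (m/s) and transfer time (s) for a coplanar
    circular-to-circular Hohmann transfer from radius r1 to r2."""
    a_t = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    # Injection burn: circular speed at r1 times (perihelion-speed ratio - 1)
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)
    # Circularization burn at r2
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))
    # Transfer time is half the period of the transfer ellipse
    t = math.pi * math.sqrt(a_t**3 / mu)
    return dv1 + dv2, t

# Earth's orbit to Saturn's orbit, heliocentric two-body only:
dv, t = hohmann(1.0 * AU, 9.58 * AU)
print(f"total delta-v ~ {dv / 1000:.1f} km/s, transfer time ~ {t / 86400 / 365.25:.1f} years")
```

Under these assumptions the totals come out near 15.7 km/s and about 6.1 years, a useful order-of-magnitude reference when evaluating the model's answers.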
## Quickstart

### vLLM (multimodal)
```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

# Load the model with Mistral's tokenizer and chat handling.
llm = LLM(model="mistralai/Pixtral-12B-Base-2409", tokenizer_mode="mistral")
sampling = SamplingParams(max_tokens=512, temperature=0.2)

# One user turn mixing text with an image URL (replace with your own diagram).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Given this diagram, estimate the delta-v for a Hohmann transfer to Titan."},
            {"type": "image_url", "image_url": {"url": "https://example.com/orbit_diagram.png"}},
        ],
    }
]

resp = llm.chat(messages, sampling_params=sampling)
print(resp[0].outputs[0].text)
```
### Transformers (text-only demo)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Pixtral-12B-Base-2409"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Compute approximate delta-v for a Hohmann transfer to Titan. State assumptions."
inputs = tok(prompt, return_tensors="pt").to(model.device)

# temperature only takes effect when sampling is enabled
out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
print(tok.decode(out[0], skip_special_tokens=True))
```
## Training Data

- Dataset: `Taylor658/titan-hohmann-transfer-orbit`
- Modalities: text (explanations), code (snippets), images (orbital diagrams)
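A minimal sketch for inspecting the dataset with the Hugging Face `datasets` library. The `train` split name is an assumption, and the schema is printed rather than hard-coded; check the dataset card for the actual layout.

```python
from datasets import load_dataset

# Stream records instead of downloading the full dataset up front.
ds = load_dataset("Taylor658/titan-hohmann-transfer-orbit", split="train", streaming=True)

# Peek at the first record to discover the actual field names.
first = next(iter(ds))
print(first.keys())
```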
## Limitations

- Specialized for Hohmann transfers and closely related orbital-mechanics reasoning; accuracy may degrade on unrelated tasks
- As a ~12B-parameter multimodal model, it needs roughly 25 GB of GPU memory for the weights alone in 16-bit precision; lower-VRAM setups will require quantization or offloading
## Acknowledgements

- Base model by Mistral AI (Pixtral 12B)
- Dataset by A Taylor
## Contact Information

- Author: A Taylor
- Repository: https://github.com/ATaylorAerospace/HohmannHET