---
license: mit
datasets:
  - Taylor658/titan-hohmann-transfer-orbit
language:
  - en
base_model:
  - mistralai/Pixtral-12B-Base-2409
tags:
  - mistral
  - pixtral
  - vlm
  - multimodal
  - image-text-to-text
  - orbital-mechanics
  - hohmann-transfer-orbits
library_name: transformers
pipeline_tag: image-text-to-text
model_type: pixtral
---

# πŸš€ Pixtral 12B Fine-Tuned on Titan-Hohmann-Transfer-Orbit

✨ Based on `mistralai/Pixtral-12B-Base-2409`.

## 🌟 Overview

Fine-tuned variant of Pixtral 12B for orbital mechanics with emphasis on Hohmann transfer orbits. Supports multimodal (image + text) inputs and text outputs.

## πŸ”§ Model Details

  • Base: mistralai/Pixtral-12B-Base-2409
  • Type: πŸ–ΌοΈ Multimodal (Vision + Text)
  • Params: ~12B (decoder) + vision encoder
  • Languages: πŸ‡ΊπŸ‡Έ English
  • License: πŸ“„ MIT

## 🎯 Intended Use

  • πŸ›°οΈ Hohmann transfer βˆ†v estimation
  • ⏱️ Transfer-time approximations
  • πŸ” Orbit analysis aids and reasoning

## πŸš€ Quickstart

### 🌐 vLLM (multimodal)

```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

llm = LLM(model="mistralai/Pixtral-12B-Base-2409", tokenizer_mode="mistral")
sampling = SamplingParams(max_tokens=512, temperature=0.2)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Given this diagram, estimate the delta-v for a Hohmann transfer to Titan."},
            {"type": "image_url", "image_url": {"url": "https://example.com/orbit_diagram.png"}},
        ],
    }
]
resp = llm.chat(messages, sampling_params=sampling)
print(resp[0].outputs[0].text)
```

### πŸ€— Transformers (text-only demo)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Pixtral-12B-Base-2409"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Compute approximate delta-v for a Hohmann transfer to Titan. State assumptions."
inputs = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect during generation
out = model.generate(**inputs, max_new_tokens=512, temperature=0.2, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```

## πŸ“Š Training Data

  • Dataset: Taylor658/titan-hohmann-transfer-orbit
  • Modalities: πŸ“ text (explanations), πŸ’» code (snippets), πŸ–ΌοΈ images (orbital diagrams)

## ⚠️ Limitations

  • 🎯 Optimized for Hohmann transfers and related reasoning
  • πŸ’Ύ Requires sufficient GPU VRAM for best throughput

πŸ™ Acknowledgements

  • Base model by Mistral AI (Pixtral 12B)
  • Dataset by A Taylor

## πŸ“ž Contact Information