Update README.md

#10
by Taylor658 - opened
Files changed (1)
  1. README.md +31 -37
README.md CHANGED
@@ -14,42 +14,36 @@ tags:
  - image-text-to-text
  - orbital-mechanics
  - hohmann-transfer-orbits
- ---
- language:
- - en
- license: mit
  library_name: transformers
  pipeline_tag: image-text-to-text
- base_model: mistralai/Pixtral-12B-Base-2409
- datasets:
- - Taylor658/titan-hohmann-transfer-orbit
+ model_type: pixtral
  ---

- # Pixtral 12B Fine-Tuned on Titan-Hohmann-Transfer-Orbit
+ # πŸš€ Pixtral 12B Fine-Tuned on Titan-Hohmann-Transfer-Orbit

- > Updated to the latest suitable Mistral multimodal base: `mistralai/Pixtral-12B-Base-2409`.
+ > ✨ **Updated** to the latest suitable Mistral multimodal base: `mistralai/Pixtral-12B-Base-2409`.

- ## Overview
+ ## 🌟 Overview

- Fine-tuned variant of Pixtral 12B for orbital mechanics with emphasis on Hohmann transfer orbits. Supports multimodal (image + text) inputs and text outputs.
+ Fine-tuned variant of **Pixtral 12B** for **orbital mechanics** with emphasis on **Hohmann transfer orbits**. Supports multimodal (image + text) inputs and text outputs.

- ## Model Details
+ ## πŸ”§ Model Details

- - Base: `mistralai/Pixtral-12B-Base-2409`
- - Type: Multimodal (Vision + Text)
- - Params: ~12B (decoder) + vision encoder
- - Languages: English
- - License: MIT
+ - **Base**: `mistralai/Pixtral-12B-Base-2409`
+ - **Type**: πŸ–ΌοΈ Multimodal (Vision + Text)
+ - **Params**: ~12B (decoder) + vision encoder
+ - **Languages**: πŸ‡ΊπŸ‡Έ English
+ - **License**: πŸ“„ MIT

- ## Intended Use
+ ## 🎯 Intended Use

- - Hohmann transfer βˆ†v estimation
- - Transfer-time approximations
- - Orbit analysis aids and reasoning
+ - πŸ›°οΈ Hohmann transfer βˆ†v estimation
+ - ⏱️ Transfer-time approximations
+ - πŸ” Orbit analysis aids and reasoning

- ## Quickstart
+ ## πŸš€ Quickstart

- ### vLLM (multimodal)
+ ### 🌐 vLLM (multimodal)
  ```python
  from vllm import LLM
  from vllm.sampling_params import SamplingParams
@@ -70,7 +64,7 @@ resp = llm.chat(messages, sampling_params=sampling)
  print(resp[0].outputs[0].text)
  ```

- ### Transformers (text-only demo)
+ ### πŸ€— Transformers (text-only demo)
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch
@@ -85,25 +79,25 @@ out = model.generate(**inputs, max_new_tokens=512, temperature=0.2)
  print(tok.decode(out[0], skip_special_tokens=True))
  ```

- ## Training Data
+ ## πŸ“Š Training Data

- - Dataset: `Taylor658/titan-hohmann-transfer-orbit`
- - Modalities: text (explanations), code (snippets), images (orbital diagrams)
+ - **Dataset**: `Taylor658/titan-hohmann-transfer-orbit`
+ - **Modalities**: πŸ“ text (explanations), πŸ’» code (snippets), πŸ–ΌοΈ images (orbital diagrams)

- ## Limitations
+ ## ⚠️ Limitations

- - Optimized for Hohmann transfers and related reasoning
- - Requires sufficient GPU VRAM for best throughput
+ - 🎯 Optimized for Hohmann transfers and related reasoning
+ - πŸ’Ύ Requires sufficient GPU VRAM for best throughput

- ## Acknowledgements
+ ## πŸ™ Acknowledgements

- - Base model by Mistral AI (Pixtral 12B)
- - Dataset by A Taylor
+ - **Base model** by Mistral AI (Pixtral 12B)
+ - **Dataset** by A Taylor

- ### Contact Information
+ ### πŸ“ž Contact Information

- - **Author**: A Taylor
- - **Email**
- - **Repository**: https://github.com/ATaylorAerospace/HohmannHET
+ - **Author**: πŸ‘¨β€πŸš€ A Taylor
+ - **Email**: πŸ“§
+ - **Repository**: πŸ”— https://github.com/ATaylorAerospace/HohmannHET

  ---
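
The "Hohmann transfer βˆ†v estimation" and "transfer-time approximations" named in the README's Intended Use section can be cross-checked against the standard closed-form result. The sketch below is illustrative and not part of this PR or the model; it applies the vis-viva equation to coplanar circular orbits, with a LEO-to-GEO example whose constants (Earth's gravitational parameter, the two orbit radii) are assumptions chosen for the demo:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2


def hohmann(mu: float, r1: float, r2: float) -> tuple[float, float]:
    """Total delta-v (m/s) and transfer time (s) for a Hohmann transfer
    between coplanar circular orbits of radii r1 and r2 (meters)."""
    a = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    v1 = math.sqrt(mu / r1)  # circular speed in the departure orbit
    v2 = math.sqrt(mu / r2)  # circular speed in the arrival orbit
    # Vis-viva speeds on the transfer ellipse at periapsis and apoapsis
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a))
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a))
    dv_total = abs(v_peri - v1) + abs(v2 - v_apo)  # two impulsive burns
    transfer_time = math.pi * math.sqrt(a**3 / mu)  # half the ellipse period
    return dv_total, transfer_time


# Example: ~300 km LEO (r β‰ˆ 6678 km) to GEO (r β‰ˆ 42164 km)
dv, t = hohmann(MU_EARTH, 6.678e6, 4.2164e7)
print(f"total delta-v β‰ˆ {dv:.0f} m/s, transfer time β‰ˆ {t / 3600:.2f} h")
```

For this example the sketch yields roughly 3.9 km/s of total βˆ†v and a coast of about 5.3 hours, in line with textbook LEO-to-GEO figures, which makes it a quick sanity check for the model's numeric outputs.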