Instructions for using TRAC-MTRY/traclm-v3-7b-instruct with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use TRAC-MTRY/traclm-v3-7b-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TRAC-MTRY/traclm-v3-7b-instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TRAC-MTRY/traclm-v3-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("TRAC-MTRY/traclm-v3-7b-instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TRAC-MTRY/traclm-v3-7b-instruct with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TRAC-MTRY/traclm-v3-7b-instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TRAC-MTRY/traclm-v3-7b-instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/TRAC-MTRY/traclm-v3-7b-instruct
```
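The server speaks the OpenAI-compatible chat-completions schema, so the JSON body in the curl example can also be built programmatically. A minimal sketch (the `build_chat_request` helper is hypothetical, for illustration only):

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    # Hypothetical helper: assembles the same JSON body the curl
    # example sends to /v1/chat/completions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request(
    "TRAC-MTRY/traclm-v3-7b-instruct", "What is the capital of France?"
)
print(json.dumps(payload, indent=2))
```

The same payload works unchanged against any OpenAI-compatible endpoint, with only the base URL differing between servers.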
- SGLang
How to use TRAC-MTRY/traclm-v3-7b-instruct with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TRAC-MTRY/traclm-v3-7b-instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TRAC-MTRY/traclm-v3-7b-instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TRAC-MTRY/traclm-v3-7b-instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TRAC-MTRY/traclm-v3-7b-instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use TRAC-MTRY/traclm-v3-7b-instruct with Docker Model Runner:
```shell
docker model run hf.co/TRAC-MTRY/traclm-v3-7b-instruct
```
Model Card for traclm-v3-7b-instruct
An Army-domain finetune of mistralai/Mistral-7B-v0.1, created by finetuning on a merger of domain-specific and general-purpose instruction-tuning datasets.
Model Details
Model Description
This model is a research project aimed at exploring whether a pretrained LLM can acquire tangible domain-specific knowledge about the Army domain.
- Developed by: The Research and Analysis Center, Army Futures Command, U.S. Army
- License: MIT
- Model Type: MistralForCausalLM
- Finetuned from model: mistralai/Mistral-7B-v0.1
Available Quantizations (for running on low-resource hardware):
Downstream Use
This model is instruction-tuned and is thus more capable of following user instructions than its corresponding base version. However, it is still capable of extreme hallucination, so end users should verify all outputs.
Out-of-Scope Use
The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific large language models. Experiments involving direct application of this model to downstream military tasks are encouraged, but extreme caution should be exercised before moving it into production.
Prompt Format
This model was fine-tuned with the ChatML prompt format. It is highly recommended that you use the same format for any interactions with the model; failing to do so will significantly degrade performance.
ChatML Format:

```
<|im_start|>system
Provide some context and/or instructions to the model.
<|im_end|>
<|im_start|>user
The user's message goes here
<|im_end|>
<|im_start|>assistant
```
The ChatML format can easily be applied to text you plan to process with the model using the `chat_template` included in the tokenizer; see the Transformers chat templating documentation for additional information.
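For illustration, the structure above can also be assembled by hand. A minimal sketch (the `to_chatml` helper is hypothetical; in practice, prefer the tokenizer's `apply_chat_template`):

```python
def to_chatml(system: str, user: str) -> str:
    # Wrap each turn in <|im_start|>/<|im_end|> tokens and open the
    # assistant turn so the model generates the reply that follows it.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = to_chatml("You are a helpful assistant.", "Who are you?")
print(prompt)
```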
Training Details
Training Data
This model was trained on a shuffled merger of the following datasets:
- General Purpose Instruction Tuning: Open-Orca/SlimOrca-Dedup
- Domain Specific Instruction Tuning: TRAC-MTRY/traclm-v3-data (TBP)
Training Procedure
The model was trained using Open Access AI Collective's Axolotl framework and Microsoft's DeepSpeed framework for model/data parallelism.
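Axolotl runs are driven by a YAML config. The actual config for this model was not published; the fragment below is a hypothetical reconstruction using only details stated in this card (base model, datasets, and hyperparameters), with everything else illustrative:

```yaml
base_model: mistralai/Mistral-7B-v0.1   # from "Finetuned from model"
model_type: MistralForCausalLM

datasets:
  # shuffled merger of general-purpose and domain-specific data
  - path: Open-Orca/SlimOrca-Dedup
    type: sharegpt
  - path: TRAC-MTRY/traclm-v3-data
    type: sharegpt

learning_rate: 0.000002   # 2e-06, as listed under hyperparameters
lr_scheduler: cosine
num_epochs: 3
micro_batch_size: 4
seed: 28
deepspeed: deepspeed_configs/zero2.json   # illustrative path
```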
Training Hardware
Training was conducted on a single compute node with 4x NVIDIA A100 GPUs.
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 28
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 19
- num_epochs: 3
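As a sanity check on the figures above, the effective batch size follows directly from the per-device settings (assuming no gradient accumulation, which the listed totals imply):

```python
train_batch_size = 4   # per-device batch size, as listed above
num_devices = 4        # 4x A100 GPUs
grad_accum_steps = 1   # assumed; implied by 4 * 4 * 1 = 16

total_train_batch_size = train_batch_size * num_devices * grad_accum_steps
print(total_train_batch_size)  # 16, matching the value listed above
```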
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.5726 | 0.0 | 1 | 1.6102 |
| 1.2333 | 0.2 | 510 | 1.2477 |
| 1.1485 | 0.4 | 1020 | 1.2010 |
| 1.106 | 0.6 | 1530 | 1.1687 |
| 1.1772 | 0.8 | 2040 | 1.1419 |
| 1.1567 | 1.0 | 2550 | 1.1190 |
| 1.0359 | 1.19 | 3060 | 1.1130 |
| 0.945 | 1.39 | 3570 | 1.0977 |
| 0.9365 | 1.59 | 4080 | 1.0831 |
| 0.9334 | 1.79 | 4590 | 1.0721 |
| 0.8913 | 1.99 | 5100 | 1.0627 |
| 0.804 | 2.18 | 5610 | 1.0922 |
| 0.7892 | 2.38 | 6120 | 1.0888 |
| 0.7757 | 2.58 | 6630 | 1.0873 |
| 0.7797 | 2.78 | 7140 | 1.0864 |
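One way to read the table above: validation loss improves steadily through epoch 2, bottoms out at step 5100 (1.0627), and then drifts upward during epoch 3, a mild overfitting signal. A quick sketch that pulls the minimum out of the logged values:

```python
# Validation loss at each logged step (transcribed from the table above)
val_loss = {
    1: 1.6102, 510: 1.2477, 1020: 1.2010, 1530: 1.1687, 2040: 1.1419,
    2550: 1.1190, 3060: 1.1130, 3570: 1.0977, 4080: 1.0831,
    4590: 1.0721, 5100: 1.0627, 5610: 1.0922, 6120: 1.0888,
    6630: 1.0873, 7140: 1.0864,
}

# Step with the lowest validation loss
best_step = min(val_loss, key=val_loss.get)
print(best_step, val_loss[best_step])  # 5100 1.0627
```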
Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
Model Card Contact
MAJ Daniel C. Ruiz (daniel.ruiz@nps.edu)