Instructions to use TDAC/traclm-v4-7b-instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use TDAC/traclm-v4-7b-instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TDAC/traclm-v4-7b-instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TDAC/traclm-v4-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("TDAC/traclm-v4-7b-instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
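If you prefer the response printed token by token, the standard transformers TextStreamer utility can be passed to generate(). A minimal sketch reusing the tokenizer, model, and inputs from the snippet above:

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=40)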
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TDAC/traclm-v4-7b-instruct with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TDAC/traclm-v4-7b-instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TDAC/traclm-v4-7b-instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
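Once the server above is running, it can also be queried from Python with the openai client; a minimal sketch assuming vLLM's default port and no authentication (the API key is a placeholder unless the server was started with --api-key):

# pip install openai
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the key is a placeholder by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="TDAC/traclm-v4-7b-instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)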
- SGLang
How to use TDAC/traclm-v4-7b-instruct with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "TDAC/traclm-v4-7b-instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TDAC/traclm-v4-7b-instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "TDAC/traclm-v4-7b-instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TDAC/traclm-v4-7b-instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Docker Model Runner
How to use TDAC/traclm-v4-7b-instruct with Docker Model Runner:
docker model run hf.co/TDAC/traclm-v4-7b-instruct
See axolotl config
axolotl version: 0.8.0.dev0
base_model: /hf_downloads/models/Qwen/Qwen2.5-7B # container
# base_model: /usr/local/share/hf_downloads/meta-llama/Llama-3.2-1B-Instruct # local
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
strict: false
seed: 28
datasets:
  # path to dataset
  - path: /workspace/axolotl/project/data/datasets/traclm-data-v4/traclm-v4-slimorca.jsonl
    type: chat_template
    chat_template: tokenizer_default
    split: train
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
    train_on_eos: "turn" # mask all EOS tokens except assistant (if dataset has only single-turn convos, this could also be set to "last")
    # (an example record illustrating this schema is sketched after this config)
special_tokens:
  pad_token: <|pad_token|>
  eos_token: <|im_end|>
# now deprecated
#message_field_role: from
#message_field_content: value
dataset_prepared_path: /workspace/axolotl/project/axolotl_stuff/data_prepared/last_run_prepared # container
# dataset_prepared_path: /home/danielruiz/workspace/llm_lab/traclm/axolotl_stuff/data_prepared/last_run_prepared # local
val_set_size: 0.05
# list of datasets for eval (must comment out val_set_size) <-- NOT WORKING YET
# test_datasets:
# - path: /workspace/data/eval.jsonl
# ds_type: json
# # You need to specify a split. For "json" datasets the default split is called "train".
# split: train
# type: completion
# data_files:
# - /workspace/data/eval.jsonl
output_dir: /workspace/axolotl/project/axolotl_stuff/output/last_run # container
# output_dir: /home/danielruiz/workspace/llm_lab/traclm/axolotl_stuff/output/last_run # local
sequence_len: 4096 # qwen2.5 instruct max context length = 32768
sample_packing: true
# sample_packing_eff_est:
# total_num_tokens:
eval_sample_packing: true
pad_to_sequence_len: true
# have to log in via cli first
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: traclm-v4-7b-instruct # wandb project name
wandb_entity: nps-trac-mtry # wandb team name if using a team
wandb_name: # name of your wandb run, can keep blank
wandb_run_id: # ID of your wandb run, can keep blank
wandb_log_model: # "checkpoint" to log model every `save_steps`, "end" to log only at the end of training, keep blank to prevent sending model to wandb
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: false
# use gradient checkpointing when you are having OOM issues (slows training by ~20%)
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5 # loss value indicating the learning has broken down (a good estimate is 2x the loss at the start of training)
loss_watchdog_patience: 2 # num of high-loss steps in a row before the trainer aborts (default: 3)
save_safetensors: true
gradient_accumulation_steps: 16
micro_batch_size: 4
eval_batch_size: 4
optimizer: adamw_torch_fused
# lr estimation equation: ~[(base model LR) * sqrt((.5 x seq_len x num_gpu x micro_batch_size) / base model tokens per batch)]
# (7*10^-7) * √((.5*4096*8*64)/(2048*32768)) = ~8e-8
lr_scheduler: cosine #constant_with_warmup #constant #linear
learning_rate: 8e-6
# warmup_steps: 5 #50
warmup_ratio: .05
num_epochs: 3
eval_strategy: steps #"no" #"epoch"
eval_steps:
evals_per_epoch: 3
# eval_table_size:
save_steps:
saves_per_epoch: 1
save_strategy: "epoch" #"no" #"best"
#save_total_limit: 3
#max_steps: 10
debug:
deepspeed: /workspace/axolotl/project/axolotl_stuff/deepspeed/zero3.json
weight_decay: 0.1 # match qwen sft finetuning, 0.0 by default
fsdp:
# - full_shard
# - auto_wrap
fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: true
# fsdp_use_orig_params: false
# fsdp_cpu_ram_efficient_loading: true
# # activation_checkpointing: true # not sure if this works, but should be enabled when using fsdp in place of gradient_checkpointing above (only when gradient checkpointing required)
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_state_dict_type: FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_backward_prefetch: BACKWARD_PRE
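The dataset settings in the config above (field_messages: conversations, with from/value mapped onto role/content) imply ShareGPT-style records in the .jsonl file. A minimal sketch of what one such record could look like; the contents are hypothetical and only illustrate the schema, not the actual traclm-v4-slimorca.jsonl data:

import json

# Hypothetical record illustrating the ShareGPT-style schema implied by the dataset config;
# each line of the .jsonl file holds one JSON object shaped like this.
example_record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."},
    ]
}
print(json.dumps(example_record))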
Model Description
TRACLM is a Qwen2.5-7B large language model (LLM) fine-tuned for the U.S. Army.
Background
Proprietary and open-source LLMs lack sufficient knowledge of Army terminology to deliver maximum value, especially in analytic applications where unstructured text must be processed into actionable information. The TRACLM effort fine-tunes a performant, open-source, permissively licensed LLM on unclassified Army corpora, injecting domain-specific knowledge for downstream applications. It demonstrates a low-cost, repeatable path for creating domain-specific generative artificial intelligence (AI) capabilities within Department of Defense (DoD) networks.
Intended uses & limitations
More information needed
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 28
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 32
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 26
- num_epochs: 3.0
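The derived totals follow directly from the per-device settings; a quick arithmetic check using only the values listed above:

# effective batch size = per-device batch size x number of devices (x gradient accumulation for training)
micro_batch_size, num_devices, grad_accum_steps = 4, 8, 16
print(micro_batch_size * num_devices * grad_accum_steps)  # 512 -> total_train_batch_size
print(micro_batch_size * num_devices)                     # 32  -> total_eval_batch_size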
Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
Citations
TRACLM
@article{ruiz2024finetuning,
title={Fine-Tuning and Evaluating Open-Source Large Language Models for the Army Domain},
author={Ruiz, Daniel C. and Sell, John},
journal={arXiv preprint arXiv:2410.20297},
year={2024},
doi={10.48550/arXiv.2410.20297},
url={https://arxiv.org/abs/2410.20297}
}
Model tree for TDAC/traclm-v4-7b-instruct
Base model: Qwen/Qwen2.5-7B