Moral_Instruct

Have a Moral Question? Try out Moral Instruct!

Please keep hallucinations in mind, and don't let an LLM talk you into doing something you know is wrong.

Introduction

As LLMs see wider adoption, the decisions they make in different scenarios will carry more and more weight. To better understand and nudge models toward more moral choices, Moral Instruct applies Supervised Fine-Tuning (SFT), which improves performance on the moral questions dataset by 14%. Moral Instruct is intended to be lightweight, at ~1B parameters and ~4 GB, so that it can easily be run on home computers or edge devices.

Given a scenario, context, intention, and a set of choices, this model can propose the more moral option, based on the popular crowd-sourced benchmark. The hope is that it can be used as a lightweight option to help guide a moral choice, or as part of a Mixture of Experts.

The Project GitHub can be accessed here

Training Data

The training data was taken from the Moral Stories dataset, which is an implementation of this paper.

For more information on the structure of the dataset, see this GitHub README.

The model was trained using a train-test-holdout validation scheme, summarized below.

A global random seed of 1337 was set at every opportunity so that the model and training are fully reproducible.

This random seed can be adjusted in the project .env file

| Split | n | Comment |
| --- | --- | --- |
| Train | 9600 | Shuffled |
| Test | 1400 | Used to select hyperparameters |
| Holdout | 50 | Used to evaluate general metrics (~2 min to run) |
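
A minimal sketch of how the splits could be produced with that seed (the dataset id, config name, and the RANDOM_SEED variable name are assumptions; the exact code lives in the project notebooks):

import os
from datasets import load_dataset

SEED = int(os.getenv("RANDOM_SEED", "1337"))  # hypothetical .env variable name

# Moral Stories as published on the Hugging Face Hub (id and config assumed)
ds = load_dataset("demelin/moral_stories", "full")["train"].shuffle(seed=SEED)

# Carve out splits matching the sizes in the table above
train = ds.select(range(9600))
test = ds.select(range(9600, 11000))
holdout = ds.select(range(11000, 11050))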

The training dataset was adjusted so that the task became a supervised learning problem in which the model must choose the "right" answer. The choices were shuffled randomly so that one side did not dominate.
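
As an illustration, each record could be turned into a prompt-plus-target training example along these lines (a hedged sketch: the field names follow the Moral Stories schema, but the exact template used in training lives in the notebooks):

import random

random.seed(1337)  # the global seed noted above

def to_supervised_example(record):
    """Sketch: build one training example where the target is the moral
    action and the two candidate actions are randomly shuffled so that
    neither side of the choice dominates."""
    choices = [record["moral_action"], record["immoral_action"]]
    random.shuffle(choices)
    prompt = (
        f"Given the Following {record['norm']} {record['situation']} "
        f"{record['intention']} - which is the more moral choice? "
        f"{choices}? Why? A:"
    )
    return {"text": prompt + " " + record["moral_action"]}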

Training Method

Given that the models were small enough to fit on a laptop, and the goal was to maximize performance on the Moral Stories benchmark, a full fine-tuning approach was used.

The training notebooks for the final models can be found here

  1. Falcon3-1B

  2. gemma-3-1B-pt

  3. GPT2-XL

The hardware varied from run to run, but training was done on either an NVIDIA A100 or an A40 GPU.
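
For reference, here is a minimal sketch of what the full fine-tune could look like with the Hugging Face Trainer, reusing the splits and the to_supervised_example helper sketched above; the tokenization details are assumptions, and only the stand-out hyperparameters from the next section are spelled out:

import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model_id = "tiiuae/Falcon3-1B-Base"  # one of the three base models tried

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

train_tok = train.map(to_supervised_example).map(tokenize)
test_tok = test.map(to_supervised_example).map(tokenize)

args = TrainingArguments(
    output_dir="../scratch/trained_models/best_model",
    max_steps=10_000,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    bf16=True,
    eval_strategy="steps",
    eval_steps=100,
    load_best_model_at_end=True,
    seed=1337,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_tok,
    eval_dataset=test_tok,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()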

Hyper Parameter Values

Some stand-out values:

| Parameter | Value |
| --- | --- |
| max_steps | 10000 |
| learning_rate | 0.00001 |
| warmup_ratio | 0.1 |
| per_device_eval_batch_size | 16 |
| per_device_train_batch_size | 64 |
Expand to see the full list of hyperparameters:

TrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=True,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_steps=100,
eval_strategy=IntervalStrategy.STEPS,
eval_use_gather_object=False,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=False,
group_by_length=True,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.00001,
length_column_name=length,
load_best_model_at_end=True,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=./logs,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=100,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=10000,
metric_for_best_model=loss,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=2,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=../scratch/trained_models/best_model,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=64,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=tiiuae/Falcon3-1B-Base_0.001_2000,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=1000,
save_strategy=SaveStrategy.STEPS,
save_total_limit=None,
seed=1337,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.1,
warmup_steps=0,
weight_decay=9999,
)

The hyperparameters were selected based on performance on the test set, and the final models were selected based on performance on the holdout set.
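
A hedged sketch of that selection loop (the candidate values and the train_and_score helper are hypothetical stand-ins for the sweep in the training notebooks):

# Hypothetical sweep: keep the learning rate that scores best on the test split
candidate_lrs = [1e-4, 3e-5, 1e-5]

best_lr, best_acc = None, -1.0
for lr in candidate_lrs:
    # train_and_score is a hypothetical helper: fine-tune a fresh copy of the
    # base model with this learning rate and return two-choice accuracy on the
    # 1400-example test split
    acc = train_and_score(lr)
    if acc > best_acc:
        best_lr, best_acc = lr, acc

# The winning model is then checked once against the 50-example holdout split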

Evaluation

Metrics

When choosing the benchmark metrics, I wanted to understand how well the model handles 1) general language and 2) moral/legal language. The metrics were selected to weigh what Moral Instruct loses in related areas against what it gains in being able to give a moral answer to a problem. All of the questions were shuffled and had yes/no or correct/incorrect answers that were scored. The popular Massive Multitask Language Understanding (MMLU) benchmark offers several relevant subsets as well as a good proxy for general utility; they are summarized below.

Metric definitions:

  1. Moral Stories: a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, and their respective consequences paper. It is used to evaluate models on their ability to understand and generate morally grounded narratives. Baseline 0.5 as random guessing.

  2. MMLU Moral Stories: This metric is part of the Massive Multitask Language Understanding (MMLU) benchmark, specifically focusing on moral scenarios. It evaluates models on their ability to discern morally right and wrong actions in given scenarios hugging face data card.

  3. MMLU: a comprehensive benchmark that evaluates the capabilities of large language models across 57 subjects, including STEM fields, humanities, social sciences, and professional disciplines wikipedia. It tests both knowledge breadth and reasoning capabilities through multiple-choice questions. Baseline 0.25 as random guessing.

  4. MMLU Jurisprudence: This subset of the MMLU benchmark focuses on legal reasoning and knowledge. It includes questions related to law and jurisprudence, testing models on their understanding of legal principles and their application info.

  5. MMLU Moral Disputes: evaluates models on their ability to navigate complex moral disputes. It includes questions that require understanding and reasoning about ethical dilemmas and moral arguments info.

  6. MMLU Logical Fallacies: This subset of the MMLU benchmark tests models on their ability to identify and understand logical fallacies. It includes questions that require recognizing flawed reasoning and argumentation info.

The evaluation was run with lm_eval (the LM Evaluation Harness), and the metric scores are summarized below.
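
For reference, a sketch of how such a run might be invoked from Python; the MMLU task names below are assumptions about the harness's identifiers, and only the MMLU subsets are listed:

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=rahkaro/Moral_Instruct",
    tasks=[
        "mmlu_moral_scenarios",
        "mmlu_jurisprudence",
        "mmlu_moral_disputes",
        "mmlu_logical_fallacies",
    ],
    batch_size=16,
)
print(results["results"])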

The comparison models were all around the same parameter count (1B to 1.7B); Gemma and GPT were included because people tend to take their moral questions to either Google or ChatGPT.

The performance is summarized below:

Before Metrics

| Model | Moral Stories | MMLU Moral Stories | MMLU | MMLU Jurisprudence | MMLU Moral Disputes | MMLU Logical Fallacies |
| --- | --- | --- | --- | --- | --- | --- |
| Falcon3-1B | 0.66 | 0.58 | 0.56 | 0.52 | 0.45 | 0.26 |
| Gemma3-1B | 0.52 | 0.34 | 0.26 | 0.22 | 0.18 | 0.16 |
| GPT-xl | 0.58 | 0.34 | 0.30 | 0.28 | 0.26 | 0.26 |

After Metrics

| Model | Moral Stories | MMLU Moral Stories | MMLU | MMLU Jurisprudence | MMLU Moral Disputes | MMLU Logical Fallacies | Difference in Moral Stories |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Falcon3-1B | 0.80 | 0.28 | 0.28 | 0.24 | 0.22 | 0.22 | 0.14 |
| Gemma3-1B | 0.56 | 0.26 | 0.24 | 0.20 | 0.20 | 0.18 | 0.04 |
| GPT-xl | 0.70 | 0.34 | 0.34 | 0.32 | 0.25 | 0.18 | 0.12 |

The final model selected was Falcon3-1B, which not only had the highest score on the Moral Stories dataset but also showed the largest improvement.

Usage and Intended Uses

You can find the notebook with examples here.

This model is intended to be given background (the situation), an intention, what is considered normal (the norm), and a set of two choices to choose between. The code to do so is presented below.

# !mkdir model-falcon
# !cd model-falcon && git clone https://huggingface.co/rahkaro/Moral_Instruct && git-lfs fetch --all

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline


model_name = "rahkaro/Moral_Instruct"
local_path = "model-falcon/Moral_Instruct/"

# Load the fine-tuned weights with a causal-LM head so the text-generation
# pipeline can drive them (pass model_name instead of local_path to pull
# straight from the Hub)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True, use_safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)


pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,  # adjust if needed
    "return_full_text": False,
    "do_sample": False,     # greedy decoding for a deterministic answer
}


moral_dict = {
    "norm": "it isn't right to take cookies without asking",
    "situation": "you see a bunch of cookies cooling on the window, nobody is around",
    "intention": "you are really hungry and have a really big sweet tooth, nobody would mind a cookie or two gone",
    "moral_action": "you knock on the door and ask for a cookie",
    "immoral_action": "you just take a cookie, one cookie couldn't hurt?",
}

moral_prompt = (
    f"Given the Following {moral_dict['norm']} {moral_dict['situation']} "
    f"{moral_dict['intention']} - which is the more moral choice? "
    f"{[moral_dict['moral_action'], moral_dict['immoral_action']]}? Why? A:"
)

output = pipe(moral_prompt, **generation_args)
print(output[0]["generated_text"])

Prompt Format

The user gives the following:

  • norm - A guideline for social conduct generally observed by most people in everyday situations.
  • situation - Setting of the story that introduces story participants and describes their environment.
  • intention - Reasonable goal that one of the story participants (the actor), wants to fulfill.
  • a moral action and an immoral action - two choices where one is good and one is bad, or at least bad and not as bad :)

Read more about the dataset here. The general prompt code is as follows:



moral_dict = {
    "norm": "it isn't right to take cookies without asking",
    "situation": "you see a bunch of cookies cooling on the window, nobody is around",
    "intention": "you are really hungry and have a really big sweet tooth, nobody would mind a cookie or two gone",
    "moral_action": "you knock on the door and ask for a cookie",
    "immoral_action": "you just take a cookie, one cookie couldn't hurt?",
}

moral_prompt = (
    f"Given the Following {moral_dict['norm']} {moral_dict['situation']} "
    f"{moral_dict['intention']} - which is the more moral choice? "
    f"{[moral_dict['moral_action'], moral_dict['immoral_action']]}? Why? A:"
)

Expected output

You would expect some sort of string along the lines of:

you should ask for the cookie, even though you are hungry it isn't right to take without asking.

Testing outside of the Moral Stories questions is limited, but you should get an answer and a reason; please read the reason carefully.
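
Since the reply is free-form text, a light sanity check can help flag which candidate the answer leans toward. This is a hedged sketch that reuses output and moral_dict from the usage snippet above; the substring heuristic is only an illustration, not part of the model:

answer = output[0]["generated_text"].lower()

# Crude heuristic: check which of the two candidate actions the answer echoes
mentions_moral = moral_dict["moral_action"].lower() in answer
mentions_immoral = moral_dict["immoral_action"].lower() in answer

if mentions_moral and not mentions_immoral:
    print("Answer leans toward the first (moral) action.")
elif mentions_immoral and not mentions_moral:
    print("Answer leans toward the other action - read its reasoning carefully.")
else:
    print("Ambiguous answer - read the full response and judge for yourself.")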

Limitations

As the degradation in performance on the other tasks shows, this model is really only suited for situations where there is a norm, a situation, an intention, and two choices. Moral Instruct is built to do well on the Moral Stories dataset, and there has been limited testing outside of that scope. The model can still hallucinate, and despite the robust crowd-sourced nature of the underlying dataset, the data is only a snapshot of norms and mores, which are constantly changing.
