C-MORAL Mistral GRPO Adapters

This repository contains LoRA adapters released for C-MORAL:

C-MORAL: Controllable Multi-Objective Molecular Optimization with Reinforcement Alignment for LLMs

These adapters were trained on top of:

  • mistralai/Mistral-7B-Instruct-v0.3

using GRPO (Group Relative Policy Optimization) for controllable multi-objective molecular optimization.

Available Task Subfolders

Each task is stored as a separate subfolder of this Hugging Face repository; the folder name abbreviates the combination of optimization objectives (a machine-readable mapping is sketched after the list).

  • abmp: amp+bbbp+mutag+plogp
  • acep: amp+carc+herg+plogp
  • bcmq: bbbp+carc+mutag+qed
  • bdeq: bbbp+drd2+herg+qed
  • bdpq: bbbp+drd2+qed+plogp
  • bpq: bbbp+plogp+qed
  • cde: carc+drd2+herg
  • dhmq: drd2+hia+mutag+qed
  • elq: herg+liv+qed
  • hlmpq: hia+liv+mutag+plogp+qed
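
For scripted experiments, the folder-name-to-objectives mapping above can be kept in code. The dictionary below only restates the list; the variable name TASK_OBJECTIVES is illustrative.

# Folder names and objective abbreviations copied from the list above;
# the variable name is illustrative.
TASK_OBJECTIVES = {
    "abmp": ["amp", "bbbp", "mutag", "plogp"],
    "acep": ["amp", "carc", "herg", "plogp"],
    "bcmq": ["bbbp", "carc", "mutag", "qed"],
    "bdeq": ["bbbp", "drd2", "herg", "qed"],
    "bdpq": ["bbbp", "drd2", "qed", "plogp"],
    "bpq": ["bbbp", "plogp", "qed"],
    "cde": ["carc", "drd2", "herg"],
    "dhmq": ["drd2", "hia", "mutag", "qed"],
    "elq": ["herg", "liv", "qed"],
    "hlmpq": ["hia", "liv", "mutag", "plogp", "qed"],
}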

Usage

Load a task-specific adapter with PEFT by passing the desired task name as subfolder; a minimal generation example follows the loading snippet.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_repo = "Rwigle/C-MORAL-Mistral-GRPO"
task_subfolder = "bpq"  # change to abmp / elq / hlmpq / ...

# Load the tokenizer and base model, then attach the task-specific LoRA adapter
# from the chosen subfolder.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_repo, subfolder=task_subfolder)
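
Once loaded, the adapter behaves like a standard Mistral-Instruct chat model. The snippet below is a minimal generation sketch; the prompt wording and sampling settings are illustrative assumptions, not the prompt format used by C-MORAL.

# Minimal generation sketch; prompt wording and sampling settings are assumptions.
prompt = "Propose a modification of the molecule CCO that improves BBBP, plogP, and QED."
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))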

Method

  • Base model: mistralai/Mistral-7B-Instruct-v0.3
  • Adapter type: LoRA
  • Training algorithm: GRPO (a hedged training sketch follows this list)
  • Domain: multi-objective molecular optimization
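
For readers who want to reproduce a comparable setup, the sketch below shows one way to combine LoRA with GRPO using TRL's GRPOTrainer. The reward function, dataset, and all hyperparameters are placeholders, not the configuration used to produce the released adapters.

# Hedged sketch of GRPO + LoRA fine-tuning with TRL; the reward function,
# dataset, and hyperparameters are placeholders, not the released setup.
from datasets import Dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

def reward_fn(completions, **kwargs):
    # Placeholder reward: C-MORAL would instead score the generated molecules
    # against the task's property objectives (e.g. bbbp, plogp, qed).
    return [float(len(c) > 0) for c in completions]

dataset = Dataset.from_dict({"prompt": ["Optimize the molecule CCO for QED and plogP."]})

trainer = GRPOTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    reward_funcs=reward_fn,
    args=GRPOConfig(output_dir="grpo-lora-sketch", num_generations=4, max_completion_length=128),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()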

Project

  • GitHub: https://github.com/Rwigie/C-MORAL

Citation

If you use these adapters, please cite:

C-MORAL: Controllable Multi-Objective Molecular Optimization with
Reinforcement Alignment for LLMs