# Cognitive QG - LLaMA 2 7B Chat QLoRA
A QLoRA fine-tune of LLaMA 2 7B Chat for structured cognitive analysis and Socratic question generation, based on Facione's Critical Thinking Framework.
## Model Details
- Base Model: meta-llama/Llama-2-7b-chat-hf
- Method: QLoRA (4-bit NF4 quantization + LoRA adapters)
- LoRA Config: r=16, alpha=32, dropout=0.1, target=all linear layers
- Training: 30 annotated arguments, 15 epochs, lr=1e-4, cosine schedule
- Framework: Facione's Critical Thinking (Interpretation, Analysis, Inference, Evaluation, Explanation, Self-Regulation)
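The LoRA hyperparameters above map to `peft` roughly as follows. This is a sketch, not the exact training script: the `target_modules` list is an assumption for "all linear layers" on a Llama-2 architecture.

```python
from peft import LoraConfig

# Adapter configuration matching the values listed above. The target_modules
# list is an assumption: "all linear layers" on Llama-2 usually means the
# seven attention and MLP projections below.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```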
## Task
Given an argument, the model produces a structured analysis across six cognitive phases and generates up to three Socratic questions that probe weaknesses in the reasoning.
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Pothong/cognitive-qg-llama2-lora")
model.eval()

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```
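With the model and tokenizer loaded, inference can be sketched as below. The instruction wording inside the `[INST]` template is an assumption; match whatever prompt format was used during fine-tuning.

```python
def build_prompt(argument: str) -> str:
    """Wrap an argument in a Llama-2 chat prompt.

    The instruction text is an assumption -- adjust it to the template
    the adapter was actually trained with.
    """
    return (
        "[INST] Analyze the following argument using Facione's Critical "
        "Thinking Framework, then generate Socratic questions.\n\n"
        f"Argument: {argument} [/INST]"
    )

def analyze(model, tokenizer, argument: str, max_new_tokens: int = 512) -> str:
    """Generate the structured analysis for a single argument."""
    import torch  # imported lazily so build_prompt works without torch installed

    inputs = tokenizer(build_prompt(argument), return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            do_sample=True,
        )
    # Strip the prompt tokens so only the generated analysis is returned
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example: `print(analyze(model, tokenizer, "School uniforms should be mandatory because they reduce bullying."))`.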
## Output Format
The model outputs structured text with sections:
- INTERPRETATION (stance, knowledge domain)
- ANALYSIS (core claim, premise, reasoning type)
- INFERENCE (consequences, alternatives)
- EVALUATION (strength, credibility, fallacies)
- EXPLANATION (reasoning structure, justification)
- SELF-REGULATION (bias detection, revision)
- SOCRATIC QUESTIONS (up to 3)
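Because the output is plain structured text, downstream code has to split it into sections itself. A minimal parser sketch, assuming each section name appears on its own line (optionally followed by a colon); real outputs may vary, so treat this as a starting point:

```python
SECTIONS = [
    "INTERPRETATION", "ANALYSIS", "INFERENCE", "EVALUATION",
    "EXPLANATION", "SELF-REGULATION", "SOCRATIC QUESTIONS",
]

def parse_analysis(text: str) -> dict:
    """Split the model's structured output into {section_name: content}.

    Assumes headers like 'ANALYSIS' or 'ANALYSIS:' on their own line;
    lines before the first recognized header are ignored.
    """
    result = {}
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        header = stripped.rstrip(":").strip()
        if header in SECTIONS:
            current = header
            result[current] = []
        elif current is not None and stripped:
            result[current].append(stripped)
    return {name: "\n".join(lines) for name, lines in result.items()}
```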