# Model Card for fenra-V1
## Model Overview
fenra-V1 is a domain-specialized language model focused on procurement fraud detection, analysis, and related investigative tasks. It is currently under active development and fine-tuning using parameter-efficient techniques.
The model is designed to assist with identifying suspicious procurement patterns, generating investigative insights, and supporting analysts working in fraud detection and compliance domains.
> ⚠️ **Note:** Training is ongoing. The LoRA adapters have not yet been merged into the base model, and outputs may be unstable or inconsistent.
## Model Details
### Model Description
fenra-V1 is a fine-tuned variant of Phi-3-medium-4k-instruct, adapted using QLoRA (Quantized Low-Rank Adaptation) for efficient training and deployment. The model leverages domain-specific data related to procurement fraud scenarios to enhance its contextual understanding and response quality in this niche.
- Developed by: Fenra Project
- Model type: Causal Language Model (Instruction-tuned)
- Base model: unsloth/Phi-3-medium-4k-instruct
- Fine-tuning method: QLoRA (parameter-efficient fine-tuning)
- Training status: Ongoing (LoRA adapters not yet merged)
- Language(s): English
- License: MIT
### Model Sources
- Base Model: https://huggingface.co/unsloth/Phi-3-medium-4k-instruct
- Training Dataset: https://huggingface.co/datasets/monadgeek/fenra/
- Repository: To be added
- Demo: To be added
## Intended Uses
### Direct Use
fenra-V1 can be used for:
- Procurement fraud analysis
- Risk flagging in procurement documents
- Generating investigative summaries
- Question answering within fraud/compliance contexts
- Pattern recognition in suspicious transactions or tenders
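As an illustration of the direct-use cases above, a small helper could format a procurement record into an analysis prompt for the model. This is a minimal sketch, not part of the model's API; the field names in the example record are assumptions, not a fixed schema.

```python
def build_fraud_analysis_prompt(record: dict) -> str:
    """Format a procurement record into an analysis prompt for fenra-V1.

    The field names below are illustrative; adapt them to your data.
    """
    # Render each record field as a bullet point for the model to inspect.
    lines = [f"- {key}: {value}" for key, value in record.items()]
    return (
        "Analyze the following procurement record for fraud indicators.\n"
        "Flag any suspicious patterns and explain your reasoning.\n\n"
        + "\n".join(lines)
    )

# Hypothetical example record (values are illustrative only).
record = {
    "tender_id": "T-2024-0117",
    "bidders": 1,
    "award_amount": "500000.00",
    "days_open": 3,
}
print(build_fraud_analysis_prompt(record))
```

The resulting string can be passed as the `prompt` in the Getting Started example below.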
### Downstream Use
- Integration into fraud detection platforms
- Compliance and audit tooling
- Risk scoring pipelines
- Internal investigation assistants
### Out-of-Scope Use
This model is not suitable for:
- Legal decision-making without human oversight
- Financial or regulatory compliance automation without validation
- High-stakes decision systems
- General-purpose reasoning outside its domain
- Use cases requiring guaranteed factual accuracy
## Bias, Risks, and Limitations
### Limitations
- Training is ongoing; outputs may be inconsistent
- Domain bias toward procurement fraud scenarios
- May hallucinate or fabricate details
- Limited general-world knowledge beyond training scope
- Not evaluated for fairness across demographic groups
### Risks
- Misinterpretation of model outputs as factual evidence
- Over-reliance in investigative workflows
- Potential false positives in fraud detection scenarios
### Recommendations
- Always use human-in-the-loop verification
- Treat outputs as assistive, not authoritative
- Validate findings with external data sources
- Avoid use in automated enforcement or legal systems
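The human-in-the-loop recommendation above can be sketched as a thin wrapper that never forwards a model-generated lead without explicit analyst sign-off. This is an illustrative pattern only; the class and function names are assumptions, not part of any shipped tooling.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A model-generated fraud indicator, pending human review."""
    text: str
    reviewed: bool = False
    confirmed: bool = False

def triage(model_output: str) -> list[Finding]:
    # Treat each non-empty line of model output as an assistive lead,
    # never as authoritative evidence.
    return [Finding(text=line.strip())
            for line in model_output.splitlines() if line.strip()]

def confirmed_findings(findings: list[Finding]) -> list[Finding]:
    # Only leads a human analyst has both reviewed and confirmed move forward.
    return [f for f in findings if f.reviewed and f.confirmed]

leads = triage("Single bidder on a high-value tender.\n"
               "Award 3 days after posting.")
leads[0].reviewed = True
leads[0].confirmed = True
print([f.text for f in confirmed_findings(leads)])
# prints ['Single bidder on a high-value tender.']
```

The unconfirmed second lead is silently held back, which is the point: downstream systems only ever see analyst-validated findings.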
## Getting Started
Example usage (Transformers + PEFT):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/Phi-3-medium-4k-instruct"
lora_adapter = "your-fenra-lora-path"  # path to the fenra-V1 LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, lora_adapter)

prompt = "Analyze this procurement record for fraud indicators: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```