Echo Legal Adapter

Part of the Echo Omega Prime AI engine collection: domain-specialized LoRA adapters built on Qwen2.5-7B-Instruct.

Overview

Legal analysis covering contract review, regulatory compliance, litigation risk assessment, and case law research.

Domain: Legal Analysis & Compliance

Training Details

Base Model: Qwen/Qwen2.5-7B-Instruct
Method: QLoRA (4-bit NF4 quantization + LoRA)
LoRA Rank (r): 16
LoRA Alpha: 32
Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Training Data: Legal doctrine blocks covering contract law, regulatory compliance, IP, employment law, and litigation strategy
Epochs: 3
Loss: converged
Adapter Size: ~38 MB
Framework: PEFT + Transformers + bitsandbytes
Precision: bf16 (adapter) / 4-bit NF4 (base during training)
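The training parameters above map directly onto a PEFT/bitsandbytes configuration. A minimal sketch of how such a QLoRA setup is typically assembled — the exact training arguments for this adapter are not published, and the dropout and double-quantization values below are assumptions, not stated facts:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,  # assumption: common in QLoRA recipes, not stated above
)

# LoRA hyperparameters matching the table above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,  # assumption: not stated above
    bias="none",
    task_type="CAUSAL_LM",
)
```

At training time, `bnb_config` would be passed as `quantization_config=` to `AutoModelForCausalLM.from_pretrained`, and the quantized model wrapped with `peft.get_peft_model(model, lora_config)` before fitting.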

Usage with PEFT

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Bmcbob76/echo-legal-adapter")

# Generate
messages = [
    {"role": "system", "content": "You are a domain expert in Legal Analysis & Compliance."},
    {"role": "user", "content": "Review this commercial lease agreement and identify potential liability exposure, missing protective clauses, and regulatory compliance gaps."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.3)

print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
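If you prefer to deploy without a PEFT dependency at inference time, the LoRA weights can be folded into the base model. A sketch continuing from the `model` and `tokenizer` loaded above (the output directory name is illustrative):

```python
# Fold the LoRA deltas into the base weights and drop the PEFT wrapper
merged_model = model.merge_and_unload()

# Save a standalone checkpoint; note this is full-model size (~15 GB in bf16),
# versus ~38 MB for the adapter alone
merged_model.save_pretrained("echo-legal-merged")
tokenizer.save_pretrained("echo-legal-merged")
```

The merged checkpoint loads with plain `AutoModelForCausalLM.from_pretrained`, at the cost of losing the ability to hot-swap this adapter against others on one base model.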

vLLM Multi-Adapter Serving

python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen2.5-7B-Instruct \
    --enable-lora \
    --lora-modules 'echo-legal-adapter=Bmcbob76/echo-legal-adapter'

Then query via OpenAI-compatible API:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")
response = client.chat.completions.create(
    model="echo-legal-adapter",
    messages=[
        {"role": "system", "content": "You are a domain expert in Legal Analysis & Compliance."},
        {"role": "user", "content": "Review this commercial lease agreement and identify potential liability exposure, missing protective clauses, and regulatory compliance gaps."},
    ],
    temperature=0.3,
    max_tokens=1024,
)
print(response.choices[0].message.content)
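The same endpoint can also be exercised without the Python client. A curl sketch, assuming the vLLM server from the previous section is listening on localhost:8000:

```shell
# List served models: the base model plus any registered LoRA modules
curl http://localhost:8000/v1/models

# Chat completion routed through the legal adapter
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "echo-legal-adapter",
        "messages": [
            {"role": "system", "content": "You are a domain expert in Legal Analysis & Compliance."},
            {"role": "user", "content": "Summarize the key indemnification risks in a commercial lease."}
        ],
        "temperature": 0.3,
        "max_tokens": 1024
    }'
```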

Echo Omega Prime Collection

This adapter is part of the Echo Omega Prime intelligence engine system: 2,600+ domain-specialized engines spanning law, engineering, medicine, cybersecurity, oil & gas, and more.

echo-titlehound-lora: Oil & Gas Title Examination
echo-doctrine-generator-qlora: AI Doctrine Generation
echo-landman-adapter: Landman Operations
echo-taxlaw-adapter: Tax Law & IRC
echo-legal-adapter: Legal Analysis
echo-realestate-adapter: Real Estate Law
echo-cyber-adapter: Cybersecurity
echo-engineering-adapter: Engineering Analysis
echo-medical-adapter: Medical & Clinical
echo-software-adapter: Software & DevOps

License

Apache 2.0

