---
language:
  - en
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
  - cybersecurity
  - application-security
  - pentesting
  - bug-bounty
  - security-reporting
  - gguf
---

# BugTraceAI-CORE-Pro (12B)

A higher-capacity security engineering model from BugTraceAI, tuned for deeper analysis, professional reporting, exploit-chain review, and long-context investigation across agentic web pentesting workflows.

## Model Overview

| Field | Value |
| --- | --- |
| Organization | BugTraceAI |
| Framework | BugTraceAI agentic web pentesting framework |
| Variant | BugTraceAI-CORE-Pro |
| Parameter Scale | 12B |
| Architecture | Mistral Nemo |
| Intended Domain | Application security and authorized security research |
| Primary Delivery Format | GGUF |

## Intended Use

- End-to-end analysis of web application findings in authorized environments.
- Drafting professional vulnerability reports and remediation guidance.
- Reasoning over larger technical contexts such as logs, source code, and findings bundles.

## Out-of-Scope Use

- Autonomous offensive operations against unauthorized targets.
- Replacing human validation of severity, exploitability, or business impact.
- Guaranteeing exploit reliability across target-specific environments.

## Training Data Summary

This model was tuned for security engineering workflows using a curated mix of public, security-focused material. The training mix is described at a high level below:

- Public vulnerability writeups and disclosed security reports, used to improve structure, reasoning, and reporting quality.
- Security methodology material, used to improve triage, reproduction planning, and remediation-oriented analysis.
- Domain examples covering common web application security patterns, defensive controls, and scanner-style findings.

This card intentionally describes the training data at a summary level. It should not be read as a guarantee of coverage for any individual product, CVE, target stack, or technique.

## Prompting Guidance

Recommended prompting style:

- State the environment and authorization context clearly.
- Provide concrete evidence: request, response, stack details, logs, code snippets, or scan output.
- Ask for one task at a time: triage, reproduction planning, impact analysis, remediation, or reporting.

Example tasks that fit this model:

- "Summarize why this finding is likely valid and what evidence is missing."
- "Rewrite this scanner output into a concise engineering ticket."
- "Draft remediation steps for this authorization bug or input validation issue."
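The guidance above can be sketched as a small prompt builder that packages environment, authorization, evidence, and exactly one task into a single request. The helper and field names are illustrative assumptions, not part of the BugTraceAI framework:

```python
# Sketch: assemble a single-task prompt from structured evidence.
# Field names and layout are illustrative assumptions.

def build_triage_prompt(environment: str, authorization: str,
                        evidence: dict[str, str], task: str) -> str:
    """Format environment, authorization, evidence, and one task into a prompt."""
    lines = [
        f"Environment: {environment}",
        f"Authorization: {authorization}",
        "Evidence:",
    ]
    for label, content in evidence.items():
        lines.append(f"- {label}:\n{content}")
    lines.append(f"Task: {task}")
    return "\n".join(lines)

prompt = build_triage_prompt(
    environment="Staging instance of the target web app",
    authorization="Scoped engagement, written authorization on file",
    evidence={
        "request": "POST /api/orders HTTP/1.1 ...",
        "response": "HTTP/1.1 500 ... SQL syntax error near ''",
    },
    task="Summarize why this finding is likely valid and what evidence is missing.",
)
print(prompt.splitlines()[0])  # → Environment: Staging instance of the target web app
```

Keeping one task per prompt mirrors the guidance above and makes it easier to review each answer against the evidence that was actually supplied.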

## Ollama Example

```
FROM hf.co/BugTraceAI/BugTraceAI-CORE-Pro

SYSTEM """
You are BugTraceAI-CORE-Pro, a security engineering assistant for authorized testing,
triage, and remediation support. Prefer precise technical analysis, state assumptions,
and separate confirmed evidence from hypotheses.
"""

PARAMETER temperature 0.1
PARAMETER top_p 0.9
```

Create the local model with:

```shell
ollama create bugtrace-pro -f Modelfile
```
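Once created, the local model can be queried directly; the prompt below is an illustrative example, and the command assumes a running Ollama install:

```shell
ollama run bugtrace-pro "Rewrite this scanner output into a concise engineering ticket: <paste finding here>"
```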

## Strengths

- Better long-context reasoning and report quality than the Fast variant.
- More suitable for multi-step analysis and vulnerability writeups.
- Stronger at connecting findings, evidence, and remediation paths.

## Limitations

- Higher latency and resource requirements than the Fast model.
- Still requires human review for high-risk decisions and disclosure quality.
- Performance depends on prompt quality and the evidence provided.

## Evaluation Status

This release is currently documented with qualitative positioning rather than a public benchmark suite. If you rely on the model for production workflows, validate it against your own prompt set, evidence format, and report quality bar.
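One way to validate against your own report quality bar is a simple structural check over drafted reports. This is a minimal sketch; the required section names are assumptions, not a BugTraceAI standard:

```python
# Sketch: check drafted reports against a house report-structure bar.
# The required sections below are illustrative assumptions.

REQUIRED_SECTIONS = ("Summary", "Evidence", "Impact", "Remediation")

def missing_sections(report: str) -> list[str]:
    """Return the required section headings absent from a drafted report."""
    return [s for s in REQUIRED_SECTIONS if s not in report]

draft = """Summary
The endpoint reflects unsanitized input.

Evidence
Request/response pair attached.

Impact
Session theft via injected payloads.

Remediation
Encode output; add a Content-Security-Policy."""

print(missing_sections(draft))  # → []
```

Checks like this catch structural regressions across prompt-set runs; content quality still needs human review, as noted above.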

## Safety and Responsible Use

This model is intended for authorized security work, defensive research, education, and engineering support. Users are responsible for ensuring legal authorization, validating outputs, and applying human review before acting on model-generated analysis.

## License

Apache-2.0.