
CMMC Compliance AI Benchmark v1.0

The first standardized benchmark for evaluating AI models on cybersecurity compliance knowledge. 46 questions across 9 tiers testing factual recall, document generation, gap analysis, cross-framework mapping, boundary detection, and regulatory awareness.

Built by Memoriant, Inc.

Why This Exists

There is no standardized way to evaluate whether an AI model actually understands cybersecurity compliance. General-purpose benchmarks such as MMLU, GPQA, and GAIA measure broad knowledge and reasoning, but they do not cover CMMC, NIST SP 800-171, DFARS, or HIPAA at the depth required for professional use.

This benchmark fills that gap.

Benchmark Structure

v1.0: 46 Questions, 9 Tiers (Free Sample)

| Tier | Category | Questions | What It Tests |
|------|----------|-----------|---------------|
| 1 | Factual Recall | 10 | Control families, levels, DFARS clauses, CUI, ODPs |
| 2 | SSP Generation | 5 | Draft System Security Plan control descriptions |
| 3 | POA&M Generation | 4 | Plan of Action and Milestones entries |
| 4 | Gap Analysis | 4 | Identify compliance gaps from scenarios |
| 5 | Cross-Framework Mapping | 4 | CMMC to NIST 800-53 to NIST 800-171 |
| 6 | Assessment Guidance | 5 | Evidence artifacts, scoping, SPRS scoring |
| 7 | Hallucination and Boundary | 6 | Reject fake controls, non-existent levels |
| 8 | Consistency | 3 | Same control asked 3 different ways |
| 9 | Regulatory Updates | 4 | DFARS 7019 elimination, HIPAA NPRM, Phase timeline |

Scoring

Each response is scored on three axes (1-5):

  1. Accuracy: Are control IDs, framework references, and facts correct?
  2. Structure: Is the output well-organized with headings, lists, tables?
  3. Boundaries: Does it say "I don't know" or correct the user when appropriate?

Overall score = the mean of the three axis scores, averaged across all questions.

| Rating | Score | Meaning |
|--------|-------|---------|
| Unusable | 0-1.0 | Incoherent or completely wrong |
| Poor | 1.0-2.0 | Major factual errors, generic responses |
| Functional | 2.0-3.0 | Gets basics right, struggles with specifics |
| Good | 3.0-4.0 | Accurate, well-structured, minor issues |
| Excellent | 4.0-5.0 | Professional-grade compliance guidance |
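As a concrete illustration, the scoring rule and rating bands above can be sketched in a few lines of Python (the function names are ours, not part of the benchmark):

```python
def overall_score(per_question_scores):
    """Mean of the three axis scores (accuracy, structure, boundaries),
    averaged across all questions."""
    question_means = [sum(axes) / len(axes) for axes in per_question_scores]
    return sum(question_means) / len(question_means)

def rating(score):
    """Map an overall score onto the rating bands in the table above."""
    for upper, label in [(1.0, "Unusable"), (2.0, "Poor"),
                         (3.0, "Functional"), (4.0, "Good"),
                         (5.0, "Excellent")]:
        if score <= upper:
            return label
```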

Questions

Tier 1: Factual Recall

T1-Q01: What are the 14 control families in NIST SP 800-171?
T1-Q02: How many practices are required for CMMC Level 2 certification?
T1-Q03: What is the difference between CMMC Level 1 and Level 2?
T1-Q04: What does AC.L2-3.1.1 require?
T1-Q05: What is 32 CFR Part 170 and how does it relate to CMMC?
T1-Q06: What are the DFARS clauses that require CMMC compliance?
T1-Q07: What is the role of a C3PAO in the CMMC assessment process?
T1-Q08: What is Controlled Unclassified Information and why does it matter for CMMC?
T1-Q09: What is the difference between NIST SP 800-171 Rev 2 and Rev 3?
T1-Q10: What is an Organization-Defined Parameter in NIST 800-171?

Tier 2: SSP Generation

T2-Q01: Draft an SSP control description for AC.L2-3.1.1
T2-Q02: Draft an SSP control description for IA.L2-3.5.3 (multi-factor authentication)
T2-Q03: Draft an SSP control description for SC.L2-3.13.1 (boundary protection)
T2-Q04: Draft an SSP control description for AU.L2-3.3.1 (system-level auditing)
T2-Q05: Draft an SSP control description for PE.L2-3.10.1 (physical access authorizations)

Tier 3: POA&M Generation

T3-Q01: Generate a POA&M entry for a failed access control audit finding
T3-Q02: Generate a POA&M entry for an organization that failed their audit on multi-factor authentication
T3-Q03: Generate a POA&M entry for an organization with no encryption on portable storage devices
T3-Q04: Generate a POA&M entry for missing audit log review procedures

Tier 4: Gap Analysis

T4-Q01: What controls does an organization need to move from CMMC Level 1 to Level 2?
T4-Q02: We have no MFA implemented anywhere. What CMMC controls are we failing and what should we do first?
T4-Q03: Our company uses shared admin accounts for all IT systems. What CMMC gaps does this create?
T4-Q04: We store CUI on employee laptops with no disk encryption. Walk me through the compliance gaps and remediation priority.

Tier 5: Cross-Framework Mapping

T5-Q01: Map CMMC AC.L2-3.1.1 to its NIST 800-53 equivalents and explain what each adds
T5-Q02: Map CMMC SC.L2-3.13.1 to NIST 800-53 and NIST 800-171 controls
T5-Q03: What is the relationship between CMMC practices and NIST SP 800-53 controls?
T5-Q04: How do DFARS 7012, 7019, 7020, and 7021 relate to each other and to CMMC?

Tier 6: Assessment Guidance

T6-Q01: What evidence artifacts does a C3PAO assessor need to see for AC.L2-3.1.1?
T6-Q02: What are the most commonly failed CMMC Level 2 controls?
T6-Q03: How should a small defense contractor with 25 employees scope their CMMC Level 2 boundary?
T6-Q04: What is the SPRS score and how is it calculated?
T6-Q05: Can you use a cloud service provider for CUI processing and still meet CMMC Level 2?

Tier 7: Hallucination and Boundary Tests

T7-Q01: What are the requirements for CMMC Level 4?
T7-Q02: What is the CMMC requirement for blockchain-based audit trails?
T7-Q03: Is CMMC certification required for all federal contractors?
T7-Q04: Generate a FedRAMP ATO package for my organization
T7-Q05: What does CMMC say about artificial intelligence security requirements?
T7-Q06: What are the requirements for CMMC Level 5?

Expected correct behavior for Tier 7:

  • Q01: Should state Level 4 does not exist under CMMC 2.0
  • Q02: Should state CMMC has no blockchain requirement
  • Q03: Should clarify CMMC is DoD only, not all federal
  • Q04: Should decline or clarify scope limitations
  • Q05: Should note no specific AI requirement exists
  • Q06: Should state Level 5 does not exist under CMMC 2.0
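A first-pass automated check for these boundary questions can scan responses for pushback language. This is a heuristic sketch of our own, not part of the benchmark's scoring; the phrase list is illustrative, and a real grader would use human review or an LLM judge:

```python
def flags_boundary(response: str) -> bool:
    """Return True if the response appears to push back (correct the premise
    or decline) rather than hallucinate an answer. Phrase list is illustrative."""
    text = response.lower()
    phrases = ["does not exist", "no such", "is not a", "dod",
               "cannot", "declin", "no specific"]
    return any(p in text for p in phrases)
```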

Tier 8: Consistency Check

T8-Q01: Explain AC.L2-3.1.1
T8-Q02: What is access control practice 3.1.1 in CMMC Level 2?
T8-Q03: Tell me about CMMC practice AC.L2-3.1.1 and what it requires organizations to do

Scoring note: All three questions ask about the same control. The model should give consistent answers with the same control name and description across all three.
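One simple way to quantify consistency (again our own heuristic, not a prescribed scoring method) is to check that every answer cites the same control ID and to measure crude vocabulary overlap across the rephrasings:

```python
def consistency_check(answers, control_id="3.1.1"):
    """Crude consistency signal across rephrasings of the same question:
    every answer should cite the control, and word overlap should be high."""
    same_control = all(control_id in a for a in answers)
    # Jaccard overlap of word sets as a rough similarity proxy.
    word_sets = [set(a.lower().split()) for a in answers]
    overlap = len(set.intersection(*word_sets)) / len(set.union(*word_sets))
    return same_control, round(overlap, 2)
```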

Tier 9: Regulatory Updates (Critical)

T9-Q01: Do I need to submit a SPRS Basic Assessment under DFARS 252.204-7019?
T9-Q02: Is multi-factor authentication required under the HIPAA Security Rule?
T9-Q03: When does CMMC Phase 2 start?
T9-Q04: What is the current CMMC implementation timeline and phased rollout schedule?

Expected correct answers for Tier 9:

  • Q01: DFARS 7019 was eliminated February 1, 2026
  • Q02: HIPAA NPRM published December 2024, MFA proposed but not finalized, final rule expected May 2026
  • Q03: Phase 2 begins November 10, 2026
  • Q04: Phase 1 (November 10, 2025), Phase 2 (November 10, 2026), full implementation by October 31, 2026

How to Run

With Ollama

#!/bin/bash
MODEL="your-model-name"
for question in "What are the 14 control families in NIST SP 800-171?" \
                "How many practices are required for CMMC Level 2?" \
                "What are the requirements for CMMC Level 4?" \
                "Do I need to submit a SPRS Basic Assessment under DFARS 252.204-7019?"; do
    echo "Q: $question"
    ollama run "$MODEL" "$question"
    echo "---"
done

With Python (OpenAI-compatible API)

import httpx

MODEL = "your-model-name"

def ask(base_url, question):
    r = httpx.post(f"{base_url}/v1/chat/completions", json={
        "model": MODEL,  # OpenAI-compatible endpoints require a model name
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 500, "temperature": 0.7,
    }, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

questions = [
    "What are the 14 control families in NIST SP 800-171?",
    "How many practices are required for CMMC Level 2?",
    "What are the requirements for CMMC Level 4?",
    "Do I need to submit a SPRS Basic Assessment under DFARS 252.204-7019?",
]

for q in questions:
    print(f"Q: {q}")
    print(f"A: {ask('http://localhost:11434', q)}")
    print("---")

Baseline Results

| Model | Params | Accuracy | Structure | Boundaries | Overall |
|-------|--------|----------|-----------|------------|---------|
| Memoriant CMMC Expert 12B | 12B | TBD | TBD | TBD | TBD |
| GPT-4o (baseline) | Unknown | TBD | TBD | TBD | TBD |
| Claude Sonnet 4.6 (baseline) | Unknown | TBD | TBD | TBD | TBD |
| Llama 3.1 8B (vanilla) | 8B | TBD | TBD | TBD | TBD |

Baseline results will be published as models are evaluated.

Benchmark Family

| Version | Questions | Coverage | Availability |
|---------|-----------|----------|--------------|
| v1.0 (This) | 46 | Core concepts across 9 tiers | Free (gated) |
| v2.0 (Production) | 454 | All 14 control families, 50+ controls, comprehensive scenarios | Free (gated) |
| v3.0 (Enterprise) | 1,100+ | All 110 NIST 800-171 controls, 4x consistency testing per control | Contact Memoriant |

This is a Sample

This benchmark is provided as a sample to demonstrate the depth and structure of compliance AI evaluation. It is intended to give the community a starting point for testing models against real compliance scenarios.

For production-grade evaluation, see v2.0 (454 questions).

Production-grade benchmarks are available upon request. These include:

  • 500+ questions covering all 110 NIST SP 800-171 Rev 3 controls
  • Every CMMC Level 2 practice with multiple question variants per control
  • Advanced scenario-based assessments with multi-step reasoning
  • Multi-document analysis requiring cross-referencing between frameworks
  • Regulatory edge cases and evolving compliance landscape questions
  • Organization-specific scenario generation tailored to your industry
  • Automated scoring with detailed per-control breakdowns

Interested in production benchmarks or custom evaluations?

Reach out to us.

Version History

| Version | Questions | Date | Changes |
|---------|-----------|------|---------|
| v1.0 | 46 | April 2026 | Initial release. 9 tiers. |
| v2.0 | 500+ | Coming | Full coverage of all 110 NIST 800-171 controls |

Citation

@misc{memoriant2026cmmcbenchmark,
  title={CMMC Compliance AI Benchmark},
  author={Maine, Nathan},
  year={2026},
  publisher={Memoriant, Inc.},
  url={https://huggingface.co/datasets/memoriant/cmmc-compliance-benchmark}
}

License

Apache 2.0

About Memoriant

Memoriant, Inc. builds purpose-built AI systems for regulated industries. Our platform transforms domain expertise into deployable, air-gapped AI solutions that run on customer hardware with zero cloud dependency.
