---
title: Context Length Benchmarking
emoji: 🧠
colorFrom: indigo
colorTo: blue
sdk: static
pinned: false
---
# 🧠 Context Length - Benchmarking
**A Mathematical Framework for Long-Context Attention Evaluation**
---
<div align="justify" style="font-size: 1.05em;">
The <strong><a href="https://huggingface.co/buckets/sapiens-technology/context_length_benchmarking/resolve/context_length_benchmarking.zip?download=true">Context Length Benchmarking</a></strong>, developed by Sapiens Technology®, is a deterministic, scalable framework for evaluating how effectively large language models retain and retrieve information across extremely long contexts. It isolates pure attention capability by removing semantic complexity and focusing on distributed anomaly detection. The methodology normalizes the token space with a fixed-length, bias-free sequence, injects synthetic noise (a random 4-digit number) at a uniformly sampled position, and challenges the model to identify it in an adversarial multiple-choice setup. This enables measurement of attention degradation, positional robustness, and the “Lost in the Middle” phenomenon. The approach is grounded in the attention formulation <mark style="background-color: #e0e0e0;">A = softmax((QKᵀ)/√d)</mark>, in which increasing context length dilutes the attention mass available to any single token. The method is reproducible, scalable, statistically unbiased, and independent of semantics, evaluating only attention fidelity under extreme noise conditions, with implementation available at <a href="https://github.com/sapiens-technology/context_length">https://github.com/sapiens-technology/context_length</a>.
</div>
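The benchmark construction described above can be sketched in a few lines. This is a minimal illustration, not the repository's actual implementation: the function name, filler token, and option count are assumptions chosen for clarity.

```python
import random

def build_benchmark_item(context_tokens=10_000, filler="x",
                         num_choices=4, seed=None):
    """Build one benchmark item: a uniform filler sequence with a random
    4-digit 'needle' injected at a uniformly sampled position, plus an
    adversarial multiple-choice question over similar 4-digit numbers."""
    rng = random.Random(seed)
    needle = str(rng.randint(1000, 9999))   # synthetic noise to retrieve
    position = rng.randrange(context_tokens) # uniformly sampled position
    tokens = [filler] * context_tokens       # normalized, bias-free space
    tokens[position] = needle
    # Distractors: other 4-digit numbers, all distinct from the needle.
    choices = {needle}
    while len(choices) < num_choices:
        choices.add(str(rng.randint(1000, 9999)))
    options = sorted(choices)
    return {
        "context": " ".join(tokens),
        "question": "Which 4-digit number appears in the context?",
        "options": options,
        "answer_index": options.index(needle),
        "needle_position": position,
    }
```

Because generation is seeded, every item is fully reproducible, which matches the framework's determinism requirement.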
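The dilution claim behind A = softmax((QKᵀ)/√d) can be made concrete with a toy calculation (an illustrative sketch under a simplifying assumption, not the framework's code): give the needle position a fixed logit advantage over a uniform background and watch its softmax mass shrink as the context grows.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def needle_attention_mass(context_length, needle_logit=2.0):
    """Softmax mass on a single 'needle' position whose scaled score
    exceeds a uniform zero background by a fixed margin."""
    scores = [0.0] * context_length
    scores[0] = needle_logit  # the needle's score advantage
    return softmax(scores)[0]

# Attention mass on the needle shrinks roughly as 1/n with context length:
for n in (100, 1_000, 10_000):
    print(n, needle_attention_mass(n))
```

With a fixed logit margin, the needle's mass is e^2 / (e^2 + n − 1), so a tenfold longer context cuts it roughly tenfold; this is the attention-dilution effect the benchmark is designed to measure.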
---
<div align="right">
<sub>Development of Sapiens Technology®️</sub>
</div>