Dataset Preview
| input string | output string |
|---|---|
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | D) 9592 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | D) 3341 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 5080 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 6762 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 9822 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | A) 1300 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 5263 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 1186 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | B) 3198 |
"```txt\n1:1 In the beginning God created the heaven and the earth.\n\n1:2 And the earth was without(...TRUNCATED) | A) 4301 |
End of preview.
🧠 Context Length - Benchmarking
A Mathematical Framework for Long-Context Attention Evaluation
The Context Length Benchmarking framework, developed by Sapiens Technology®, is a deterministic and scalable framework for evaluating how effectively large language models retain and retrieve information across extremely long contexts. By removing semantic complexity and focusing on distributed anomaly detection, it isolates pure attention capability.

The methodology normalizes the token space with a fixed-length, bias-free sequence, injects synthetic noise (a random 4-digit number) at a uniformly sampled position, and challenges the model to identify it in an adversarial multiple-choice setup. This enables measurement of attention degradation, positional robustness, and the "Lost in the Middle" phenomenon. The approach is grounded in the attention formulation A = softmax((QKᵀ)/√d), where increasing context length dilutes the attention mass available to any single position.

The method is reproducible, scalable, statistically unbiased, and independent of semantics: it evaluates only attention fidelity under extreme noise conditions. The implementation is available at https://github.com/sapiens-technology/context_length.
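The sampling procedure described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function name, the filler-token scheme, and the option formatting are assumptions; only the core idea (a random 4-digit needle at a uniformly sampled position, wrapped in an adversarial multiple-choice question) comes from the description.

```python
import random

def make_sample(filler_tokens, context_length, seed=None):
    """Build one benchmark sample: a long neutral context with a random
    4-digit "needle" injected at a uniformly sampled position, plus four
    multiple-choice options (one correct, three random distractors)."""
    rng = random.Random(seed)
    needle = rng.randint(1000, 9999)
    # Uniformly sample the injection position, including both ends.
    position = rng.randrange(context_length + 1)
    # Fixed-length, semantically neutral context built from filler tokens.
    tokens = [filler_tokens[i % len(filler_tokens)] for i in range(context_length)]
    tokens.insert(position, str(needle))
    # Adversarial distractors: other distinct random 4-digit numbers.
    options = {needle}
    while len(options) < 4:
        options.add(rng.randint(1000, 9999))
    ordered = sorted(options)
    labels = "ABCD"
    answer_label = labels[ordered.index(needle)]
    prompt = " ".join(tokens)
    choices = "  ".join(f"{l}) {v}" for l, v in zip(labels, ordered))
    return prompt, choices, f"{answer_label}) {needle}"

prompt, choices, answer = make_sample(["and", "the", "of"], 200, seed=42)
```

Because the position is drawn uniformly over the context, aggregating accuracy by needle position directly exposes "Lost in the Middle" effects, and fixing the seed keeps the benchmark deterministic and reproducible.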
Developed by Sapiens Technology®️