---
A **super easy task for humans** on which **all SOTA LLMs fail** to retrieve the correct answer from context, including GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, etc.

- Accepted at the ICML 2025 Long-Context Foundation Models Workshop (https://arxiv.org/abs/2506.08184).
- Update: this dataset is integrated into Moonshot AI (Kimi)'s **internal benchmarking framework** for assessing **tracking capacity and context interference in LLMs/agents**.

Simple task:
```
Key1: Value_1
Key1: Value_2
......
Key1: Value_N
```

Question:
```
What is the current value of Key1?
```

All tested SOTA LLMs **cannot reliably retrieve** Value_N.
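
To make the setup concrete, below is a minimal sketch of how such an interference prompt can be constructed and graded. It is not the official data-generation script: the key name, the random value format, the exact query wording, and the `make_interference_prompt` / `grade` helper names are illustrative assumptions.

```python
import random
import string


def make_interference_prompt(key: str = "Key1", n_updates: int = 100, seed: int = 0):
    """Build a prompt that overwrites the same key n_updates times.

    Returns the prompt text and the ground-truth answer (the last value).
    """
    rng = random.Random(seed)
    values = ["".join(rng.choices(string.ascii_lowercase, k=6)) for _ in range(n_updates)]
    updates = "\n".join(f"{key}: {v}" for v in values)
    question = f"What is the current value of {key}? Answer with the value only."
    return f"{updates}\n\n{question}", values[-1]


def grade(answer: str, gold: str) -> bool:
    """Exact match on the normalized answer, i.e. did the model return Value_N?"""
    return answer.strip().lower() == gold.strip().lower()


if __name__ == "__main__":
    prompt, gold = make_interference_prompt(n_updates=100)
    print(prompt.splitlines()[-1])    # the query line
    print("expected answer:", gold)   # only the most recent value counts as correct
```

Exact match on the final value is the natural grading criterion here: any earlier Value_i returned by the model is exactly the interference error described above.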

## Done (full details below)
For the full analysis, see below or the paper. In short, the **multi-coreferenced structure** of the data makes all LLMs confuse earlier values with the most recent one. **Larger models tend to resist better**, yet **within N = 100 updates** **all LLMs eventually fail to retrieve** the correct value.
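
To illustrate how one might quantify that trend, here is a hedged sketch that sweeps the number of updates N and records exact-match accuracy on the final value. It reuses the hypothetical `make_interference_prompt` and `grade` helpers from the sketch above, and `query_model` stands in for whatever LLM call is being evaluated; none of this is the official evaluation harness.

```python
from typing import Callable, Dict, Iterable


def accuracy_vs_updates(
    query_model: Callable[[str], str],
    n_values: Iterable[int] = (1, 5, 10, 25, 50, 100),
    trials: int = 20,
) -> Dict[int, float]:
    """For each update count N, measure how often the model returns the final value.

    Reuses make_interference_prompt / grade from the sketch above;
    query_model is any prompt -> answer function (API call, local model, ...).
    """
    results = {}
    for n in n_values:
        correct = 0
        for t in range(trials):
            prompt, gold = make_interference_prompt(n_updates=n, seed=t)
            if grade(query_model(prompt), gold):
                correct += 1
        results[n] = correct / trials
    return results


if __name__ == "__main__":
    # Dummy baseline that always answers with the first value it sees
    # (a caricature of proactive interference), just to exercise the loop.
    def always_first(prompt: str) -> str:
        return prompt.splitlines()[0].split(": ", 1)[1]

    print(accuracy_vs_updates(always_first, n_values=(1, 5, 10), trials=3))
```

Per the analysis above, real models show accuracy that declines as N grows, and by around 100 updates every tested model fails to retrieve Value_N.
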
## Note on dataset scale:

Paper Title: Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length
- Accepted at the ICML 2025 Long-Context Foundation Models Workshop.

A simple context interference evaluation.

## TL;DR
|
| 83 |
|