---
pretty_name: LLM-Profiling
tags:
- evaluation
- benchmark
- facts
- datasets
license: mit
thumbnail: llm_prof.jpg
size_categories:
- 10K<n<100K
---
<p align="center">
  <img src="./assets/llm_prof.jpg" width="420">
</p>

<h1 align="center">Knowledge Graph-Based Dynamic Factuality Evaluation</h1>
<h3 align="center">Worldview Benchmark for Large Language Models</h3>
> <sub>The authors do not endorse any political, ideological, or moral position implied by the items or by model outputs.
> All entries are probes for factual and value-related reasoning and must not be used to profile real people or to justify harmful actions or decisions.</sub>
## Dataset Description

Profiling: Knowledge Graph-Based Dynamic Factuality Evaluation is a research benchmark for studying how large language models handle facts in a dynamic knowledge graph, including retrieving, updating, and verifying them under changing context.

- **Goal:** evaluate how faithfully an LLM reproduces verifiable facts and whether it maintains the integrity of its "beliefs" when the dataset changes.
- **Data source:** Wiki, Gre
- **Size:** ≈19K multiple-choice questions
- **Format:** JSON
<h1 align="left">Categories:</h1>
<img src="./assets/stat.jpg" width="500">
### Supported Tasks
Tasks this dataset is suitable for:
- 🧠 **Factual Knowledge Evaluation:** measuring factual accuracy and knowledge-retrieval capabilities across domains (geography, history, science, culture).
- 🧩 **Knowledge Graph Reasoning:** evaluating the ability to infer relationships between entities and navigate structured knowledge.
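As an illustration of the factual-evaluation task, accuracy over the multiple-choice items can be computed by comparing a model's chosen option index to each item's `correct` field. A minimal sketch (the `score_predictions` helper and the toy items are our own, not part of the dataset):

```python
def score_predictions(items, predictions):
    """Return accuracy of 1-based option choices against each item's `correct` field."""
    hits = sum(1 for item, pred in zip(items, predictions) if pred == item["correct"])
    return hits / len(items)

# Toy items following the instance schema from this card (contents are made up).
items = [
    {"task": "Q1", "correct": 3},
    {"task": "Q2", "correct": 1},
]
print(score_predictions(items, [3, 2]))  # 0.5: first prediction matches, second does not
```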
### Languages
Primary languages of instructions/prompts: `ru` (Russian), `en` (English).

---

## Dataset Structure
Example of a single instance:

```json
{
  "task": "What body of water is near the birthplace of Marcelo Romero?",
  "option_1": "River Gironde",
  "option_2": "Hamoaze",
  "option_3": "River Plate",
  "option_4": "Haring river",
  "correct": 3,
  "meta_difficulty": "medium",
  "meta_type": "single",
  "meta_origin": "wikidata"
}
```
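Instances can be parsed with the standard library. A short sketch using the example above (it assumes `correct` is a 1-based index into the `option_*` fields, which matches the example):

```python
import json

raw = """
{
  "task": "What body of water is near the birthplace of Marcelo Romero?",
  "option_1": "River Gironde",
  "option_2": "Hamoaze",
  "option_3": "River Plate",
  "option_4": "Haring river",
  "correct": 3,
  "meta_difficulty": "medium",
  "meta_type": "single",
  "meta_origin": "wikidata"
}
"""

item = json.loads(raw)
options = [item[f"option_{i}"] for i in range(1, 5)]
answer = options[item["correct"] - 1]  # assuming `correct` is 1-based
print(answer)  # -> River Plate
```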
## 📚 Citation

If you use LLM-Profiling in your work, please cite it as:
```bibtex
@misc{stonic_worldview_benchmark_2025,
  title        = {LLM-Profiling: A Worldview Benchmark for Large Language Models},
  author       = {Chetvergov, Andrey and Sharafetdinov, Rinat and Ukolov, Stepan and Sivoraksha, Timofei and Evseev, Alexander and Sazanakov, Danil and Bolovtsov, Sergey},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/llmpass-ai/stonic_dataset}},
}
```