---
pretty_name: LLM-Profiling
tags:
  - evaluation
  - benchmark
  - facts
  - datasets
license: mit
thumbnail: llm_prof.jpg
size_categories:
  - 10K<n<100K
---

# Knowledge Graph-Based Dynamic Factuality Evaluation

**A Worldview Benchmark for Large Language Models**

The authors do not endorse any political, ideological, or moral position implied by the items or by model outputs. All entries are probes for factual and value-related reasoning and must not be used to profile real people or to justify harmful actions or decisions.

## Dataset Description

Profiling: Knowledge Graph-Based Dynamic Factuality Evaluation is a research benchmark for studying how large language models handle facts in a dynamic knowledge graph, including retrieving, updating, and verifying them under changing context.

- **Goal:** evaluate how well an LLM can reproduce verifiable facts and whether it maintains the integrity of its "beliefs" when the dataset changes.
- **Data source:** Wiki, Gre
- **Size:** ≈19K multiple-choice questions
- **Format:** JSON


## Supported Tasks

Tasks this dataset is suitable for:

- 🧠 **Factual Knowledge Evaluation:** measuring factual accuracy and knowledge retrieval capabilities across domains (geography, history, science, culture).
- 🧩 **Knowledge Graph Reasoning:** evaluating the ability to infer relationships between entities and navigate structured knowledge.
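For the factual-evaluation task, scoring reduces to comparing a model's chosen option index against each item's gold index. A minimal sketch (the function name and list-based interface are illustrative, not part of any dataset API):

```python
def accuracy(gold: list[int], predicted: list[int]) -> float:
    """Fraction of items where the predicted option index matches the gold one.

    Both lists hold 1-based option indices, one entry per question.
    """
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must be the same length")
    if not gold:
        return 0.0
    hits = sum(g == p for g, p in zip(gold, predicted))
    return hits / len(gold)

# Four questions, one wrong answer -> 3/4 correct.
print(accuracy([3, 1, 2, 4], [3, 1, 4, 4]))  # 0.75
```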

## Languages

Primary languages of instructions/prompts: Russian (ru) and English (en).


## Dataset Structure

Example of a single instance:

```json
{
    "task": "What body of water is near the birthplace of Marcelo Romero?",
    "option_1": "River Gironde",
    "option_2": "Hamoaze",
    "option_3": "River Plate",
    "option_4": "Haring river",
    "correct": 3,
    "meta_difficulty": "medium",
    "meta_type": "single",
    "meta_origin": "wikidata"
}
```
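A sketch of how such a record might be consumed, using only the field names from the example above (the helper functions `options` and `grade` are hypothetical, not part of the dataset):

```python
import json

# A record in the format shown above (field names from the example instance).
raw = """
{
    "task": "What body of water is near the birthplace of Marcelo Romero?",
    "option_1": "River Gironde",
    "option_2": "Hamoaze",
    "option_3": "River Plate",
    "option_4": "Haring river",
    "correct": 3,
    "meta_difficulty": "medium",
    "meta_type": "single",
    "meta_origin": "wikidata"
}
"""

def options(record: dict) -> list[str]:
    """Collect option_1..option_N in order."""
    out, i = [], 1
    while f"option_{i}" in record:
        out.append(record[f"option_{i}"])
        i += 1
    return out

def grade(record: dict, chosen: int) -> bool:
    """True if the 1-based chosen option matches the `correct` field."""
    return chosen == record["correct"]

record = json.loads(raw)
print(options(record))   # the four option strings, in order
print(grade(record, 3))  # True: option_3 is marked correct
```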

## 📚 Citation

If you use LLM-Profiling in your work, please cite it as:

```bibtex
@misc{stonic_worldview_benchmark_2025,
  title        = {LLM-Profiling: A Worldview Benchmark for Large Language Models},
  author       = {Chetvergov, Andrey and Sharafetdinov, Rinat and Ukolov, Stepan and
                  Sivoraksha, Timofei and Evseev, Alexander and Sazanakov, Danil and
                  Bolovtsov, Sergey},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/llmpass-ai/stonic_dataset}},
}
```