---
pretty_name: LLM-Profiling
tags:
- evaluation
- benchmark
- facts
- datasets
license: mit
thumbnail: llm_prof.jpg
size_categories:
- 10K<n<100K
---
### Supported Tasks
Tasks this dataset is suitable for:
- 🧠 **Factual Knowledge Evaluation:** Measuring factual accuracy and knowledge-retrieval capabilities across domains (geography, history, science, culture).
- 🧩 **Knowledge Graph Reasoning:** Evaluating the ability to infer relationships between entities and navigate structured knowledge.
### Languages
Primary languages of instructions/prompts: `ru` (Russian) and `en` (English).
## Dataset Structure
Each instance contains a question (`task`), four answer options (`option_1` … `option_4`), the 1-based index of the correct option (`correct`), and metadata describing difficulty, question type, and origin. Example of a single instance:
```json
{
  "task": "What body of water is near the birthplace of Marcelo Romero?",
  "option_1": "River Gironde",
  "option_2": "Hamoaze",
  "option_3": "River Plate",
  "option_4": "Haring river",
  "correct": 3,
  "meta_difficulty": "medium",
  "meta_type": "single",
  "meta_origin": "wikidata"
}
```
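As a minimal sketch of how an instance can be consumed, assuming the field layout shown above (in particular, that `correct` is a 1-based index into the `option_N` fields), the correct answer text can be resolved like this:

```python
import json

# A sample instance in the format shown above.
instance = json.loads("""
{
  "task": "What body of water is near the birthplace of Marcelo Romero?",
  "option_1": "River Gironde",
  "option_2": "Hamoaze",
  "option_3": "River Plate",
  "option_4": "Haring river",
  "correct": 3,
  "meta_difficulty": "medium",
  "meta_type": "single",
  "meta_origin": "wikidata"
}
""")


def answer_text(item: dict) -> str:
    """Map the 1-based `correct` index to the text of that option."""
    return item[f"option_{item['correct']}"]


print(answer_text(instance))  # River Plate
```

The same lookup can be used to score model predictions: compare the model's chosen option index (or text) against `answer_text(item)` per instance and average into an accuracy.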
## 📚 Citation
If you use LLM-Profiling in your work, please cite it as:
```bibtex
@misc{stonic_worldview_benchmark_2025,
  title = {LLM-Profiling: A Worldview Benchmark for Large Language Models},
  author = {Andrey Chetvergov and Rinat Sharafetdinov and Stepan Ukolov and Timofei Sivoraksha and Alexander Evseev and Danil Sazanakov and Sergey Bolovtsov},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/llmpass-ai/stonic_dataset}},
}
```