---
license: mit
task_categories:
- question-answering
language:
- en
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset describes entities found in Wikidata.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Jean Petit
- **Language(s) (NLP):** English. The dataset can be used for NLP tasks such as question answering, semantic annotation, and entity generation.
- **License:** MIT
## Uses
### Direct Use
This dataset has been used to fine-tune LLMs for semantic annotation tasks.
## Dataset Structure
The train split contains 3 columns:
* label: the name of the Wikidata entity
* entity: the Wikidata URI of the given entity
* description: a description of the entity (may be empty)
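As an illustration of the schema above, a single JSON record might look like the sketch below. The field values shown are a hypothetical example built from a real Wikidata entity (Q42, Douglas Adams); the actual rows in this dataset may differ.

```python
import json

# Hypothetical record following the three-column schema (label, entity,
# description) described above -- not an actual row from the dataset.
record = {
    "label": "Douglas Adams",
    "entity": "http://www.wikidata.org/entity/Q42",
    "description": "English science fiction writer and humourist",
}

# Each record serializes to one JSON object, as in a JSON-lines file.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["label"])
```

The `description` field would simply be an empty string (or `null`) for entities that lack one.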
## Dataset Creation