Speaker Identification Dataset for Japanese Version of Romance of the Three Kingdoms
Sample Data
Below is an example of a single entry from the annotated_dialogues dataset. Each entry contains the dialogue's unique identifier (dialogue_id), the dialogue text (dialogue), its surrounding context (context), and the annotated speaker (annotated_speaker).
{
"dialogue_id": 3,
"dialogue": "おーい",
"context": "黄巾賊\n\n一\n\n後漢の建寧元年のころ。\n今から約千七百八十年ほど前のことである。\n...",
"annotated_speaker": "漁夫"
}
Explanation
- dialogue_id: Unique identifier for the dialogue within the dataset.
- dialogue: Text of the dialogue.
- context: Text surrounding the dialogue to provide narrative context.
- annotated_speaker: A manually annotated label identifying the speaker.
How to Load the Data
You can use Python's json module or libraries like pandas to read and process the dataset. Below is an example of loading and iterating through the dialogues.
import json

# Load a single JSON file
with open("annotated_dialogues/052410_annotated_dialogues.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Example: Accessing dialogues
for dialogue in data["dialogue_data"]:
    print(f"Dialogue ID: {dialogue['dialogue_id']}")
    print(f"Speaker: {dialogue['annotated_speaker']}")
    print(f"Dialogue: {dialogue['dialogue']}")
    print(f"Context: {dialogue['context']}\n")
Output Example
Dialogue ID: 3
Speaker: 漁夫
Dialogue: おーい
Context: 黄巾賊 一 後漢の建寧元年のころ。今から約千七百八十年ほど前のことである。...
This example demonstrates how to parse and process the annotated dialogues for further analysis or model training.
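As noted above, pandas can also be used. The sketch below builds a DataFrame from the dialogue entries; it inlines the sample entry shown earlier so that it runs standalone, but in practice you would first json.load one of the annotated_dialogues files.

```python
import pandas as pd

# In practice, replace this dict with the result of json.load()
# on an annotated_dialogues JSON file. The sample entry from above
# is inlined here so the snippet is self-contained.
data = {
    "dialogue_data": [
        {
            "dialogue_id": 3,
            "dialogue": "おーい",
            "context": "黄巾賊\n\n一\n\n後漢の建寧元年のころ。...",
            "annotated_speaker": "漁夫",
        }
    ]
}

# Flatten the list of dialogue entries into a DataFrame
df = pd.DataFrame(data["dialogue_data"])
print(df[["dialogue_id", "annotated_speaker", "dialogue"]])
```

A DataFrame makes it easy to filter by speaker (e.g. `df[df["annotated_speaker"] == "漁夫"]`) or to compute per-speaker dialogue counts with `groupby`.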
Dataset Description
This dataset consists of annotations for dialogues and characters, primarily aimed at facilitating narrative analysis and speaker identification tasks. The data is organized into multiple folders, each serving a specific purpose:
Folders and Files
annotated_dialogues
Contains JSON files with human-annotated speaker labels and their surrounding contextual data.
- Example: 052410_annotated_dialogues.json, 052411_annotated_dialogues.json, etc.
annotated_dialogues_with_synonyms
Extends the annotated dialogues by including synonym candidates for the annotated speaker labels. These synonyms represent alternate names or references for the same speaker.
- Example: 052410_annotated_dialogues_with_synonyms.json, 052411_annotated_dialogues_with_synonyms.json, etc.
annotated_speakers
Contains a list of annotated characters for each book_id. This folder includes several variations of the data with additional synonym information and enhancements.
- Example: 052410_annotated_speakers.json, all_annotated_speakers.json, all_annotated_speakers_synonyms.json, all_annotated_speakers_synonyms_human_added.json, all_annotated_speakers_synonyms_with_wikiredirect.json, all_annotated_speakers_with_candidates.json
Detailed Descriptions
annotated_dialogues
These files include manually annotated speaker labels, along with the surrounding dialogue context for each speaker.
annotated_dialogues_with_synonyms
This set builds upon annotated_dialogues, providing synonym candidates for the manually annotated speaker labels. The synonyms help identify alternate references to the same individual in dialogues.
annotated_speakers
This folder includes various JSON files summarizing annotated speakers and their synonyms:
- all_annotated_speakers.json: A comprehensive list of all annotated characters.
- all_annotated_speakers_synonyms.json: Adds multiple alternate names or references for each character.
- all_annotated_speakers_synonyms_human_added.json: Human-curated corrections and additions to the synonym list.
- all_annotated_speakers_synonyms_with_wikiredirect.json: Synonyms detected using Wikipedia's redirect functionality.
- all_annotated_speakers_with_candidates.json: A dictionary of synonym candidates for every character in the dataset.
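The synonym files can be used to normalize speaker names before evaluation or training. The exact JSON schema of these files is not shown here, so the sketch below assumes a simple mapping from each canonical name to a list of alternate names (the speaker_synonyms dict is a hypothetical stand-in); adapt the loading step to the actual file structure.

```python
# Hypothetical schema: canonical speaker name -> list of alternate names.
# In practice this would come from json.load() on one of the
# all_annotated_speakers_synonyms*.json files; verify the real shape first.
speaker_synonyms = {
    "劉備": ["玄徳", "劉玄徳"],
    "漁夫": [],
}

# Invert the mapping so any alternate name resolves to its canonical form
alias_to_canonical = {}
for canonical, aliases in speaker_synonyms.items():
    alias_to_canonical[canonical] = canonical
    for alias in aliases:
        alias_to_canonical[alias] = canonical

print(alias_to_canonical.get("玄徳"))  # → 劉備
```

Such a lookup table lets a speaker-identification model's prediction be counted as correct even when it outputs an alternate name for the annotated speaker.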
Tokenizer Considerations for Context Extraction
Note: The context field in this dataset was extracted using the Llama-2 tokenizer. As a result, some garbled characters may appear in the context.
If you wish to resolve these issues while keeping the provided dialogue and annotated_speaker fields unchanged, consider the following approaches:
- Changing the Tokenizer: Use an alternative tokenizer for extracting the context that better preserves the original text.
- Character-based Extraction: Extract the context on a character basis rather than relying on tokenization, which may help mitigate the garbled characters.
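Character-based extraction could be sketched as follows, assuming you have the original novel text available. The raw_text string and the extract_context helper are illustrative only, not part of the dataset.

```python
# Character-based context extraction: slice a fixed character window
# around the dialogue in the raw text, instead of relying on tokenizer
# boundaries (which can garble multi-byte Japanese characters).
def extract_context(text: str, dialogue: str, window: int = 50) -> str:
    pos = text.find(dialogue)
    if pos == -1:
        return ""
    start = max(0, pos - window)
    end = min(len(text), pos + len(dialogue) + window)
    return text[start:end]

# Illustrative snippet standing in for the original novel text
raw_text = "ある日、川辺で漁夫が「おーい」と叫んだ。船はゆっくりと岸へ近づいてきた。"
print(extract_context(raw_text, "おーい", window=10))
```

Because Python string slicing operates on Unicode code points, this approach never splits a character in half, so the extracted context stays readable.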
License
This dataset is released under the CC BY 4.0 license. You are free to share, adapt, and build upon this work as long as proper attribution is provided.
For more details, refer to the Creative Commons Attribution 4.0 International License.
Citation
If you use this dataset in your research, please cite it as follows:
- Author: Seiji Gobara
- Title: Speaker Identification Dataset for Japanese Version of Romance of the Three Kingdoms
- Date: December 2024