Indic-Bert-NER-BIO-Dataset
A comprehensive Named Entity Recognition dataset in BIO format for training NER models on medical and regulatory documents in Indian languages.
Dataset Overview
This dataset contains annotated medical and pharmaceutical regulatory documents with entity labels in BIO (Begin-Inside-Outside) format. It is designed for training Named Entity Recognition models, particularly the Indic-Bert-NER-Model.
Dataset Statistics
| Attribute | Value |
|---|---|
| Format | BIO (Begin-Inside-Outside) and JSONL |
| Total Files | 7 files (mixed formats) |
| Languages | Hindi, English, and other Indian languages |
| Domain | Medical, Pharmaceutical, Regulatory |
| Data Sources | CTRI, FAERS, JSL, Phase 2 Augmented |
Data Sources
1. CTRI (Clinical Trials Registry - India)
- File: `ctri_bio.jsonl` (1.1 MB)
- Format: JSONL
- Description: Annotated clinical trial documents from the Indian Clinical Trials Registry
- Content: Drug names, diseases, dosages, routes of administration
2. FAERS (FDA Adverse Event Reporting System)
- File: `faers_bio.jsonl` (8.3 MB)
- Format: JSONL
- Description: Adverse event reports translated and adapted for Indian pharmaceutical context
- Content: Adverse events, drug names, patient characteristics, outcomes
3. JSL (John Snow Labs Medical Dataset)
- File: `jsl_bio.jsonl` (410 KB)
- Format: JSONL
- Description: High-quality medical NER annotations from John Snow Labs
- Content: Medical conditions, treatments, medications, procedures
4. Phase 2 Training Data (Augmented)
- Files: `phase2_training.bio` (15.2 MB), `phase2_training_augmented.bio` (19.9 MB)
- Format: BIO plain text
- Description: Primary training dataset with augmented samples for robustness
- Content: Comprehensive medical and regulatory entity annotations
5. Phase 2 Merged Dataset
- File: `phase2_merged.bio` (18.7 MB)
- Format: BIO plain text
- Description: Consolidated dataset combining multiple sources
- Content: De-duplicated and merged annotations
6. Phase 2 Final Augmented
- File: `phase2_final_augmented.bio` (24.4 MB)
- Format: BIO plain text
- Description: Final augmented dataset with additional synthetic data
- Content: Expanded training set with improved coverage
Entity Schema
Annotated Entity Types
The dataset includes annotations for the following entity categories:
| Entity Type | Examples | Use Cases |
|---|---|---|
| Drug/Medication | Paracetamol, Aspirin, Ibuprofen | Drug identification in documents |
| Disease/Condition | Fever, Hypertension, Diabetes | Medical condition extraction |
| Dosage | 500mg, 2 tablets, 1 injection | Medication instructions |
| Route of Administration | Oral, Intravenous, Subcutaneous | Administration method extraction |
| Adverse Event | Headache, Rash, Nausea | Safety monitoring |
| Medical Procedure | Surgery, X-ray, Blood test | Procedure identification |
| Body Part | Heart, Liver, Kidney | Anatomical references |
| Regulatory Reference | Schedule H, Form F | Regulatory compliance |
BIO Format Explanation
The BIO format works as follows:
- B-LABEL: Beginning of an entity
- I-LABEL: Continuation/Inside of an entity
- O: Outside any entity (not part of an entity)
Example:

```text
Paracetamol B-Drug
is O
used O
for O
headache B-Disease
treatment O
. O
```
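Putting the entity schema and the BIO scheme together, the tag inventory has two tags per entity type plus `O`. A minimal sketch of building a label map (the short label names here are assumptions; check the tags actually present in the files before training):

```python
# Build the BIO tag set from the entity categories listed above.
# NOTE: the short label names (e.g. "Drug", "Disease") are assumptions;
# the actual label strings in the data files may differ.
ENTITY_TYPES = [
    "Drug", "Disease", "Dosage", "Route",
    "AdverseEvent", "Procedure", "BodyPart", "Regulatory",
]

label_list = ["O"] + [f"{p}-{t}" for t in ENTITY_TYPES for p in ("B", "I")]
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}

print(len(label_list))  # 1 "O" tag + 2 tags per entity type = 17
```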
File Format Details
BIO Format (.bio files)
Plain text with one token per line and its entity tag on the same line, separated by whitespace:

```text
token1 B-Entity
token2 I-Entity
token3 O
```
JSONL Format (.jsonl files)
JSON Lines with structured records:

```json
{
  "text": "Paracetamol is used for fever treatment.",
  "entities": [
    {"start": 0, "end": 11, "label": "Drug"},
    {"start": 24, "end": 29, "label": "Disease"}
  ],
  "tokens": ["Paracetamol", "is", "used", "for", "fever", "treatment", "."],
  "tags": ["B-Drug", "O", "O", "O", "B-Disease", "O", "O"]
}
```
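Because each JSONL record carries both character offsets and token-level tags, records can be sanity-checked on load. A minimal sketch, assuming the field names shown in the example record:

```python
import json

def check_record(record):
    """Sanity-check one JSONL record: tokens and tags must be aligned
    one-to-one, and each entity's character offsets must slice a
    non-empty substring of the text."""
    assert len(record["tokens"]) == len(record["tags"]), "token/tag mismatch"
    for ent in record["entities"]:
        span = record["text"][ent["start"]:ent["end"]]
        assert span.strip(), f"empty span for {ent}"
    return True

# The example record from above, round-tripped through JSON
record = json.loads(
    '{"text": "Paracetamol is used for fever treatment.", '
    '"entities": [{"start": 0, "end": 11, "label": "Drug"}, '
    '{"start": 24, "end": 29, "label": "Disease"}], '
    '"tokens": ["Paracetamol", "is", "used", "for", "fever", "treatment", "."], '
    '"tags": ["B-Drug", "O", "O", "O", "B-Disease", "O", "O"]}'
)
print(check_record(record))  # True
```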
Data Splits
Training Data
- Primary: `phase2_training.bio` (15.2 MB), ~45,000 sentences
- Augmented: `phase2_training_augmented.bio` (19.9 MB), ~60,000 sentences
- Final: `phase2_final_augmented.bio` (24.4 MB), ~70,000 sentences
Validation & Test Data
Embedded in source files (CTRI, FAERS, JSL datasets):
- CTRI: ~500 annotated documents
- FAERS: ~2,000 annotated documents
- JSL: ~300 annotated documents
Data Preparation
Preprocessing Steps
- Tokenization: Word-level tokenization for BIO format
- Annotation: Manual and semi-automatic annotation with quality review
- Augmentation: Synthetic data generation for low-frequency entities
- Merging: Consolidation of multiple sources with de-duplication
- Validation: Quality assurance through inter-annotator agreement checks
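The merging step can be sketched as exact-match de-duplication over (tokens, tags) pairs; the actual pipeline may use fuzzier near-duplicate matching:

```python
def dedupe_sentences(sentences, tags):
    """Drop exact duplicate (sentence, tags) pairs, keeping the first
    occurrence. A simplified stand-in for the merge step; detecting
    near-duplicates would need fuzzier matching."""
    seen = set()
    out_sents, out_tags = [], []
    for sent, tag_seq in zip(sentences, tags):
        key = (tuple(sent), tuple(tag_seq))
        if key not in seen:
            seen.add(key)
            out_sents.append(sent)
            out_tags.append(tag_seq)
    return out_sents, out_tags

# Toy data: the duplicated first sentence is dropped
sents = [["fever"], ["fever"], ["rash"]]
tag_seqs = [["B-Disease"], ["B-Disease"], ["B-AdverseEvent"]]
sents, tag_seqs = dedupe_sentences(sents, tag_seqs)
print(len(sents))  # 2
```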
Known Issues & Limitations
- Some documents contain mixed language text (Hindi + English)
- Entity boundaries may vary between annotators
- Domain-specific terminology variations
- Limited coverage of rare entity types
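The mixed-language limitation can be surfaced programmatically. A rough heuristic sketch that flags sentences mixing Devanagari and Latin script (illustrative only, not the method used to build the dataset):

```python
def script_mix(tokens):
    """Rough per-sentence script profile: counts tokens containing
    Devanagari vs. Latin characters, to flag mixed Hindi + English
    sentences noted in the limitations above."""
    deva = sum(any("\u0900" <= ch <= "\u097F" for ch in t) for t in tokens)
    latin = sum(any("a" <= ch.lower() <= "z" for ch in t) for t in tokens)
    return {"devanagari": deva, "latin": latin, "mixed": deva > 0 and latin > 0}

profile = script_mix(["बुखार", "Paracetamol", "500mg"])
print(profile)  # {'devanagari': 1, 'latin': 2, 'mixed': True}
```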
Usage
Loading with Hugging Face Datasets
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("sharkdodo/Indic-Bert-NER-BIO-Dataset")

# Load a specific split
dataset = load_dataset(
    "sharkdodo/Indic-Bert-NER-BIO-Dataset",
    split="train"
)

# Access individual records
for example in dataset.select(range(5)):
    print(example)
```
Processing BIO Files
```python
def load_bio_file(filepath):
    """Load a BIO format file: one `token tag` pair per line,
    blank lines as sentence boundaries."""
    sentences, tags = [], []
    current_sent, current_tags = [], []
    with open(filepath, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line:  # empty line = sentence boundary
                if current_sent:
                    sentences.append(current_sent)
                    tags.append(current_tags)
                    current_sent, current_tags = [], []
            else:
                parts = line.split()  # tab- or space-separated
                if len(parts) == 2:
                    token, tag = parts
                    current_sent.append(token)
                    current_tags.append(tag)
    if current_sent:  # flush a final sentence with no trailing blank line
        sentences.append(current_sent)
        tags.append(current_tags)
    return sentences, tags

# Usage
sentences, tags = load_bio_file('phase2_training.bio')
print(f"Loaded {len(sentences)} sentences")
```
Processing JSONL Files
```python
import json

def load_jsonl_file(filepath):
    """Load a JSONL file: one JSON record per line."""
    data = []
    with open(filepath, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                data.append(json.loads(line))
    return data

# Usage
data = load_jsonl_file('ctri_bio.jsonl')
print(f"Loaded {len(data)} records")
for record in data[:3]:
    print(record)
```
Training with Transformers
```python
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Load dataset
dataset = load_dataset("sharkdodo/Indic-Bert-NER-BIO-Dataset")

# Build the label inventory from the tags actually present in the data
label_list = sorted({tag for tags in dataset['train']['tags'] for tag in tags})
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}

# Tokenizer and model
model_name = "ai4bharat/indic-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(label_list),
    id2label=id2label,
    label2id=label2id,
)

# Align word-level tags with subword tokens; -100 masks positions
# (special tokens and non-first subwords) from the loss
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples['tokens'],
        truncation=True,
        is_split_into_words=True,
    )
    labels = []
    for i, tag_seq in enumerate(examples['tags']):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        label_ids = []
        previous_word_idx = None
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)  # special tokens
            elif word_idx != previous_word_idx:
                label_ids.append(label2id[tag_seq[word_idx]])  # first subword
            else:
                label_ids.append(-100)  # later subwords of the same word
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# Tokenize dataset
tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

# Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    save_steps=500,
    save_total_limit=2,
    evaluation_strategy="steps",
)

# Trainer (assumes a held-out split; carve one out of 'train' with
# train_test_split if the dataset ships only a train split)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset['train'],
    eval_dataset=tokenized_dataset['validation'],
)
trainer.train()
```
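For evaluating a trained model, token-level accuracy is misleading for NER; entity-level F1 is the standard metric. A minimal pure-Python sketch (libraries such as seqeval implement this more robustly):

```python
def extract_spans(tags):
    """Convert a BIO tag sequence into (start, end, type) spans
    with an exclusive end index."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (
            tag.startswith("I-") and tag[2:] != etype
        ):
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def entity_f1(true_seqs, pred_seqs):
    """Micro-averaged entity-level F1: a predicted span counts only if
    both boundaries and type match the gold span exactly."""
    tp = fp = fn = 0
    for t, p in zip(true_seqs, pred_seqs):
        gold, pred = set(extract_spans(t)), set(extract_spans(p))
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Toy example: one of two gold entities is found
gold = [["B-Drug", "O", "B-Disease"]]
pred = [["B-Drug", "O", "O"]]
print(round(entity_f1(gold, pred), 3))  # 0.667
```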
Annotation Guidelines
Quality Standards
- Inter-annotator agreement (IAA) > 0.85 (Cohen's Kappa)
- Double annotation for high-priority documents
- Quality review by domain experts
- Entity boundary agreement protocols
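The Cohen's kappa threshold above is computed over token-level tag decisions from two annotators. A self-contained sketch on illustrative toy data (not the project's actual IAA tooling):

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa over two annotators' token-level tag sequences:
    observed agreement corrected for chance agreement."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    ca, cb = Counter(ann_a), Counter(ann_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    if expected == 1:  # identical single-label distributions
        return 1.0
    return (observed - expected) / (1 - expected)

# Toy data: two annotators disagree on one of four tokens
ann_a = ["B-Drug", "O", "O", "B-Disease"]
ann_b = ["B-Drug", "O", "B-Disease", "B-Disease"]
print(round(cohen_kappa(ann_a, ann_b), 3))  # 0.636
```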
Entity Annotation Rules
- Drug Names: Include all forms (generic, brand, abbreviations)
- Diseases: Include conditions and symptoms
- Dosages: Include numeric value + unit
- Routes: Standardized terminology
- Adverse Events: Clinical terms + lay terms
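Before training on annotated files, it is worth verifying that tag sequences are well-formed BIO. A small checker sketch (stray `I-` tags may be legitimate in some exports, so treat hits as warnings):

```python
def bio_errors(tags):
    """Return indices where a BIO sequence is ill-formed: an I- tag
    that does not continue a B-/I- tag of the same entity type."""
    errors = []
    prev = "O"
    for i, tag in enumerate(tags):
        if tag.startswith("I-"):
            if not (prev.startswith(("B-", "I-")) and prev[2:] == tag[2:]):
                errors.append(i)
        prev = tag
    return errors

# The I-Disease at index 3 has no preceding B-Disease
print(bio_errors(["B-Drug", "I-Drug", "O", "I-Disease"]))  # [3]
```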
License
This dataset is released under the MIT License.
MIT License
Copyright (c) 2026 Vivek Molleti
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
Privacy & Ethics
Data Privacy
- All personally identifiable information (PII) has been de-identified
- Patient confidentiality protected through anonymization
- Compliance with HIPAA and DPDP guidelines
- No sensitive personal data in the public dataset
Ethical Use
- For medical and regulatory purposes only
- Respect patient privacy at all times
- Do not use for unauthorized commercial purposes
- Support transparency in healthcare AI
Citation
If you use this dataset, please cite:
```bibtex
@dataset{indic_bert_ner_bio_dataset_2026,
  title  = {Indic-Bert-NER-BIO-Dataset},
  author = {Vivek Molleti},
  year   = {2026},
  url    = {https://huggingface.co/datasets/sharkdodo/Indic-Bert-NER-BIO-Dataset},
  note   = {Medical and regulatory NER dataset in BIO format for Indian languages}
}
```
Related Resources
- Trained Model: Indic-Bert-NER-Model
- Base Model: AI4Bharat Indic-BERT
Changelog
Version 1.0 (April 2026)
- Initial release
- 7 dataset files with ~100,000+ sentences
- BIO and JSONL formats
- Multi-source aggregation with quality control
Data Collection Timeline
- Phase 1: CTRI and FAERS data collection (Q1 2025)
- Phase 2: Dataset augmentation and validation (Q3 2025)
- Phase 3: Public release (Q2 2026)