---
license: mit
task_categories:
- text-generation
- summarization
language:
- ml
pretty_name: Malayalam dataset
size_categories:
- 1M<n<10M
---

# About
This is a large dataset of Malayalam text, created by combining several existing Malayalam datasets, then filtering and cleaning them.

The dataset is well suited for pretraining (or even fine-tuning) a Large Language Model on Malayalam.

This is the second version of the dataset. I have added some new data and removed all entries shorter than 75 characters, since a large number of very short sentences significantly lowers data quality. If you need the older version, which contained more than 6 million lines of Malayalam text (most of them very short), see this Kaggle version: https://www.kaggle.com/datasets/arjungravi/ultimate-malayalam-dataset/versions/3
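The length-based cleaning step described above can be sketched in plain Python. This is an illustrative sketch, not the actual script used to build the dataset; the 75-character threshold comes from the description, and the sample lines are placeholders.

```python
# Sketch of the length filter described above: drop lines shorter
# than 75 characters. The threshold is taken from the dataset card;
# the sample lines below are placeholders, not real dataset entries.

MIN_CHARS = 75

def filter_short_lines(lines, min_chars=MIN_CHARS):
    """Keep only lines whose stripped length meets the threshold."""
    return [line for line in lines if len(line.strip()) >= min_chars]

sample = [
    "too short",   # dropped: well under 75 characters
    "x" * 80,      # kept: 80 characters
]
print(len(filter_short_lines(sample)))  # 1
```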

If you want more high-quality Malayalam data (presumably for pretraining LLMs), I recommend mixing this with the [malayalam-sangraha](https://huggingface.co/datasets/Arjun-G-Ravi/malayalam-sangraha) dataset (which contains more than 32 GB of raw Malayalam data).

# Credits
- https://www.kaggle.com/datasets/disisbig/malyalam-news-dataset
- https://www.kaggle.com/datasets/parvmodi/english-to-malayalam-machine-translation-dataset?select=train.ml
- https://www.kaggle.com/datasets/akhisreelibra/malayalam-news
- https://www.kaggle.com/datasets/vigneshvit/malayalam-news-dataset
- https://huggingface.co/datasets/rajeshradhakrishnan/malayalam_wiki
- https://huggingface.co/datasets/Sakshamrzt/IndicNLP-Malayalam
- https://www.kaggle.com/datasets/disisbig/malayalam-wikipedia-articles