# Mosaic format for filtered dedup text dataset to train Malaysian LLM
This repository stores dataset shards in MosaicML Streaming (MDS) format.

- Prepared at https://github.com/malaysia-ai/dedup-text-dataset/blob/main/pretrain-llm/combine-dedup-text-dataset-filtered-4096.ipynb
- Tokenized with https://huggingface.co/malaysia-ai/bpe-tokenizer
- Packed to a 4096-token context length.
## How-to

- Clone the repository:

```bash
git lfs clone https://huggingface.co/datasets/malaysia-ai/mosaic-dedup-text-dataset
```
- Load it:

```python
import numpy as np
from streaming import LocalDataset
from streaming.base.format.mds.encodings import Encoding, _encodings


class UInt16(Encoding):
    """Decode the raw-bytes 'uint16' column back into a numpy array."""

    def encode(self, obj) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes):
        return np.frombuffer(data, np.uint16)


# Register the custom encoding so LocalDataset can read the 'uint16' column.
_encodings['uint16'] = UInt16

# Adjust the path to wherever the shards were cloned.
dataset = LocalDataset('mosaic-dedup-text-dataset-filtered')
len(dataset)
```
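The `UInt16` encoding above is just a byte-level round trip: `encode` dumps the numpy array's buffer and `decode` reinterprets those bytes as `uint16` values. A quick standalone check of that round trip, independent of the streaming library (the token values here are made up):

```python
import numpy as np

# Hypothetical token IDs; any value below 2**16 survives the round trip.
tokens = np.array([5, 31000, 65535], dtype=np.uint16)

encoded = tokens.tobytes()                    # what UInt16.encode produces
decoded = np.frombuffer(encoded, np.uint16)   # what UInt16.decode returns

print(decoded.tolist())  # [5, 31000, 65535]
print(len(encoded))      # 6 bytes: 2 bytes per uint16 token
```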