---
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: chunks
    dtype: string
  - name: repo_name
    dtype: string
  splits:
  - name: train
    num_bytes: 111941
    num_examples: 251
  download_size: 37308
  dataset_size: 111941
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset info
This dataset contains documentation chunks from repositories (ADD REPOS).
## Postprocessing
On inspection, some chunks were too short to be meaningful, so we removed them: any chunk whose token count (computed with the same tokenizer as the model used for the embeddings) falls below the 5th percentile is dropped:
```python
from datasets import Dataset
from transformers import AutoTokenizer

# Same tokenizer as the embedding model.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")

df = ds.to_pandas()
df["token_length"] = df["chunks"].apply(lambda x: len(tokenizer.encode(x)))

# Keep only chunks at or above the 5th percentile of token length.
df_filtered = df[df["token_length"] >= df["token_length"].quantile(0.05)]
ds = Dataset.from_pandas(df_filtered[["filename", "chunks", "repo_name"]], preserve_index=False)
```
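The effect of the percentile cutoff can be sanity-checked on toy data; the sketch below uses a whitespace split as a stand-in for the real BGE tokenizer, and invented chunk texts:

```python
import pandas as pd

# Toy chunks of increasing length; whitespace split stands in for the tokenizer.
df = pd.DataFrame({
    "filename": [f"doc{i}.md" for i in range(5)],
    "chunks": ["a", "a b c", "a b c d", "a b c d e", "a b c d e f"],
    "repo_name": ["demo/repo"] * 5,
})
df["token_length"] = df["chunks"].apply(lambda x: len(x.split()))

# Rows below the 5th percentile of token length are dropped.
cutoff = df["token_length"].quantile(0.05)
kept = df[df["token_length"] >= cutoff]
```

Here only the one-token outlier `"a"` falls below the cutoff, so four of the five rows survive, mirroring what the filter does to the real dataset's shortest chunks.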