---
pretty_name: LexaLCM Datasets
---

# LexaLCM Datasets

This repository contains the datasets used to train the LexaLCM model. Each dataset contains at least the following columns, which the LexaLCM model expects:

- `text_sentences`: the text of the document, as a list of sentences.
- `text_sentences_sonar_emb`: the SONAR embeddings of the sentences, a list of 1024-dimensional vectors.
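As a concrete illustration, a minimal schema check might look like the sketch below. The helper `validate_row`, the sample values, and the assumption of one embedding per sentence are all illustrative, not part of this repository:

```python
# Illustrative schema check for a dataset row (not part of this repository).
def validate_row(row: dict) -> bool:
    """Check that a row has the columns the LexaLCM model expects."""
    sentences = row.get("text_sentences")
    embeddings = row.get("text_sentences_sonar_emb")
    if not isinstance(sentences, list) or not isinstance(embeddings, list):
        return False
    # Assumes one 1024-dimensional SONAR embedding per sentence.
    return len(sentences) == len(embeddings) and all(
        len(vec) == 1024 for vec in embeddings
    )

example = {
    "text_sentences": ["A first sentence.", "A second sentence."],
    "text_sentences_sonar_emb": [[0.0] * 1024, [0.0] * 1024],
}
print(validate_row(example))  # True
```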

## Datasets

## Requirements

- Python 3.10
- [UV](https://docs.astral.sh/uv/)... if you haven't tried it yet, you should! UV is a modern Python package manager that is much faster than pip.

## Usage

### Stochastically split the dataset into train and validation sets (if needed)

If you want to add more datasets while keeping the same train/validation split ratio, you can split them with the following script:

```bash
uv run src/Scripts/Split_TrainVal.py -n <dataset_name> -d <dataset_dir> -s <split_ratio>
```

where:

- `-n` is the name of the dataset
- `-d` is the path to the directory containing the dataset
- `-s` is the split ratio for the dataset

For example:

```bash
uv run src/Scripts/Split_TrainVal.py -n Wikipedia_Ja -d ./src/Some/Other/Path -s 0.15
```
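For reference, the stochastic split itself can be sketched as below. This is a hypothetical re-implementation, not the actual `Split_TrainVal.py`; the function name, default seed, and rounding behavior are assumptions:

```python
import random

# Hypothetical sketch of a stochastic train/val split; the real
# Split_TrainVal.py may differ in naming, seeding, and output format.
def split_train_val(num_rows: int, val_ratio: float, seed: int = 42):
    """Shuffle row indices with a fixed seed; hold out val_ratio for validation."""
    indices = list(range(num_rows))
    random.Random(seed).shuffle(indices)
    n_val = round(num_rows * val_ratio)
    return indices[n_val:], indices[:n_val]

train_idx, val_idx = split_train_val(num_rows=1000, val_ratio=0.15)
print(len(train_idx), len(val_idx))  # 850 150
```

A fixed seed makes the split reproducible, so rerunning it on the same dataset yields the same train/validation partition.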

### Verify the embeddings

```bash
uv run src/Scripts/VerifyEmbeddings.py -d <dataset_dir>
```

where:

- `-d` is the path to the directory containing the dataset

For example:

```bash
uv run src/Scripts/VerifyEmbeddings.py -d ./src/Datasets/Wikipedia_Ja/Train
```
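The kind of check such a script could run can be sketched as below; `verify_embeddings` and its exact criteria (dimensionality and finiteness) are assumptions, not the actual implementation:

```python
import math

# Hypothetical sketch of an embedding check: every vector should be
# 1024-dimensional and contain only finite floats. The real
# VerifyEmbeddings.py may perform different or additional checks.
def verify_embeddings(rows, dim: int = 1024):
    """Return a list of (row_index, problem) tuples; empty means all rows passed."""
    problems = []
    for i, row in enumerate(rows):
        for vec in row["text_sentences_sonar_emb"]:
            if len(vec) != dim:
                problems.append((i, f"expected {dim} dims, got {len(vec)}"))
            elif not all(math.isfinite(x) for x in vec):
                problems.append((i, "non-finite value in embedding"))
    return problems

rows = [{"text_sentences_sonar_emb": [[0.1] * 1024]}]
print(verify_embeddings(rows))  # []
```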

### Visualize the dataset

```bash
uv run src/Scripts/VisualizeDataset.py -d <dataset_dir> [-s] [-b <batch_size>]
```

where:

- `-d` is the path to the directory containing the dataset
- `-s`, if set, uses a 10% sample of the dataset for faster processing
- `-b` is the batch size used when processing the dataset

For example:

```bash
uv run src/Scripts/VisualizeDataset.py -d ./src/Datasets/Wikipedia_Ja/Train
```
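As an illustration, the kind of summary statistics a visualization script might plot can be computed as below; `dataset_stats` and the chosen quantities (sentence counts, embedding norms) are assumptions, not taken from `VisualizeDataset.py`:

```python
import math

# Illustrative statistics a dataset visualization might plot:
# per-document sentence counts and per-sentence embedding L2 norms.
def dataset_stats(rows):
    sentence_counts = [len(r["text_sentences"]) for r in rows]
    norms = [
        math.sqrt(sum(x * x for x in vec))
        for r in rows
        for vec in r["text_sentences_sonar_emb"]
    ]
    return sentence_counts, norms

rows = [{
    "text_sentences": ["Hi.", "Bye."],
    "text_sentences_sonar_emb": [[0.0] * 1023 + [3.0],
                                 [0.0] * 1023 + [4.0]],
}]
counts, norms = dataset_stats(rows)
print(counts, norms)  # [2] [3.0, 4.0]
```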