---
license: cc-by-4.0
---

Machine Learning for Two-Sample Testing under Right-Censored Data: A Simulation Study

About

This dataset is a supplement to the GitHub repository and paper addressing the two-sample problem under right-censored observations using machine learning. The problem is formulated as H0: S1(t)=S2(t) versus H1: S1(t)≠S2(t), where S1(t) and S2(t) are the survival functions of samples X1 and X2.

This dataset contains synthetic data simulated by the Monte Carlo method and inverse transform sampling.
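As an illustration of this simulation approach, the sketch below draws survival times by inverse transform sampling and applies independent right censoring. The Weibull distribution and the uniform censoring mechanism here are illustrative assumptions, not the exact configurations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_inverse_sample(n, shape, scale, rng):
    """Inverse transform sampling: T = scale * (-log(1 - U))**(1/shape)."""
    u = rng.uniform(size=n)
    return scale * (-np.log1p(-u)) ** (1.0 / shape)

def right_censor(event_times, censor_times):
    """Observed time = min(T, C); delta = 1 if the event is observed."""
    observed = np.minimum(event_times, censor_times)
    delta = (event_times <= censor_times).astype(int)
    return observed, delta

t = weibull_inverse_sample(1000, shape=1.5, scale=1.0, rng=rng)
c = rng.uniform(0.0, 3.0, size=1000)  # illustrative censoring mechanism
observed, delta = right_censor(t, c)
print(f"censoring rate: {1.0 - delta.mean():.3f}")
```

Varying the censoring-time distribution shifts the realized censoring rate, which is how the `perc` versus `real_perc1`/`real_perc2` distinction below arises.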

Repository

The files of this dataset have the following structure:

data
├── 1_raw
│   └── two_sample_problem_dataset.tsv.gz
├── 2_samples
│   ├── sample_train.tsv.gz
│   └── sample_simulation.tsv.gz
└── 3_dataset_with_ML_pred
    └── dataset_with_ML_pred.tsv.gz
  • two_sample_problem_dataset.tsv.gz contains the raw simulated data. In the GitHub repository, this file must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/1_raw/
  • sample_train.tsv.gz and sample_simulation.tsv.gz are the train and test samples split from two_sample_problem_dataset.tsv.gz. In the GitHub repository, these files must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/2_samples/
  • dataset_with_ML_pred.tsv.gz is the test sample supplemented with the predictions of the proposed ML methods. In the GitHub repository, this file must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/3_dataset_with_ML_pred/
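Once downloaded, the .tsv.gz files can be read directly with pandas (tab separator; gzip compression is inferred from the .gz suffix). A minimal round-trip sketch, where the frame, path, and columns are synthetic stand-ins:

```python
import os
import tempfile

import pandas as pd

# Synthetic stand-in frame; the real files are read with the same call, e.g.
#   df = pd.read_csv("data/2_samples/sample_train.tsv.gz", sep="\t")
demo = pd.DataFrame({"sample": ["train", "test"], "n1": [50, 100]})

path = os.path.join(tempfile.mkdtemp(), "sample_train.tsv.gz")
demo.to_csv(path, sep="\t", index=False)   # gzip inferred from the .gz suffix
df = pd.read_csv(path, sep="\t")           # compression inferred as well
print(df)
```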

Dataset & Samples

These files contain the following fields:

  1. PARAMETERS OF SAMPLE SIMULATION
  • sample is the type of the sample (train, val, test). This field is used to split the dataset into train-validate-test samples for ML-model training;
  • H0_H1 is a true hypothesis: if H0, then test statistics were simulated under S1(t)=S2(t); if H1, then test statistics were simulated under S1(t)≠S2(t);
  • Hi is an alternative hypothesis (H01-H09, H11-H19, or H21-H29) for S1(t) and S2(t). Detailed description of these alternatives can be found in the paper;
  • n1 is the size of sample 1;
  • n2 is the size of sample 2;
  • perc is the set (expected) censoring rate for samples 1 and 2;
  • real_perc1 is the actual censoring rate of sample 1;
  • real_perc2 is the actual censoring rate of sample 2;
  2. STATISTICS OF CLASSICAL TWO-SAMPLE TESTS
  • Peto_test is a statistic of the Peto and Peto’s Generalized Wilcoxon test (which is computed on two samples under parameters described above);
  • Gehan_test is a statistic of the Gehan’s Generalized Wilcoxon test;
  • logrank_test is a statistic of the logrank test;
  • CoxMantel_test is a statistic of the Cox-Mantel test;
  • BN_GPH_test is a statistic of the Bagdonavičius-Nikulin test (Generalized PH model);
  • BN_MCE_test is a statistic of the Bagdonavičius-Nikulin test (Multiple Crossing-Effect model);
  • BN_SCE_test is a statistic of the Bagdonavičius-Nikulin test (Single Crossing-Effect model);
  • Q_test is a statistic of the Q-test;
  • MAX_Value_test is a statistic of the Maximum Value test;
  • MIN3_test is a statistic of the MIN3 test;
  • WLg_logrank_test is a statistic of the Weighted Logrank test (weighted function: 'logrank');
  • WLg_TaroneWare_test is a statistic of the Weighted Logrank test (weighted function: 'Tarone-Ware');
  • WLg_Breslow_test is a statistic of the Weighted Logrank test (weighted function: 'Breslow');
  • WLg_PetoPrentice_test is a statistic of the Weighted Logrank test (weighted function: 'Peto-Prentice');
  • WLg_Prentice_test is a statistic of the Weighted Logrank test (weighted function: 'Prentice');
  • WKM_test is a statistic of the Weighted Kaplan-Meier test;
  3. STATISTICS OF THE PROPOSED ML-METHODS FOR TWO-SAMPLE PROBLEM
  • CatBoost_test is a statistic of the proposed ML-method based on the CatBoost framework;
  • XGBoost_test is a statistic of the proposed ML-method based on the XGBoost framework;
  • LightAutoML_test is a statistic of the proposed ML-method based on the LightAutoML (LAMA) framework;
  • SKLEARN_RF_test is a statistic of the proposed ML-method based on Random Forest (implemented in sklearn);
  • SKLEARN_LogReg_test is a statistic of the proposed ML-method based on Logistic Regression (implemented in sklearn);
  • SKLEARN_GB_test is a statistic of the proposed ML-method based on Gradient Boosting Machine (implemented in sklearn).
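With these columns, the empirical power of any of the tests can be estimated by thresholding its statistic at a quantile computed on the H0 rows. The sketch below runs on generated stand-in data; only the column names H0_H1 and logrank_test come from this card, and the normal distributions are illustrative assumptions, not the dataset's actual statistic distributions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Stand-in for dataset_with_ML_pred: statistic ~ N(0,1) under H0,
# shifted under H1 (purely illustrative).
df = pd.DataFrame({
    "H0_H1": ["H0"] * 5000 + ["H1"] * 5000,
    "logrank_test": np.concatenate([rng.normal(0.0, 1.0, 5000),
                                    rng.normal(2.0, 1.0, 5000)]),
})

alpha = 0.05
# Critical value: (1 - alpha) quantile of the statistic under H0.
crit = df.loc[df["H0_H1"] == "H0", "logrank_test"].quantile(1 - alpha)
# Empirical power: fraction of H1 statistics exceeding the critical value.
power = (df.loc[df["H0_H1"] == "H1", "logrank_test"] > crit).mean()
print(f"critical value ~ {crit:.2f}, empirical power ~ {power:.2f}")
```

On the real dataset, grouping this computation by Hi, n1, n2, and perc reproduces per-alternative power curves for each test column.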

Dataset Simulation

For this dataset, the full source code (C++) is available here. It makes it possible to reproduce and extend the simulation via the Monte Carlo method. Below, we present only the file main.cpp.

#include"simulation_for_machine_learning.h"

// Select two-sample tests
vector<HomogeneityTest*> AllTests()
{
    vector<HomogeneityTest*> D;
    
    // ---- Classical Two-Sample tests for Uncensored Case ----
    //D.push_back( new HT_AndersonDarlingPetitt );
    //D.push_back( new HT_KolmogorovSmirnovTest );
    //D.push_back( new HT_LehmannRosenblatt );
    
    // ---- Two-Sample tests for Right-Censored Case ----
    D.push_back( new HT_Peto );
    D.push_back( new HT_Gehan );
    D.push_back( new HT_Logrank );
    
    D.push_back( new HT_BagdonaviciusNikulinGeneralizedCox );
    D.push_back( new HT_BagdonaviciusNikulinMultiple );
    D.push_back( new HT_BagdonaviciusNikulinSingle );

    D.push_back( new HT_QTest );			//based on the Kaplan-Meier estimator
    D.push_back( new HT_MAX );				//Maximum Value test
    D.push_back( new HT_SynthesisTest );	//MIN3 test
    
    D.push_back( new HT_WeightedLogrank("logrank") );
    D.push_back( new HT_WeightedLogrank("Tarone-Ware") );
    D.push_back( new HT_WeightedLogrank("Breslow") );
    D.push_back( new HT_WeightedLogrank("Peto-Prentice") );
    D.push_back( new HT_WeightedLogrank("Prentice") );
    
    D.push_back( new HT_WeightedKaplanMeyer );
        
    return D;
}

// Example of two-sample testing using this code
void EXAMPLE_1(vector<HomogeneityTest*> &D)
{
    // load the samples
    Sample T1(".//samples//1Chemotherapy.txt");
    Sample T2(".//samples//2Radiotherapy.txt");

    // two-sample testing through selected tests
    for(size_t j=0; j<D.size(); j++)
    {
        char test_name[512];
        D[j]->TitleTest(test_name);
        

        double Sn = D[j]->CalculateStatistic(T1, T2);
        double pvalue = D[j]->p_value(T1, T2, 27000);  // 27,000 replicates: by Kolmogorov's theorem, simulation error MAX||G(S|H0)-Gn(S|H0)|| <= 0.01

        printf("%s\n", test_name);
        printf("\t Sn: %lf\n", Sn);
        printf("\t pv: %lf\n", pvalue);
        printf("--------------------------------\n");
    }
}

// Example of the dataset simulation for the proposed ML-method
void EXAMPLE_2(vector<HomogeneityTest*> &D)
{
    // Run dataset (train or test sample) simulation (results in ".//to_machine_learning_2024//")
    simulation_for_machine_learning sm(D);
}

// init point
int main()
{
    // Set the number of threads
    int k = omp_get_max_threads() - 1;
    omp_set_num_threads( k );

    // Select two-sample tests
    auto D = AllTests();
    
    // Example of two-sample testing using this code
    EXAMPLE_1(D);

    // Example of the dataset simulation for the proposed ML-method
    EXAMPLE_2(D);

    // Freeing memory
    ClearMemory(D);
    
    printf("The mission is completed.\n");
    return 0;
}
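The 27,000 Monte Carlo replicates used in EXAMPLE_1 keep the simulated null distribution within roughly 0.01 of the true one. One way to check this figure (a sketch via the Dvoretzky-Kiefer-Wolfowitz inequality, a quantitative companion to the Kolmogorov theorem cited in the code comment):

```python
import math

# DKW inequality: P( sup_S |G_n(S|H0) - G(S|H0)| > eps ) <= 2*exp(-2*n*eps**2)
n, eps = 27000, 0.01
bound = 2.0 * math.exp(-2.0 * n * eps ** 2)
print(f"P(simulation error > {eps}) <= {bound:.4f}")
```

With n = 27000 and eps = 0.01 the bound is about 0.009, so the empirical null distribution deviates by more than 0.01 with probability under 1%.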