License: cc-by-4.0
Machine Learning for Two-Sample Testing under Right-Censored Data: A Simulation Study
- Petr PHILONENKO, Ph.D. in Computer Science;
- Sergey POSTOVALOV, D.Sc. in Computer Science.
Citing
@misc{petr_philonenko_2024,
  author    = {{Petr Philonenko}},
  title     = {ML_for_TwoSampleTesting (Revision a4ae672)},
  year      = 2024,
  url       = {https://huggingface.co/datasets/pfilonenko/ML_for_TwoSampleTesting},
  doi       = {10.57967/hf/2978},
  publisher = {Hugging Face}
}
About
This dataset is a supplement to the GitHub repository and paper addressing the two-sample problem under right-censored observations using Machine Learning. The problem statement can be formulated as H0: S1(t)=S2(t) versus H1: S1(t)≠S2(t), where S1(t) and S2(t) are the survival functions of samples X1 and X2.
This dataset contains synthetic data simulated by the Monte Carlo method with Inverse Transform Sampling.
Repository
The files of this dataset have the following structure:
data
├── 1_raw
│ └── two_sample_problem_dataset.tsv.gz (121,986,000 rows)
├── 2_samples
│ ├── sample_train.tsv.gz (24,786,000 rows)
│ └── sample_simulation.tsv.gz (97,200,000 rows)
└── 3_dataset_with_ML_pred
└── dataset_with_ML_pred.tsv.gz (97,200,000 rows)
- two_sample_problem_dataset.tsv.gz contains the raw simulated data. In the GitHub repository, this file must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/1_raw/
- sample_train.tsv.gz and sample_simulation.tsv.gz are the train and test samples split from two_sample_problem_dataset.tsv.gz. In the GitHub repository, these files must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/2_samples/
- dataset_with_ML_pred.tsv.gz is the test sample supplemented with the predictions of the proposed ML-methods. In the GitHub repository, this file must be located in ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/3_dataset_with_ML_pred/
Dataset & Samples
These files contain the following fields:
- PARAMETERS OF SAMPLE SIMULATION
- iter is the iteration number of the Monte Carlo replication (37,650 in total);
- sample is the type of the sample (train, val, test). This field is used to split the dataset into train/validation/test samples for ML-model training;
- H0_H1 is the true hypothesis: if H0, then samples X1 and X2 were simulated under S1(t)=S2(t); if H1, then samples X1 and X2 were simulated under S1(t)≠S2(t);
- Hi is the alternative (H01-H09, H11-H19, or H21-H29) with competing hypotheses S1(t) and S2(t). A detailed description of these alternatives can be found in the paper;
- n1 is the size of sample 1;
- n2 is the size of sample 2;
- perc is the nominal (expected) censoring rate for samples 1 and 2;
- real_perc1 is the actual censoring rate of sample 1;
- real_perc2 is the actual censoring rate of sample 2;
- STATISTICS OF CLASSICAL TWO-SAMPLE TESTS
- Peto_test is a statistic of the Peto and Peto’s Generalized Wilcoxon test (computed on two samples under the parameters described above);
- Gehan_test is a statistic of the Gehan’s Generalized Wilcoxon test;
- logrank_test is a statistic of the logrank test;
- CoxMantel_test is a statistic of the Cox-Mantel test;
- BN_GPH_test is a statistic of the Bagdonavičius-Nikulin test (Generalized PH model);
- BN_MCE_test is a statistic of the Bagdonavičius-Nikulin test (Multiple Crossing-Effect model);
- BN_SCE_test is a statistic of the Bagdonavičius-Nikulin test (Single Crossing-Effect model);
- Q_test is a statistic of the Q-test;
- MAX_Value_test is a statistic of the Maximum Value test;
- MIN3_test is a statistic of the MIN3 test;
- WLg_logrank_test is a statistic of the Weighted Logrank test (weighted function: 'logrank');
- WLg_TaroneWare_test is a statistic of the Weighted Logrank test (weighted function: 'Tarone-Ware');
- WLg_Breslow_test is a statistic of the Weighted Logrank test (weighted function: 'Breslow');
- WLg_PetoPrentice_test is a statistic of the Weighted Logrank test (weighted function: 'Peto-Prentice');
- WLg_Prentice_test is a statistic of the Weighted Logrank test (weighted function: 'Prentice');
- WKM_test is a statistic of the Weighted Kaplan-Meier test;
- STATISTICS OF THE PROPOSED ML-METHODS FOR TWO-SAMPLE PROBLEM
- CatBoost_test is a statistic of the proposed ML-method based on the CatBoost framework;
- XGBoost_test is a statistic of the proposed ML-method based on the XGBoost framework;
- LightAutoML_test is a statistic of the proposed ML-method based on the LightAutoML (LAMA) framework;
- SKLEARN_RF_test is a statistic of the proposed ML-method based on Random Forest (implemented in sklearn);
- SKLEARN_LogReg_test is a statistic of the proposed ML-method based on Logistic Regression (implemented in sklearn);
- SKLEARN_GB_test is a statistic of the proposed ML-method based on Gradient Boosting Machine (implemented in sklearn).
Dataset Simulation
For this dataset, the full source code (C++) is available here. It makes it possible to reproduce and extend the simulation by the Monte Carlo method. Below, we present two fragments of the source code (main.cpp and simulation_for_machine_learning.h) that help to understand the main steps of the simulation process.
main.cpp
#include "simulation_for_machine_learning.h"
// Select two-sample tests
vector<HomogeneityTest*> AllTests()
{
vector<HomogeneityTest*> D;
// ---- Classical Two-Sample tests for Uncensored Case ----
//D.push_back( new HT_AndersonDarlingPetitt );
//D.push_back( new HT_KolmogorovSmirnovTest );
//D.push_back( new HT_LehmannRosenblatt );
// ---- Two-Sample tests for Right-Censored Case ----
D.push_back( new HT_Peto );
D.push_back( new HT_Gehan );
D.push_back( new HT_Logrank );
D.push_back( new HT_BagdonaviciusNikulinGeneralizedCox );
D.push_back( new HT_BagdonaviciusNikulinMultiple );
D.push_back( new HT_BagdonaviciusNikulinSingle );
D.push_back( new HT_QTest ); //based on the Kaplan-Meier estimator
D.push_back( new HT_MAX ); //Maximum Value test
D.push_back( new HT_SynthesisTest ); //MIN3 test
D.push_back( new HT_WeightedLogrank("logrank") );
D.push_back( new HT_WeightedLogrank("Tarone-Ware") );
D.push_back( new HT_WeightedLogrank("Breslow") );
D.push_back( new HT_WeightedLogrank("Peto-Prentice") );
D.push_back( new HT_WeightedLogrank("Prentice") );
D.push_back( new HT_WeightedKaplanMeyer );
return D;
}
// Example of two-sample testing using this code
void EXAMPLE_1(vector<HomogeneityTest*> &D)
{
// load the samples
Sample T1(".//samples//1Chemotherapy.txt");
Sample T2(".//samples//2Radiotherapy.txt");
// two-sample testing through selected tests
for(int j=0; j<D.size(); j++)
{
char test_name[512];
D[j]->TitleTest(test_name);
double Sn = D[j]->CalculateStatistic(T1, T2);
double pvalue = D[j]->p_value(T1, T2, 27000); // 27k according to Kolmogorov's theorem => simulation error max||G(S|H0)-Gn(S|H0)|| <= 0.01
printf("%s\n", test_name);
printf("\t Sn: %lf\n", Sn);
printf("\t pv: %lf\n", pvalue);
printf("--------------------------------\n");
}
}
// Example of the dataset simulation for the proposed ML-method
void EXAMPLE_2(vector<HomogeneityTest*> &D)
{
// Run dataset (train or test sample) simulation (results in ".//to_machine_learning_2024//")
simulation_for_machine_learning sm(D);
}
// init point
int main()
{
// Set the number of threads
int k = omp_get_max_threads() - 1;
omp_set_num_threads( k );
// Select two-sample tests
auto D = AllTests();
// Example of two-sample testing using this code
EXAMPLE_1(D);
// Example of the dataset simulation for the proposed ML-method
EXAMPLE_2(D);
// Freeing memory
ClearMemory(D);
printf("The mission is completed.\n");
return 0;
}
simulation_for_machine_learning.h
#ifndef simulation_for_machine_learning_H
#define simulation_for_machine_learning_H
#include "HelpFucntions.h"
// Object of the data simulation for training of the proposed ML-method
class simulation_for_machine_learning{
private:
// p-value computation using the Test and Test Statistic (Sn)
double pvalue(double Sn, HomogeneityTest* Test)
{
auto f = Test->F( Sn );
double pv = 0;
if( Test->TestType() == "right" )
pv = 1.0 - f;
else
if( Test->TestType() == "left" )
pv = f;
else // "double"
pv = 2.0*min( f, 1.0-f );
return pv;
}
// Process of simulation
void Simulation(int iter, vector<HomogeneityTest*> &D, int rank, mt19937boost Gw)
{
// build the name of the file to save
char file_to_save[512];
sprintf(file_to_save,".//to_machine_learning_2024//to_machine_learning[rank=%d].csv", rank);
// if this is the very first iteration, write the file header
if( iter == 0 )
{
FILE *ou = fopen(file_to_save,"w");
fprintf(ou, "num;H0/H1;model;n1;n2;perc;real_perc1;real_perc2;");
for(int i=0; i<D.size(); i++)
{
char title_of_test[512];
D[i]->TitleTest(title_of_test);
fprintf(ou, "Sn [%s];p-value [%s];", title_of_test, title_of_test);
}
fprintf(ou, "\n");
fclose(ou);
}
// Getting list of the Alternative Hypotheses (H01 - H27)
vector<int> H;
int l = 1;
for(int i=100; i<940; i+=100) // Groups of Alternative Hypotheses (I, II, III, IV, V, VI, VII, VIII, IX)
{
for(int j=10; j<40; j+=10) // Alternative Hypotheses in the Group (e.g., H01, H02, H03 into the I and so on)
//for(int l=1; l<4; l++) // various families of distribution of censoring time F^C(t)
H.push_back( 1000+i+j+l );
}
// Sample sizes
vector<int> sample_sizes;
sample_sizes.push_back( 20 ); // n1 = n2 = 20
sample_sizes.push_back( 30 ); // n1 = n2 = 30
sample_sizes.push_back( 50 ); // n1 = n2 = 50
sample_sizes.push_back( 75 ); // n1 = n2 = 75
sample_sizes.push_back( 100 ); // n1 = n2 = 100
sample_sizes.push_back( 150 ); // n1 = n2 = 150
sample_sizes.push_back( 200 ); // n1 = n2 = 200
sample_sizes.push_back( 300 ); // n1 = n2 = 300
sample_sizes.push_back( 500 ); // n1 = n2 = 500
sample_sizes.push_back( 1000 ); // n1 = n2 = 1000
// Simulation (Getting H, Simulation samples, Computation of the test statistics & Save to file)
for(int i = 0; i<H.size(); i++)
{
int Hyp = H[i];
if(rank == 0)
printf("\tH = %d\n",Hyp);
for(int per = 0; per<51; per+=10)
{
// ---- Getting Hi ----
AlternativeHypotheses H0_1(Hyp,1,0), H0_2(Hyp,2,0);
AlternativeHypotheses H1_1(Hyp,1,per), H1_2(Hyp,2,per);
for(int jj=0; jj<sample_sizes.size(); jj++)
{
int n = sample_sizes[jj];
// ---- Simulation samples ----
//competing hypothesis H0
Sample A0(*H0_1.D,n,Gw);
Sample B0(*H0_1.D,n,Gw);
if( per > 0 )
{
A0.CensoredTypeThird(*H1_1.D,Gw);
B0.CensoredTypeThird(*H1_1.D,Gw);
}
//competing hypothesis H1
Sample A1(*H0_1.D,n,Gw);
Sample B1(*H0_2.D,n,Gw);
if( per > 0 )
{
A1.CensoredTypeThird(*H1_1.D,Gw);
B1.CensoredTypeThird(*H1_2.D,Gw);
}
// ---- Computation of the test statistics & Save to file ----
//Sn and p-value computation under H0
FILE *ou = fopen(file_to_save, "a");
auto perc1 = A0.RealCensoredPercent();
auto perc2 = B0.RealCensoredPercent();
fprintf(ou,"%d;", iter);
fprintf(ou,"H0;");
fprintf(ou,"%d;", Hyp);
fprintf(ou,"%d;%d;", n,n);
fprintf(ou,"%d;%lf;%lf", per, perc1, perc2);
for(int j=0; j<D.size(); j++)
{
auto Sn_H0 = D[j]->CalculateStatistic(A0, B0);
auto pv_H0 = 0.0; // skipped here (computed later in the ML framework)
fprintf(ou, ";%lf;0", Sn_H0);
}
fprintf(ou, "\n");
//Sn and p-value computation under H1
perc1 = A1.RealCensoredPercent();
perc2 = B1.RealCensoredPercent();
fprintf(ou,"%d;", iter);
fprintf(ou,"H1;");
fprintf(ou,"%d;", Hyp);
fprintf(ou,"%d;%d;", n,n);
fprintf(ou,"%d;%lf;%lf", per, perc1, perc2);
for(int j=0; j<D.size(); j++)
{
auto Sn_H1 = D[j]->CalculateStatistic(A1, B1);
auto pv_H1 = 0.0; // skipped here (computed later in the ML framework)
fprintf(ou, ";%lf;0", Sn_H1);
}
fprintf(ou, "\n");
fclose( ou );
}
}
}
}
public:
// Constructor of the class
simulation_for_machine_learning(vector<HomogeneityTest*> &D)
{
int N = 40000; // number of the Monte-Carlo replications
#pragma omp parallel for
for(int k=0; k<N; k++)
{
int rank = omp_get_thread_num();
auto gen = GwMT19937[rank];
if(rank == 0)
printf("\r%d", k);
Simulation(k, D, rank, gen);
}
}
};
#endif