---
license: cc-by-4.0
---

# Machine Learning for Two-Sample Testing under Right-Censored Data: A Simulation Study
- [Petr PHILONENKO](https://orcid.org/0000-0002-6295-4470), Ph.D. in Computer Science;
- [Sergey POSTOVALOV](https://orcid.org/0000-0003-3718-1936), D.Sc. in Computer Science.

# About
This dataset is a supplement to the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting) and the paper addressing the two-sample problem under right-censored observations using Machine Learning.
The problem statement can be formulated as H0: S1(t)=S2(t) versus H1: S1(t)≠S2(t), where S1(t) and S2(t) are the survival functions of samples X1 and X2.

This dataset contains the synthetic data simulated by the Monte Carlo method and Inverse Transform Sampling.
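As a minimal illustration of the sampling scheme (a sketch, not the authors' C++ implementation; the exponential lifetime and uniform censoring distributions here are assumptions chosen for the example), Inverse Transform Sampling draws a uniform variate and maps it through the inverse CDF, after which right-censoring keeps the minimum of the lifetime and a censoring time:

```python
import numpy as np

rng = np.random.default_rng(42)

def inverse_transform_sample(inv_cdf, size):
    """Draw samples by applying the inverse CDF to uniform variates."""
    u = rng.uniform(size=size)
    return inv_cdf(u)

# Example: exponential lifetimes with rate lam, F^{-1}(u) = -ln(1 - u) / lam
lam = 0.5
lifetimes = inverse_transform_sample(lambda u: -np.log(1.0 - u) / lam, 1000)

# Right-censoring: observe min(T, C) and the event indicator
censoring = rng.uniform(0.0, 8.0, size=1000)
observed = np.minimum(lifetimes, censoring)
event = (lifetimes <= censoring).astype(int)  # 1 = event, 0 = censored
```

Varying the censoring distribution controls the expected censoring rate (the `perc` field described below).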

# Repository

The files of this dataset have the following structure:
~~~
data
├── 1_raw
│   └── two_sample_problem_dataset.tsv.gz
├── 2_samples
│   ├── sample_train.tsv.gz
│   └── sample_simulation.tsv.gz
└── 3_dataset_with_ML_pred
    └── dataset_with_ML_pred.tsv.gz
~~~

- **two_sample_problem_dataset.tsv.gz** contains the raw simulated data. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), this file must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/1_raw/_
- **sample_train.tsv.gz** and **sample_simulation.tsv.gz** are the train and test samples split from **two_sample_problem_dataset.tsv.gz**. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), these files must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/2_samples/_
- **dataset_with_ML_pred.tsv.gz** is the test sample supplemented with the predictions of the proposed ML-methods. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), this file must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/3_dataset_with_ML_pred/_
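The archives are plain tab-separated files, so they can be read directly with pandas. A minimal sketch (the tiny stand-in file and its columns below are fabricated so the snippet runs on its own; in practice, point `read_csv` at the downloaded `*.tsv.gz` archives):

```python
import gzip

import pandas as pd

# Write a tiny stand-in file so this snippet is self-contained;
# replace it with one of the downloaded *.tsv.gz archives in practice.
with gzip.open("two_sample_problem_dataset.tsv.gz", "wt") as f:
    f.write("sample\tH0_H1\tn1\tn2\tperc\tlogrank_test\n")
    f.write("train\tH0\t100\t100\t10\t0.42\n")
    f.write("test\tH1\t200\t100\t20\t3.17\n")

# pandas infers gzip compression from the .gz extension
df = pd.read_csv("two_sample_problem_dataset.tsv.gz", sep="\t")
train = df[df["sample"] == "train"]
```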

# Dataset & Samples
These files contain the following fields:

1) PARAMETERS OF SAMPLE SIMULATION
- **sample** is the sample type (train, val, test). This field is used to split the dataset into train/validation/test samples for ML-model training;
- **H0_H1** is the true hypothesis: if **H0**, then the test statistics were simulated under S1(t)=S2(t); if **H1**, then the test statistics were simulated under S1(t)≠S2(t);
- **Hi** is the alternative hypothesis (H01-H09, H11-H19, or H21-H29) for S1(t) and S2(t). A detailed description of these alternatives can be found in the paper;
- **n1** is the size of sample 1;
- **n2** is the size of sample 2;
- **perc** is the set (expected) censoring rate for samples 1 and 2;
- **real_perc1** is the actual censoring rate of sample 1;
- **real_perc2** is the actual censoring rate of sample 2;

2) STATISTICS OF CLASSICAL TWO-SAMPLE TESTS
- **Peto_test** is a statistic of the Peto and Peto's Generalized Wilcoxon test (computed on two samples simulated under the parameters described above);
- **Gehan_test** is a statistic of the Gehan’s Generalized Wilcoxon test;
- **logrank_test** is a statistic of the logrank test;
- **CoxMantel_test** is a statistic of the Cox-Mantel test;
- **BN_GPH_test** is a statistic of the Bagdonavičius-Nikulin test (Generalized PH model);
- **BN_MCE_test** is a statistic of the Bagdonavičius-Nikulin test (Multiple Crossing-Effect model);
- **BN_SCE_test** is a statistic of the Bagdonavičius-Nikulin test (Single Crossing-Effect model);
- **Q_test** is a statistic of the Q-test;
- **MAX_Value_test** is a statistic of the Maximum Value test;
- **MIN3_test** is a statistic of the MIN3 test;
- **WLg_logrank_test** is a statistic of the Weighted Logrank test (weighted function: 'logrank');
- **WLg_TaroneWare_test** is a statistic of the Weighted Logrank test (weighted function: 'Tarone-Ware');
- **WLg_Breslow_test** is a statistic of the Weighted Logrank test (weighted function: 'Breslow');
- **WLg_PetoPrentice_test** is a statistic of the Weighted Logrank test (weighted function: 'Peto-Prentice');
- **WLg_Prentice_test** is a statistic of the Weighted Logrank test (weighted function: 'Prentice');
- **WKM_test** is a statistic of the Weighted Kaplan-Meier test;

3) STATISTICS OF THE PROPOSED ML-METHODS FOR TWO-SAMPLE PROBLEM
- **CatBoost_test** is a statistic of the proposed ML-method based on the CatBoost framework;
- **XGBoost_test** is a statistic of the proposed ML-method based on the XGBoost framework;
- **LightAutoML_test** is a statistic of the proposed ML-method based on the LightAutoML (LAMA) framework;
- **SKLEARN_RF_test** is a statistic of the proposed ML-method based on Random Forest (implemented in sklearn);
- **SKLEARN_LogReg_test** is a statistic of the proposed ML-method based on Logistic Regression (implemented in sklearn);
- **SKLEARN_GB_test** is a statistic of the proposed ML-method based on Gradient Boosting Machine (implemented in sklearn).
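A common use of these statistic columns is to compare tests by empirical power: calibrate a critical value on the **H0** rows of a column, then measure the rejection rate on the **H1** rows. A sketch with synthetic stand-in data (the Gaussian distributions below are assumptions for illustration, not the dataset's actual statistic distributions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for the real columns: one statistic under H0 and H1.
df = pd.DataFrame({
    "H0_H1": ["H0"] * 5000 + ["H1"] * 5000,
    "CatBoost_test": np.concatenate([
        rng.normal(0.0, 1.0, 5000),   # statistic distribution under H0
        rng.normal(2.0, 1.0, 5000),   # shifted under H1
    ]),
})

# Calibrate a critical value at level alpha from the H0 rows,
# then estimate power as the rejection rate on the H1 rows.
alpha = 0.05
crit = df.loc[df["H0_H1"] == "H0", "CatBoost_test"].quantile(1 - alpha)
power = (df.loc[df["H0_H1"] == "H1", "CatBoost_test"] > crit).mean()
```

Repeating this per statistic column (grouped by **Hi**, **n1**/**n2**, and **perc**) compares the classical tests against the proposed ML-methods under each simulation setting.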

# Dataset Simulation

For this dataset, the full source code (C++) is available [here](https://github.com/pfilonenko/ML_for_TwoSampleTesting/tree/main/dataset/simulation).
It makes it possible to reproduce and extend the simulation by the Monte Carlo method. Here, we present only the file main.cpp.

```C++
#include"simulation_for_machine_learning.h"

// Select two-sample tests
vector<HomogeneityTest*> AllTests()
{
	vector<HomogeneityTest*> D;
	
	// ---- Classical Two-Sample tests for Uncensored Case ----
	//D.push_back( new HT_AndersonDarlingPetitt );
	//D.push_back( new HT_KolmogorovSmirnovTest );
	//D.push_back( new HT_LehmannRosenblatt );
	
	// ---- Two-Sample tests for Right-Censored Case ----
	D.push_back( new HT_Peto );
	D.push_back( new HT_Gehan );
	D.push_back( new HT_Logrank );
	
	D.push_back( new HT_BagdonaviciusNikulinGeneralizedCox );
	D.push_back( new HT_BagdonaviciusNikulinMultiple );
	D.push_back( new HT_BagdonaviciusNikulinSingle );

	D.push_back( new HT_QTest );			//based on the Kaplan-Meier estimator
	D.push_back( new HT_MAX );				//Maximum Value test
	D.push_back( new HT_SynthesisTest );	//MIN3 test
	
	D.push_back( new HT_WeightedLogrank("logrank") );
	D.push_back( new HT_WeightedLogrank("Tarone-Ware") );
	D.push_back( new HT_WeightedLogrank("Breslow") );
	D.push_back( new HT_WeightedLogrank("Peto-Prentice") );
	D.push_back( new HT_WeightedLogrank("Prentice") );
	
	D.push_back( new HT_WeightedKaplanMeyer );
		
	return D;
}

// Example of two-sample testing using this code
void EXAMPLE_1(vector<HomogeneityTest*> &D)
{
	// load the samples
	Sample T1(".//samples//1Chemotherapy.txt");
	Sample T2(".//samples//2Radiotherapy.txt");

	// two-sample testing through selected tests
	for(int j=0; j<D.size(); j++)
	{
		char test_name[512];
		D[j]->TitleTest(test_name);
		

		double Sn = D[j]->CalculateStatistic(T1, T2);
		double pvalue = D[j]->p_value(T1, T2, 27000);  // 27k replications according to Kolmogorov's theorem => simulation error MAX||G(S|H0)-Gn(S|H0)|| <= 0.01

		printf("%s\n", test_name);  // pass the array itself, not its address
		printf("\t Sn: %lf\n", Sn);
		printf("\t pv: %lf\n", pvalue);
		printf("--------------------------------\n");
	}
}

// Example of the dataset simulation for the proposed ML-method
void EXAMPLE_2(vector<HomogeneityTest*> &D)
{
	// Run dataset (train or test sample) simulation (results in ".//to_machine_learning_2024//")
	simulation_for_machine_learning sm(D);
}

// init point
int main()
{
	// Set the number of threads
	int k = omp_get_max_threads() - 1;
	omp_set_num_threads( k );

	// Select two-sample tests
	auto D = AllTests();
	
	// Example of two-sample testing using this code
	EXAMPLE_1(D);

	// Example of the dataset simulation for the proposed ML-method
	EXAMPLE_2(D);

	// Freeing memory
	ClearMemory(D);
	
	printf("The mission is completed.\n");
	return 0;
}
```