Gilfoyle727 committed
Commit ba9f51f · verified · 1 Parent(s): 6999e10

Add files using upload-large-folder tool
Dataprocessing_code.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a41c9d52a2546e4ec565d011db4e5973af9f2289025b255150093e70c09c770
+ size 14123
README.md ADDED
@@ -0,0 +1,147 @@
+ ---
+ pretty_name: VR Ray Pointer Landing Pose Dataset
+ task_categories:
+ - time-series-forecasting
+ - other
+ language:
+ - en
+ tags:
+ - virtual-reality
+ - vr
+ - raycasting
+ - multimodal
+ - eye-tracking
+ - motion-capture
+ - time-series
+ - human-computer-interaction
+ size_categories:
+ - 1M<n<10M
+ configs:
+ - config_name: raw_archives
+   data_files:
+   - split: study1
+     path: Study1_Raw.zip
+   - split: study2
+     path: Study2_Raw.zip
+ license: other
+ ---
+
+ # VR Ray Pointer Landing Pose Dataset
+
+ This dataset accompanies the paper **"Predicting Ray Pointer Landing Poses in VR Using Multimodal LSTM-Based Neural Networks."** It contains the raw trajectory archives used for the paper's two user studies, plus the original data processing code used to prepare model inputs.
+
+ The data captures bare-hand raycasting selection behavior in VR with multimodal time-series signals from hand, head-mounted display (HMD), and gaze channels. The paper reports that the full dataset covers **72,096 trials** across two empirical studies:
+
+ - Study 1: 55,296 trials
+ - Study 2: 16,800 trials
+
+ ## Paper Summary
+
+ The paper studies target-agnostic prediction of the final ray landing pose during VR pointing and selection. The proposed model is an LSTM-based predictor trained on time-series features derived from three modalities:
+
+ - hand movement
+ - HMD movement
+ - eye gaze movement
+
+ According to the paper:
+
+ - Study 1 recruited **16 participants**
+ - Study 2 recruited **8 new participants**
+ - Data was recorded at **90 Hz**
+ - The hardware was a **Meta Quest Pro**
+ - The model achieved an average prediction error of **4.6 degrees at 50% movement progress**
+
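To make the input framing concrete, here is a minimal, hypothetical sketch of how three modalities could combine into one per-frame feature vector at 90 Hz. The channel counts below are illustrative assumptions, not the paper's exact feature set; the actual feature engineering lives in `data_processing_code/`.

```python
import numpy as np

FRAME_RATE_HZ = 90  # recording rate reported in the paper

# Illustrative channel counts: position (3) + forward vector (3) per modality.
channels = {"hand": 6, "hmd": 6, "gaze": 6}

rng = np.random.default_rng(0)
num_frames = FRAME_RATE_HZ  # one second of movement
# Stack the modalities side by side: one row per frame, one column per channel.
window = np.concatenate(
    [rng.normal(size=(num_frames, width)) for width in channels.values()], axis=1
)
print(window.shape)  # (90, 18): LSTM input of shape (timesteps, features)
```

An LSTM-based predictor would consume a batch of such `(timesteps, features)` windows and regress the final landing pose.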
+ ## Included Files
+
+ - `Study1_Raw.zip`
+   Raw CSV trajectories for Study 1.
+ - `Study2_Raw.zip`
+   Raw CSV trajectories for Study 2.
+ - `Dataprocessing_code.zip`
+   Original preprocessing scripts provided by the authors.
+ - `data_processing_code/`
+   Extracted copy of the preprocessing scripts for easier browsing on Hugging Face.
+
+ ## Data Format
+
+ Each raw archive contains per-participant CSV files with frame-level trajectories. Typical columns include:
+
+ - participant / block / trial identifiers
+ - error flag
+ - target geometry variables such as depth, theta, phi, width, and position
+ - task progress and distance traveled percentage
+ - timestamp
+ - HMD position and forward vector
+ - hand position and forward vector
+ - left-eye position and forward vector
+ - right-eye position and forward vector
+ - target location and target scale
+
+ The data is sampled over time during reciprocal pointing selections.
+
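The archives can be inspected without unpacking them, using only `zipfile` and `pandas`. The sketch below builds a tiny stand-in archive (the member name and columns are illustrative) so it is self-contained; the same two reading calls apply unchanged to `Study1_Raw.zip` and `Study2_Raw.zip`.

```python
import io
import zipfile

import pandas as pd

# Toy stand-in for one of the raw archives; member name and columns are
# illustrative only.
toy_csv = "BlockID,TrialID,Timestamp,HMDPositionX\n1,1,0.000,0.10\n1,1,0.011,0.11\n"
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("72_Trajectory.csv", toy_csv)

# The reading pattern itself, which works the same on the real archives:
with zipfile.ZipFile(buffer) as zf:
    csv_names = [name for name in zf.namelist() if name.endswith(".csv")]
    frames = {name: pd.read_csv(zf.open(name)) for name in csv_names}

print(csv_names)                          # ['72_Trajectory.csv']
print(frames["72_Trajectory.csv"].shape)  # (2, 4)
```

Because column sets differ between files (see the notes below), inspect `df.columns` per file rather than assuming one fixed schema.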
+ ## Study Design From The Paper
+
+ ### Study 1
+
+ The paper describes Study 1 as a within-subjects design over:
+
+ - target depth combinations: `De` and `Ds` in `{3m, 6m, 9m}`
+ - theta values: `10, 15, 20, 25, 50, 75` degrees
+ - phi values: `0` to `315` degrees in `45` degree steps
+ - target widths: `4.5` and `9` degrees
+
+ The paper reports:
+
+ - `55,296` total trials
+ - `16` participants
+ - reciprocal 3D pointing with no distractors
+
+ ### Study 2
+
+ The paper describes Study 2 as a validation study with:
+
+ - `8` new participants
+ - theta taking every integer value from `15` to `84` degrees
+ - `350` trial combinations
+ - `50` blocks
+ - `6` reciprocal selections per trial combination
+ - `2,100` trials per participant
+
+ The paper reports `16,800` total trials for Study 2.
+
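The reported counts are internally consistent, which is easy to verify:

```python
# Cross-check the trial counts reported in the paper.
study2_per_participant = 350 * 6            # trial combinations x reciprocal selections
study2_total = study2_per_participant * 8   # 8 participants
study1_per_participant = 55_296 // 16       # Study 1 total / 16 participants

print(study2_per_participant)  # 2100
print(study2_total)            # 16800
print(study1_per_participant)  # 3456
print(55_296 + 16_800)         # 72096 trials overall
```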
+ ## Important Notes About The Raw Archives
+
+ This repository preserves the raw files exactly as provided by the dataset owner. A few practical details matter when using the archives:
+
+ - `Study1_Raw.zip` currently contains **19 CSV files**
+ - `Study2_Raw.zip` currently contains **8 CSV files**
+ - the observed raw trial counts are **64,308** trials in `Study1_Raw.zip` and **16,800** trials in `Study2_Raw.zip`
+ - some Study 1 CSV files do **not** include a `ParticipantID` column in the header
+ - some Study 1 and Study 2 files share participant-like file IDs such as `72`
+ - raw archive contents therefore do not map one-to-one to the participant counts reported in the paper without additional curation context
+ - specifically, `Study1_Raw.zip` includes a `72_Trajectory.csv` file with **2,100** trials, which matches the Study 2 per-participant protocol rather than the Study 1 per-participant total of **3,456** trials reported in the paper
+
+ For reproducibility, this repository keeps the original archives unchanged. When reconstructing participant identity for Study 1, you may need to use the filename as the participant identifier when `ParticipantID` is absent from the CSV header.
+
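A minimal sketch of that fallback, assuming the `<number>_Trajectory.csv` naming pattern holds for the files you are reconstructing (verify this against the actual archive listing):

```python
import io
import re

import pandas as pd

def participant_id_from_name(csv_name: str) -> int:
    """Derive a participant ID from a filename such as '72_Trajectory.csv'."""
    match = re.match(r"(\d+)_", csv_name)
    if match is None:
        raise ValueError(f"No numeric prefix in {csv_name!r}")
    return int(match.group(1))

# Toy CSV standing in for a Study 1 file whose header lacks ParticipantID:
df = pd.read_csv(io.StringIO("BlockID,TrialID\n1,1\n1,2\n"))
if "ParticipantID" not in df.columns:
    df.insert(0, "ParticipantID", participant_id_from_name("72_Trajectory.csv"))

print(df["ParticipantID"].unique())  # [72]
```

Keep in mind the overlap caveat above: a filename ID such as `72` appears in both archives, so filename-derived IDs identify files, not necessarily unique people across studies.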
+ ## Recommended Usage
+
+ - Use `Study1_Raw.zip` and `Study2_Raw.zip` as the authoritative raw data sources.
+ - Use the scripts in `data_processing_code/` to reproduce feature engineering and preprocessing.
+ - If you build a Hugging Face `datasets` loader on top of this repository, treat the raw zip files as the source of truth rather than assuming fully standardized CSV schemas.
+
+ ## Citation
+
+ If you use this dataset, please cite the paper:
+
+ ```bibtex
+ @inproceedings{xu2025predictingray,
+   title={Predicting Ray Pointer Landing Poses in VR Using Multimodal LSTM-Based Neural Networks},
+   author={Xu, Wenxuan and Wei, Yushi and Hu, Xuning and Stuerzlinger, Wolfgang and Wang, Yuntao and Liang, Hai-Ning},
+   booktitle={IEEE Conference on Virtual Reality and 3D User Interfaces},
+   year={2025}
+ }
+ ```
+
+ ## Acknowledgements
+
+ This dataset was collected for the paper above and uploaded to Hugging Face by the dataset owner.
Study1_Raw.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c4dad3bcab68216faa886cd7d2eb32c2f39dfcf189aae38191aa7ab8558beac
+ size 606994834
Study2_Raw.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:549a942f9c5c72c92a54266269e0b796d5f9c3303c7403f47077340c4c5f0547
+ size 186961994
data_processing_code/Augumentation.py ADDED
@@ -0,0 +1,95 @@
+ import pandas as pd
+ import numpy as np
+ from concurrent.futures import ThreadPoolExecutor
+
+ max_timesteps = 299
+ feature_num = 16
+ label_nums = 9
+
+ def generate_partial_sequences(row, max_timesteps=max_timesteps, features_per_timestep=feature_num, fill_value=-10):
+     # Determine the actual sequence length (exclude the trailing label columns)
+     actual_length_indices = np.where(row[:-label_nums] != fill_value)[0]
+     if len(actual_length_indices) > 0:
+         actual_length = (actual_length_indices[-1] // features_per_timestep) + 1
+     else:
+         actual_length = 0
+     partial_sequences = []
+     # step_size = max(1, int(actual_length * 0.1))  # step of 10% of the actual length, at least 1
+     step_size = 5
+
+     for end_length in range(step_size, actual_length + step_size, step_size):
+         # Clamp the end point so it does not exceed the actual length
+         end_length = min(end_length, actual_length)
+         partial_sequence_list = row[:end_length * features_per_timestep].tolist()
+
+         selected_features_list = [partial_sequence_list[i:i + 10] for i in
+                                   range(0, len(partial_sequence_list), features_per_timestep)]
+         selected_features_list = [item for sublist in selected_features_list for item in sublist]
+
+         ProgressOfTask = end_length / actual_length
+
+         hand_rotation_axis = partial_sequence_list[-6:-3]
+         hand_direction = partial_sequence_list[-3:]
+
+         padding_length = (max_timesteps - end_length) * (features_per_timestep - 6)  # compute the padding length
+         selected_features_list.extend([fill_value] * padding_length)  # pad the sequence
+
+         selected_features_list.extend(row[-9:])  # append the labels
+         selected_features_list.extend(hand_rotation_axis)  # append HandRotationAxis and HandDirection
+         selected_features_list.extend(hand_direction)
+         selected_features_list.append(ProgressOfTask)  # append ProgressOfTask
+
+         partial_sequences.append(selected_features_list)
+     return partial_sequences
+
+
+ def process_row(index, df):
+     """Wrapper function to handle the DataFrame row."""
+     row = df.iloc[index]
+     return generate_partial_sequences(row)
+
+
+ def main(df, num_threads=20):
+     """Process the DataFrame using multiple threads."""
+     with ThreadPoolExecutor(max_workers=num_threads) as executor:
+         # Submit one future per row, applying process_row in parallel
+         futures = [executor.submit(process_row, index, df) for index in range(len(df))]
+         # Collect the completed future results in submission order
+         results = []
+         for future in futures:
+             results.extend(future.result())
+         # Convert the results to a DataFrame
+
+         columns = df.columns
+         # Keep columns that do not start with 'HandRotationAxis' or 'HandDirection'
+         columns_to_keep = [column for column in columns if
+                            not column.startswith('HandRotationAxis') and not column.startswith('HandDirection')]
+
+         # Now append the specific rotation-axis and direction columns at the end,
+         # in this fixed order
+         columns_to_keep.extend([
+             'HandRotationAxis_X', 'HandRotationAxis_Y', 'HandRotationAxis_Z',
+             'HandDirection_X', 'HandDirection_Y', 'HandDirection_Z',
+             'ProgressOfTask'
+         ])
+         # print(len(columns_to_keep))
+         partial_sequences_df = pd.DataFrame(results, columns=columns_to_keep)
+         return partial_sequences_df
+
+ if __name__ == '__main__':
+     # Load the datasets
+     for i in range(79, 80):
+         # if i == 3 or i == 6 or i == 15 or i == 19 or i == 22:
+         #     continue
+         file_path = f'../Data/Study2Evaluation/Supervised/{i}_train_data_preprocessed_evaluation.csv'
+         df = pd.read_csv(file_path)
+         partial_sequences_df = main(df)
+         save_path_csv = f'../Data/Study2Evaluation/Dataset/{i}_traindataset.csv'
+         partial_sequences_df.to_csv(save_path_csv, index=False)
+
+         file_path = f'../Data/Study2Evaluation/Supervised/{i}_test_data_preprocessed_evaluation.csv'
+         df = pd.read_csv(file_path)
+         partial_sequences_df = main(df)
+         save_path_csv = f'../Data/Study2Evaluation/Dataset/{i}_testdataset.csv'
+         partial_sequences_df.to_csv(save_path_csv, index=False)
data_processing_code/DA.py ADDED
@@ -0,0 +1,214 @@
+ #%%
+ import pandas as pd
+
+ data = pd.read_csv("D:\\NN\\Data\\Study1AllUsers\\Cleaned_TrialResultsFull.csv")
+
+ # Group and compute the means
+ grouped_data = data.groupby(['ParticipantID']).agg({
+     'AngularDistanceHMD': 'mean',
+     'AngularDistanceHand': 'mean',
+     'AngularDistanceLeye': 'mean'
+ }).reset_index()
+
+ print(grouped_data)
+
+ output_path = 'D:\\NN\\Data\\Study1AllUsers\\ModalityAnalyse.csv'  # replace with your output file path
+ grouped_data.to_csv(output_path)
+ # # Create a unique condition column combining Depth, Theta, Width, and Position into one identifier
+ # grouped_data['Condition'] = grouped_data['Depth'].astype(str) + '_' + grouped_data['Theta'].astype(str) + '_' + grouped_data['Width'].astype(str) + '_' + grouped_data['Position'].astype(str)
+ # # Convert the data to wide format
+ # wide_data = grouped_data.pivot_table(index='ParticipantID',
+ #                                      columns='Condition',
+ #                                      values=['MovementTime', 'AngularDistanceHMD', 'AngularDistanceHand', 'AngularDistanceLeye'])
+ #
+ # # Rename the columns for better compatibility
+ # wide_data.columns = ['_'.join(col).strip() for col in wide_data.columns.values]
+ #
+ # # Inspect the transformed data
+ # print(wide_data.head())
+ #
+ # # Save as a CSV file for import into SPSS
+ # output_path = 'path_to_your_output_file.csv'  # replace with your output file path
+ # wide_data.to_csv(output_path)
+
+ #%%
+ import pandas as pd
+ import numpy as np
+ from statsmodels.stats.correlation_tools import cov_nearest
+ from scipy.stats import chi2
+
+
+ # Load your data
+ data = pd.read_csv("D:\\NN\\Data\\Study1AllUsers\\Cleaned_TrialResultsFull.csv")
+ # data['Depth'] = data['Depth'].astype(str)
+ # data['Theta'] = data['Theta'].astype(str)
+ # data['Width'] = data['Width'].astype(str)
+ # data['Position'] = data['Position'].astype(str)
+ # columns = ['ParticipantID', 'BlockID', 'TrialID', 'MovementTime', 'Depth', 'Theta', 'Width', 'Position']
+ columns = ['ParticipantID', 'BlockID', 'TrialID', 'MovementTime', 'AngularDistanceHMD', 'AngularDistanceHand', 'AngularDistanceLeye', 'Depth', 'Theta', 'Width', 'Position']
+ # Aggregating data for each user under each condition
+ data = data[columns]
+
+ grouped_data = data.groupby(['ParticipantID', 'Depth', 'Theta', 'Width', 'Position']).agg({
+     'AngularDistanceLeye': 'mean'
+ }).reset_index()
+
+ # Create a unique condition column combining Depth, Theta, Width, and Position into one identifier
+ grouped_data['Condition'] = grouped_data['Depth'].astype(str) + '_' + grouped_data['Theta'].astype(str) + '_' + grouped_data['Width'].astype(str) + '_' + grouped_data['Position'].astype(str)
+
+ # Convert the data to wide format
+ wide_data = grouped_data.pivot_table(index='ParticipantID',
+                                      columns='Condition',
+                                      values=['AngularDistanceLeye'])
+ # Rename the columns for better compatibility
+ wide_data.columns = ['_'.join(col).strip() for col in wide_data.columns.values]
+
+ # Inspect the transformed data
+ print(wide_data.head())
+ output_path = 'D:\\NN\\Data\\Study1AllUsers\\EyeDistance.csv'  # replace with your output file path
+ wide_data.to_csv(output_path)
+
+
+ # output_path = 'D:\\NN\\Data\\Study1AllUsers\\Cleaned_TrialResultsFull1.csv'  # replace with your output file path
+ # grouped_data.to_csv(output_path, index=False)
+ # grouped_data = data.groupby(['ParticipantID', 'Depth', 'Theta', 'Width', 'Position']).agg({
+ #     'MovementTime': 'mean',
+ #     'AngularDistanceHMD': 'mean',
+ #     'AngularDistanceHand': 'mean',
+ #     'AngularDistanceLeye': 'mean'
+ # }).reset_index()
+
+
+ # #%%
+ # import pandas as pd
+ # import numpy as np
+ # from sklearn.linear_model import LinearRegression
+ # import matplotlib.pyplot as plt
+ #
+ # # Correcting the data based on the user's indication
+ # data_corrected = {
+ #     "Theta": [10, 10, 15, 15, 20, 20, 25, 25, 50, 50, 75, 75],
+ #     "Width": [4.5, 9.0, 4.5, 9.0, 4.5, 9.0, 4.5, 9.0, 4.5, 9.0, 4.5, 9.0],
+ #     "Mean": [704.222126, 508.689598, 797.962563, 560.906088, 904.062486, 646.458888,
+ #              1183.485047, 796.196496, 1464.353523, 1034.035743, 1728.876132, 1266.901965]
+ # }
+ #
+ #
+ # model = LinearRegression()
+ #
+ # df_corrected = pd.DataFrame(data_corrected)
+ #
+ # # Compute the index of difficulty again
+ # df_corrected['ID'] = np.log2(df_corrected['Theta'] / df_corrected['Width'] + 1)
+ #
+ # # Re-run linear regression
+ # model.fit(df_corrected[['ID']], df_corrected['Mean'])
+ #
+ # # Predict values using the fitted model
+ # df_corrected['Predicted'] = model.predict(df_corrected[['ID']])
+ # a_corrected = model.intercept_
+ # b_corrected = model.coef_[0]
+ #
+ # # Recalculate R-squared value
+ # r_squared_corrected = model.score(df_corrected[['ID']], df_corrected['Mean'])
+ #
+ # # Plotting the corrected data
+ # plt.figure(figsize=(16, 9))
+ # plt.scatter(df_corrected['ID'], df_corrected['Mean'], color='blue', label='Observed Data')
+ # plt.plot(df_corrected['ID'], df_corrected['Predicted'], color='darkblue', linestyle='dashed', label='Fitted Line')
+ #
+ # plt.xlabel('Index of Difficulty (bits)')
+ # plt.ylabel('Movement Time (ms)')
+ # plt.grid(True)
+ # plt.legend()
+ # plt.text(3.5, 1100, f'R² = {r_squared_corrected:.4f}', fontsize=12)
+ #
+ # plt.show(), (a_corrected, b_corrected, r_squared_corrected)
+
+ #%%
+ import pandas as pd
+ from statsmodels.stats.anova import AnovaRM
+ import statsmodels.api as sm
+ # Load the data
+ data = pd.read_csv("D:\\NN\\Data\\Study1AllUsers\\Cleaned_TrialResultsFull.csv")
+ # Make sure the categorical variables are stored as strings
+ data['Depth'] = data['Depth'].astype(str)
+ data['Theta'] = data['Theta'].astype(str)
+ data['Width'] = data['Width'].astype(str)
+
+ # Filter out the data of specific participants
+ filtered_data = data[~data['ParticipantID'].isin([3, 6, 15, 19, 18, 20, 22])]
+ # Re-aggregate the data
+ filtered_aggregated_data = filtered_data.groupby(['ParticipantID', 'Depth', 'Theta', 'Width']).mean().reset_index()
+ print(filtered_aggregated_data)
+
+ # Run the repeated-measures ANOVA (note: AnovaRM does not itself apply a Greenhouse-Geisser correction)
+ rm_anova_results = AnovaRM(filtered_aggregated_data, 'MovementTime', 'ParticipantID',
+                            within=['Depth', 'Theta', 'Width']).fit()  # .fit() is required before summary()
+
+ # Print the ANOVA summary
+ print(rm_anova_results.summary())
+
+ # Compute eta squared
+ anova_table = rm_anova_results.anova_table
+ anova_table['eta_squared'] = (anova_table['F Value'] * anova_table['Num DF']) / \
+                              (anova_table['F Value'] * anova_table['Num DF'] + anova_table['Den DF'])
+
+ # Print the results table with eta squared
+ print(anova_table[['F Value', 'Pr > F', 'eta_squared']])
+
+ #%%
+ from statsmodels.stats.multicomp import pairwise_tukeyhsd
+
+ # Prepare the data for Tukey HSD tests
+ tukey_data = filtered_aggregated_data[['Theta', 'Width', 'MovementTime']]
+
+ # Perform Tukey HSD test for Theta
+ tukey_result_theta = pairwise_tukeyhsd(endog=tukey_data['MovementTime'], groups=tukey_data['Theta'], alpha=0.05)
+ # Perform Tukey HSD test for Width
+ tukey_result_width = pairwise_tukeyhsd(endog=tukey_data['MovementTime'], groups=tukey_data['Width'], alpha=0.05)
+
+ tukey_result_theta.summary(), tukey_result_width.summary()
+
+ #%%
+ print(tukey_result_theta.summary())
+ print(tukey_result_width.summary())
+ #%%
+ for width_level in filtered_aggregated_data['Width'].unique():
+     subset = filtered_aggregated_data[filtered_aggregated_data['Width'] == width_level]
+     print(f'Tukey HSD for Width {width_level}:')
+     print(pairwise_tukeyhsd(subset['MovementTime'], subset['Theta'], alpha=0.05).summary())
+
+ # Iterate over each Theta level
+ for theta_level in filtered_aggregated_data['Theta'].unique():
+     subset = filtered_aggregated_data[filtered_aggregated_data['Theta'] == theta_level]
+     print(f'Tukey HSD for Theta {theta_level}:')
+     print(pairwise_tukeyhsd(subset['MovementTime'], subset['Width'], alpha=0.05).summary())
+
+
+ #%%
+ import pandas as pd
+ from statsmodels.stats.anova import AnovaRM
+ # Load the data
+ data = pd.read_csv("D:\\NN\\Data\\Study1AllUsers\\TrialResultsFull.csv")
+ # Select the columns to analyze and make sure the categorical variables are strings
+ data['Depth'] = data['Depth'].astype(str)
+ data['Theta'] = data['Theta'].astype(str)
+ data['Width'] = data['Width'].astype(str)
+ # Filter out the data of specific participants
+ filtered_data = data[~data['ParticipantID'].isin([3, 6, 15, 19, 18, 20, 22])]
+ filtered_aggregated_data = filtered_data.groupby(['ParticipantID', 'Depth', 'Theta', 'Width', 'Position']).mean().reset_index()
+ print(filtered_aggregated_data)
+ # Run the repeated-measures ANOVA
+ rm_anova_results = AnovaRM(filtered_aggregated_data, 'AngularDistanceHand', 'ParticipantID', within=['Depth', 'Theta', 'Width', 'Position']).fit()
+ print(rm_anova_results.summary())
+ anova_table = rm_anova_results.anova_table
+ anova_table['eta_squared'] = (anova_table['F Value'] * anova_table['Num DF']) / \
+                              (anova_table['F Value'] * anova_table['Num DF'] + anova_table['Den DF'])
+ print(anova_table[['F Value', 'Pr > F', 'eta_squared']])
data_processing_code/DA2.py ADDED
@@ -0,0 +1,85 @@
+ #%%
+ import numpy as np
+ import pandas as pd
+
+
+ # Function to calculate the angle between two vectors
+ def angle_between_vectors(v1, v2):
+     unit_v1 = v1 / np.linalg.norm(v1)
+     unit_v2 = v2 / np.linalg.norm(v2)
+     dot_product = np.dot(unit_v1, unit_v2)
+     angle = np.arccos(np.clip(dot_product, -1.0, 1.0))
+     return angle
+
+
+ # Function to process each group and add new columns
+ def process_and_add_columns(group):
+     if group['isError'].iloc[0]:  # Skip groups where isError is True
+         group['HMD'] = np.nan  # Assign NaN for HMD vectors
+         group['EYE'] = np.nan  # Assign NaN for Leye vectors
+         return group
+
+     # Extract HMD vectors
+     hmd_vectors = group[['HMDForwardVX', 'HMDForwardVY', 'HMDForwardVZ']].to_numpy()
+     start_hmd_vector = hmd_vectors[0]
+     end_hmd_vector = hmd_vectors[-1]
+
+     # Calculate angle A for HMD vectors
+     hmd_angle_A = angle_between_vectors(start_hmd_vector, end_hmd_vector)
+
+     # Calculate angle B for each HMD vector and then B/A
+     group['HMD'] = [angle_between_vectors(start_hmd_vector, vec) / hmd_angle_A if hmd_angle_A != 0 else 0 for vec in hmd_vectors]
+
+     # Extract Leye vectors
+     leye_vectors = group[['LeyeForwardVX', 'LeyeForwardVY', 'LeyeForwardVZ']].to_numpy()
+     start_leye_vector = leye_vectors[0]
+     end_leye_vector = leye_vectors[-1]
+
+     # Calculate angle A for Leye vectors
+     leye_angle_A = angle_between_vectors(start_leye_vector, end_leye_vector)
+
+     # Calculate angle B for each Leye vector and then B/A
+     group['EYE'] = [angle_between_vectors(start_leye_vector, vec) / leye_angle_A if leye_angle_A != 0 else 0 for vec in leye_vectors]
+
+     return group
+
+
+ # Load your data
+ data = pd.read_csv('../Data/1_Trajectory.csv')
+ # Group by 'BlockID' and 'TrialID', then apply the function
+ processed_data = data.groupby(['BlockID', 'TrialID']).apply(process_and_add_columns).reset_index(drop=True)
+ # Now you can use processed_data as needed
+ print(processed_data.head())
+
+
+ #%%
+ import matplotlib.pyplot as plt
+ from scipy.ndimage import gaussian_filter1d
+ from scipy.interpolate import interp1d
+
+ filtered_data = processed_data[~processed_data['isError'] & processed_data['HMD'].notna() & processed_data['EYE'].notna() & processed_data['DistanceTraveledPercentage'].notna()]
+ filtered_data = filtered_data[filtered_data['ProgressofTask'] % 1 == 0]
+ # Group by ProgressofTask and calculate the mean for HMD and EYE
+ average_data = filtered_data.groupby('ProgressofTask')[['DistanceTraveledPercentage', 'HMD', 'EYE']].mean()
+
+ task_points = np.linspace(average_data.index.min(), average_data.index.max(), 500)  # create 500 evenly spaced points
+ interp_HMD = interp1d(average_data.index, average_data['HMD'], kind='cubic', fill_value='extrapolate')
+ interp_EYE = interp1d(average_data.index, average_data['EYE'], kind='cubic', fill_value='extrapolate')
+ interp_HAND = interp1d(average_data.index, average_data['DistanceTraveledPercentage'] / 100, kind='cubic', fill_value='extrapolate')
+
+ smoothed_HMD = gaussian_filter1d(interp_HMD(task_points), sigma=5)
+ smoothed_EYE = gaussian_filter1d(interp_EYE(task_points), sigma=7)
+ smoothed_HAND = gaussian_filter1d(interp_HAND(task_points), sigma=5)
+
+ # Plotting the curves
+ plt.figure(figsize=(16, 9))
+ plt.plot(task_points, smoothed_HAND, label='Hand')
+ plt.plot(task_points, smoothed_HMD, label='HMD')
+ plt.plot(task_points, smoothed_EYE, label='EYE')
+
+ plt.xlabel('Progress Of Task (%)')
+ plt.ylabel('Current Angular Movement / Final Angular Movement')
+ plt.legend()
+ plt.grid(True)
+ plt.show()
data_processing_code/WIdeFormat.py ADDED
@@ -0,0 +1,68 @@
+ import pandas as pd
+ import numpy as np
+
+ # Load the data
+ def load_data(file_path):
+     return pd.read_csv(file_path)
+
+ # Data preprocessing: feature selection, post-padding, and flattening
+ def preprocess_data(data, features, label):
+     # Group the data by ParticipantID, BlockID, and TrialID
+     grouped = data.groupby(['ParticipantID', 'BlockID', 'TrialID'])
+     # Extract features and labels, applying post-padding
+     sequences = []
+     labels = []
+     for _, group in grouped:
+         sequence = group[features].values
+         sequence_label = group[label].values[0]  # assume the label is the same for every row of a sequence
+         sequences.append(sequence)
+         labels.append(sequence_label)
+     # Maximum sequence length
+     max_len = 299
+     # Post-padding
+     sequences_padded = [np.pad(seq, ((0, max_len - len(seq)), (0, 0)), 'constant', constant_values=(-10, -10)) for seq
+                         in sequences]
+
+     # Flatten each sequence
+     flattened_sequences = np.array([seq.flatten() for seq in sequences_padded])
+
+     return flattened_sequences, labels, max_len
+
+ # Save the transformed data
+ def save_transformed_data(flattened_sequences, labels_sequence, max_len, features, labels, output_file_path):
+     # Build the column names
+     column_names = [f"{feature}_t{time_step}" for time_step in range(1, max_len + 1) for feature in features]
+     # Build the DataFrame
+     flattened_df = pd.DataFrame(flattened_sequences, columns=column_names)
+     label_column_names = [f"{label}" for label in labels]
+     flattened_df[label_column_names] = labels_sequence
+     # Save to CSV
+     flattened_df.to_csv(output_file_path, index=False)
+
+ # Main entry point
+ if __name__ == "__main__":
+     # Define the file paths and features
+     for i in range(79, 80):
+         # if i == 3 or i == 6 or i == 15 or i == 19 or i == 22:
+         #     continue
+         file_path = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_train_data_preprocessed_evaluation.csv'  # update to the actual file path
+         output_file_path = f'../Data/Study2Evaluation/Supervised/{i}_train_data_preprocessed_evaluation.csv'
+
+         features = ["HMDA", "HMDAV", "HandA", "HandAV", "LeyeA", "LeyeAV", 'HMDL', "HMDLV", "HandL", "HandLV",
+                     'HandRotationAxis_X', 'HandRotationAxis_Y', 'HandRotationAxis_Z', 'HandDirection_X',
+                     'HandDirection_Y', 'HandDirection_Z']
+         labels = ['ParticipantID', 'BlockID', 'TrialID', 'TargetLocationX', 'TargetLocationY', 'TargetLocationZ',
+                   'TargetScale', 'LLabel', 'ALabel']
+         # Load and process the data
+         data = load_data(file_path)
+         flattened_sequences, labels_sequence, max_len = preprocess_data(data, features, labels)
+         # Save the transformed data
+         save_transformed_data(flattened_sequences, labels_sequence, max_len, features, labels, output_file_path)
+         print(f"Data transformed and saved to {output_file_path}")
+         file_path = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_test_data_preprocessed_evaluation.csv'  # update to the actual file path
+         output_file_path = f'../Data/Study2Evaluation/Supervised/{i}_test_data_preprocessed_evaluation.csv'
+         data = load_data(file_path)
+         flattened_sequences, labels_sequence, max_len = preprocess_data(data, features, labels)
+         save_transformed_data(flattened_sequences, labels_sequence, max_len, features, labels, output_file_path)
+         print(f"Data transformed and saved to {output_file_path}")
data_processing_code/concact数据.py ADDED
@@ -0,0 +1,35 @@
+ import pandas as pd
+
+ # Define the user IDs to combine
+ user_ids = {75, 79}
+
+ # Define the dataset path templates
+ train_dataset_path_template = '../Data/Study2Evaluation/preprocessed/cleaned/{user_id}_train_data_preprocessed_evaluation.csv'
+ test_dataset_path_template = '../Data/Study2Evaluation/preprocessed/cleaned/{user_id}_test_data_preprocessed_evaluation.csv'
+
+ # Initialize an empty list to store DataFrames
+ dataframes = []
+
+ # Loop through each user ID
+ for user_id in user_ids:
+     # Construct the file paths
+     train_file_path = train_dataset_path_template.format(user_id=user_id)
+     test_file_path = test_dataset_path_template.format(user_id=user_id)
+
+     # Read the train and test datasets
+     try:
+         train_df = pd.read_csv(train_file_path)
+         test_df = pd.read_csv(test_file_path)
+         dataframes.append(train_df)
+         dataframes.append(test_df)
+     except FileNotFoundError as e:
+         print(f"File not found for user {user_id}: {e}")
+
+ # Concatenate all DataFrames into one large DataFrame
+ combined_train_df = pd.concat(dataframes, ignore_index=True)
+ # Optionally, save the combined DataFrame to a new CSV file
+ combined_train_df.to_csv('../Data/Study2Evaluation/Dataset/combined_preprocessed_evaluation.csv', index=False)
+
+ print("Combined train and test datasets created successfully.")
data_processing_code/para.py ADDED
@@ -0,0 +1,81 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import pandas as pd
2
+ import numpy as np
3
+ #
4
+ #
5
+ #
6
+ # def load_data(file_path):
7
+ # return pd.read_csv(file_path)
8
+ #
9
+ #
10
+ # def preprocess_data(data):
+ #     # Group by the specified columns
+ #     grouped = data.groupby(['ParticipantID', 'BlockID', 'TrialID'])
+ #
+ #     # Compute the mean and std of the group sizes, then the size upper limit
+ #     group_sizes = grouped.size()
+ #     upper_limit = group_sizes.mean() + 1 * group_sizes.std()
+ #
+ #     # Use filter() to keep only groups smaller than the upper limit
+ #     filtered_data = grouped.filter(lambda x: len(x) < upper_limit)
+ #     return filtered_data
+ #
+ #
+ # def save_data(data, path):
+ #     # Make sure the target folder exists
+ #     os.makedirs(os.path.dirname(path), exist_ok=True)
+ #     # Save the data
+ #     data.to_csv(path, index=False)
+ #
+ #
+ # # Main entry point
+ # if __name__ == "__main__":
+ #     for i in range(79, 80):
+ #         output_train_path = f'../Data/Study2Evaluation/Preprocessed/{i}_train_data_preprocessed_evaluation.csv'
+ #         output_test_path = f'../Data/Study2Evaluation/Preprocessed/{i}_test_data_preprocessed_evaluation.csv'
+ #
+ #         data1 = load_data(output_train_path)
+ #         data2 = load_data(output_test_path)
+ #
+ #         cleaned_data1 = preprocess_data(data1)
+ #         cleaned_data2 = preprocess_data(data2)
+ #
+ #         # Define the save paths
+ #         save_path_train = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_train_data_preprocessed_evaluation.csv'
+ #         save_path_test = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_test_data_preprocessed_evaluation.csv'
+ #
+ #         # Save the cleaned data
+ #         save_data(cleaned_data1, save_path_train)
+ #         save_data(cleaned_data2, save_path_test)
+ #
+ #         print(f"Cleaned data saved for {i} train and test.")
+
+ def load_data(file_path):
+     return pd.read_csv(file_path)
+
+
+ # Find the length (row count) of the longest (ParticipantID, BlockID, TrialID) sequence
+ def preprocess_data(data):
+     grouped = data.groupby(['ParticipantID', 'BlockID', 'TrialID'])
+     group_sizes = grouped.size()
+     max_group_index = group_sizes.idxmax()
+     max_group_rows = grouped.get_group(max_group_index).shape[0]
+     return max_group_rows
+
+
+ if __name__ == "__main__":
+     max_rows = 0
+     for i in range(79, 80):
+         # if i == 3 or i == 6 or i == 15 or i == 19 or i == 22:
+         #     continue
+         output_train_path = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_train_data_preprocessed_evaluation.csv'
+         output_test_path = f'../Data/Study2Evaluation/Preprocessed/cleaned/{i}_test_data_preprocessed_evaluation.csv'
+         data1 = load_data(output_train_path)
+         data2 = load_data(output_test_path)
+         max_group_rows1 = preprocess_data(data1)
+         max_group_rows2 = preprocess_data(data2)
+         # Keep a running maximum across all processed files
+         max_rows = max(max_rows, max_group_rows1, max_group_rows2)
+     print(max_rows)
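The commented-out filter above drops trials whose frame count exceeds the mean group size plus one standard deviation. A minimal runnable sketch of that rule on synthetic data (column names follow the CSV schema; the values are invented for illustration):

```python
import pandas as pd

# Three trials: two of 2 frames, one of 6 frames (an unusually long trial)
data = pd.DataFrame({
    'ParticipantID': [1] * 10,
    'BlockID':       [0] * 10,
    'TrialID':       [0] * 2 + [1] * 2 + [2] * 6,
    'TimeStamp':     range(10),
})

grouped = data.groupby(['ParticipantID', 'BlockID', 'TrialID'])
sizes = grouped.size()                       # 2, 2, 6
upper = sizes.mean() + 1 * sizes.std()       # ~5.64 with pandas' default ddof=1
filtered = grouped.filter(lambda g: len(g) < upper)
print(filtered['TrialID'].unique().tolist())  # the 6-frame trial is dropped
```

`GroupBy.filter` returns the surviving rows with their original index, so the result can be written straight back to CSV, as the script does.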
data_processing_code/preprocess.py ADDED
@@ -0,0 +1,252 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import pandas as pd
2
+ import argparse
3
+ import os
4
+ import numpy as np
5
+ from scipy.ndimage import gaussian_filter1d
6
+ from sklearn.preprocessing import MinMaxScaler
7
+
8
+
9
+
10
+ #这个文件是用来在原始文件的基础上删除一些Trial,进行特征工程,然后再打上标签
11
+ def angle_between_vectors(v1, v2):
12
+ """Calculate the angle between two vectors."""
13
+ v1 = v1 / np.linalg.norm(v1)
14
+ v2 = v2 / np.linalg.norm(v2)
15
+ u_minus_v = v1 - v2
16
+ u_plus_v = v1 + v2
17
+ angle = 2 * np.arctan2(np.linalg.norm(u_minus_v), np.linalg.norm(u_plus_v))
18
+ return np.degrees(angle)
19
+ def smooth_velocity(velocity, sigma=5):
20
+ return gaussian_filter1d(velocity, sigma=sigma)
21
+ def calculate_angular_velocity(forward_vectors, timestamps):
22
+ """计算角速度"""
23
+ angles = np.array(
24
+ [angle_between_vectors(forward_vectors[i], forward_vectors[i + 1]) if i + 1 < len(forward_vectors) else 0 for i
25
+ in range(len(forward_vectors) - 1)]) # 注意这里的变化,排除了最后一个向量的计算
26
+ # 计算时间间隔
27
+ times = np.diff(timestamps)/1000
28
+ # 计算角速度,对于第一个时间点,将角度设置为0
29
+ if len(times) > 0: # 检查times是否非空,避免除以空数组
30
+ angular_velocities = np.insert(angles / times, 0, 0) # 在结果数组的开始插入0
31
+ else:
32
+ angular_velocities = np.array([0]) # 如果times为空,说明只有一个时间点,无法计算速度
33
+ return angular_velocities
34
+ def calculate_linear_velocity(positions, timestamps):
35
+ """计算线性速度"""
36
+ distances = np.linalg.norm(np.diff(positions, axis=0), axis=1)
37
+ times = np.diff(timestamps)/1000
38
+ linear_velocities = np.insert(distances / times, 0,0) # Insert 0 at the start as there's no velocity at the first timestamp
39
+ return linear_velocities
40
+ def calculate_direction(start_position, current_position):
41
+ """计算方向"""
42
+ direction = current_position - start_position
43
+ norm = np.linalg.norm(direction)
44
+ return direction / norm if norm != 0 else np.zeros_like(direction)
45
+ def calculate_acceleration(velocities, timestamps):
46
+ """根据速度计算加速度"""
47
+ accels = [0] # 第一个时间点的加速度假设为0
48
+ for i in range(1, len(velocities)):
49
+ accel = (velocities[i] - velocities[i-1]) / ((timestamps[i] - timestamps[i-1])/1000)
50
+ accels.append(accel)
51
+ return np.array(accels)
52
+
53
+ ## 特征工程,先加上三个模态的角速度,和旋转角,再加上手的线性速度和移动距离
54
+ def process_single_sequence(df):
55
+ """处理groupBy之后的单个序列DataFrame,计算所有模态的角速度、线性速度、方向和旋转轴"""
56
+ # 提取时间戳
57
+ timestamps = df['TimeStamp'].values
58
+ # 定义所有需要计算的模态
59
+ modalities = ['HMD', 'Hand', 'Leye']
60
+ for modality in modalities:
61
+ # 计算角速度和旋转轴
62
+ if all(f'{modality}ForwardV{i}' in df.columns for i in ['X', 'Y', 'Z']):
63
+ forward_vectors = df[[f'{modality}ForwardVX', f'{modality}ForwardVY', f'{modality}ForwardVZ']].values
64
+ initial_forward_vector = forward_vectors[0]
65
+ df[f'{modality}A'] = [(angle_between_vectors(initial_forward_vector, fv)) for fv in forward_vectors]
66
+ angular_velocity = calculate_angular_velocity(forward_vectors, timestamps)
67
+ smoothed_angular_velocity = smooth_velocity(angular_velocity)
68
+ df[f'{modality}AV'] = smoothed_angular_velocity
69
+ df[f'{modality}AAcc'] = calculate_acceleration(smoothed_angular_velocity, timestamps)
70
+ # 计算旋转轴并进行标准化以仅保留方向信息
71
+ if modality == 'Hand':
72
+ initial_forward_vector = forward_vectors[0]
73
+ rotation_axes = np.array([np.cross(initial_forward_vector, fv) for fv in forward_vectors])
74
+ rotation_axes_normalized = np.array(
75
+ [axis / np.linalg.norm(axis) if np.linalg.norm(axis) != 0 else np.array([0.0, 0.0, 0.0]) for axis in
76
+ rotation_axes]
77
+ )
78
+ df[f'{modality}RotationAxis_X'] = rotation_axes_normalized[:, 0]
79
+ df[f'{modality}RotationAxis_Y'] = rotation_axes_normalized[:, 1]
80
+ df[f'{modality}RotationAxis_Z'] = rotation_axes_normalized[:, 2]
81
+
82
+ # 对于HMD和Hand,还需计算线性速度和方向还有移动距离
83
+ if modality in ['HMD', 'Hand'] and all(f'{modality}Position{i}' in df.columns for i in ['X', 'Y', 'Z']):
84
+ positions = df[[f'{modality}PositionX', f'{modality}PositionY', f'{modality}PositionZ']].values
85
+ initial_position = positions[0]
86
+ linear_velocity = calculate_linear_velocity(positions, timestamps)
87
+ df[f'{modality}L'] = [np.linalg.norm(pos-initial_position) for pos in positions]
88
+ smoothed_velocity = smooth_velocity(linear_velocity)
89
+ df[f'{modality}LV'] = smoothed_velocity
90
+ df[f'{modality}LAcc'] = calculate_acceleration(smoothed_velocity, timestamps) # 新特性:加速度
91
+ # 计算方向
92
+ if modality == 'Hand':
93
+ start_position = positions[0]
94
+ directions = np.array([calculate_direction(start_position, pos) for pos in positions])
95
+ df[f'{modality}Direction_X'] = directions[:, 0]
96
+ df[f'{modality}Direction_Y'] = directions[:, 1]
97
+ df[f'{modality}Direction_Z'] = directions[:, 2]
98
+
99
+ return df
100
+
101
+
102
+ def label_trials_with_motion_metrics(df):
103
+ # Define a helper function for labeling each group
104
+ def label_group(group):
105
+ # For the 'HandLinearDistance' and 'HandAngularDistance', take the last value in the group as it represents the total
106
+ total_linear_distance = group['HandL'].iloc[-1]
107
+ total_angular_distance = group['HandA'].iloc[-1]
108
+ # Assign these totals to new columns for every row in the group
109
+ group['LLabel'] = total_linear_distance
110
+ group['ALabel'] = total_angular_distance
111
+ return group
112
+ # Apply the labeling function to each trial group
113
+ df_labeled = df.groupby(['ParticipantID', 'BlockID', 'TrialID'], group_keys=False).apply(label_group)
114
+ return df_labeled
115
+
116
+
117
+ def split_data_by_theta_grouped(df):
118
+ grouped = df.groupby(['ParticipantID', 'BlockID', 'TrialID'])
119
+ theta_groups = {}
120
+ # 将每个组添加到对应Theta值的列表中
121
+ for _, group in grouped:
122
+ theta_value = group['Theta'].iloc[0]
123
+ if theta_value not in theta_groups:
124
+ theta_groups[theta_value] = []
125
+ theta_groups[theta_value].append(group)
126
+ train_dfs = []
127
+ test_dfs = []
128
+ for theta_value, groups in theta_groups.items():
129
+ # 计算训练集大小
130
+ n_train = int(len(groups) * 0.8)
131
+ # 随机选择训练集序列
132
+ np.random.seed(1) # 确保可重复性
133
+ train_indices = np.random.choice(len(groups), size=n_train, replace=False)
134
+ train_groups = [groups[i] for i in train_indices]
135
+ # 选择测试集序列
136
+ test_groups = [groups[i] for i in range(len(groups)) if i not in train_indices]
137
+ # 将训练集和测试集序列合并
138
+ train_df = pd.concat(train_groups)
139
+ test_df = pd.concat(test_groups)
140
+ train_dfs.append(train_df)
141
+ test_dfs.append(test_df)
142
+ # 合并所有训练集和测试集的DataFrame
143
+ final_train_df = pd.concat(train_dfs).sort_index()
144
+ final_test_df = pd.concat(test_dfs).sort_index()
145
+ return final_train_df, final_test_df
146
+
147
+
148
+ def split_data_by_theta_grouped(df):
149
+ grouped = df.groupby(['ParticipantID', 'BlockID', 'TrialID'])
150
+ theta_groups = {}
151
+ # 将每个组添加到对应Theta值的列表中
152
+ for _, group in grouped:
153
+ theta_value = group['Theta'].iloc[0]
154
+ if theta_value not in theta_groups:
155
+ theta_groups[theta_value] = []
156
+ theta_groups[theta_value].append(group)
157
+ train_dfs = []
158
+ test_dfs = []
159
+ for theta_value, groups in theta_groups.items():
160
+ # 计算训练集大小
161
+ n_train = int(len(groups) * 0.8)
162
+ # 随机选择训练集序列
163
+ np.random.seed(1) # 确保可重复性
164
+ train_indices = np.random.choice(len(groups), size=n_train, replace=False)
165
+ train_groups = [groups[i] for i in train_indices]
166
+ # 选择测试集序列
167
+ test_groups = [groups[i] for i in range(len(groups)) if i not in train_indices]
168
+ # 将训练集和测试集序列合并
169
+ train_df = pd.concat(train_groups)
170
+ test_df = pd.concat(test_groups)
171
+ train_dfs.append(train_df)
172
+ test_dfs.append(test_df)
173
+ # 合并所有训练集和测试集的DataFrame
174
+ final_train_df = pd.concat(train_dfs).sort_index()
175
+ final_test_df = pd.concat(test_dfs).sort_index()
176
+ return final_train_df, final_test_df
177
+
178
+
179
+ # 定义一个函数,将特征缩放到指定的范围内
180
+ def preprocess_features(train_df, test_df):
181
+ # 定义包含负值的特征列和其他特征列
182
+ negative_value_features = ['HandAAcc', 'HMDAAcc',"LeyeAAcc","HandLAcc","HMDLAcc"]
183
+ other_features = ["HMDA", "HMDAV", "HandA", "HandAV", "LeyeA", "LeyeAV","HMDL", "HMDLV","HandL", "HandLV"]
184
+ # 为包含负值的特征和其他特征创建两个不同的缩放器
185
+ scaler_negatives = MinMaxScaler(feature_range=(-1, 1))
186
+ scaler_others = MinMaxScaler(feature_range=(0, 1))
187
+ # 对训练集应用fit_transform,对测试集应用transform
188
+ train_df[negative_value_features] = scaler_negatives.fit_transform(train_df[negative_value_features])
189
+ test_df[negative_value_features] = scaler_negatives.transform(test_df[negative_value_features])
190
+ train_df[other_features] = scaler_others.fit_transform(train_df[other_features])
191
+ test_df[other_features] = scaler_others.transform(test_df[other_features])
192
+ return train_df, test_df
193
+
194
+ #这��function用于修改Evaluation的数据集
195
+ def main(Participant_ID):
196
+ # 读入数据
197
+ input_file_path = f'../Data/Study2Evaluation/{Participant_ID}_Trajectory.csv'
198
+ output_train_path = f'../Data/Study2Evaluation/Preprocessed/{Participant_ID}_train_data_preprocessed_evaluation.csv'
199
+ output_test_path = f'../Data/Study2Evaluation/Preprocessed/{Participant_ID}_test_data_preprocessed_evaluation.csv'
200
+
201
+ data=pd.read_csv(input_file_path)
202
+ data_cleaned = data.loc[:, ~data.columns.str.contains('^Unnamed')]
203
+ data_no_error = data_cleaned[data_cleaned['isError'] == False]
204
+ final_data = data_no_error[(data_no_error['TrialID'] != 0) & (data_no_error['TrialID'] != 6) & (data_no_error['TrialID'] != 12)
205
+ & (data_no_error['TrialID'] != 18) & (data_no_error['TrialID'] != 24) & (data_no_error['TrialID'] != 30) & (data_no_error['TrialID'] != 36)]
206
+ # 特征工程
207
+ processed_groups = final_data.groupby(['ParticipantID','BlockID', 'TrialID']).apply(process_single_sequence)
208
+ processed_df = processed_groups.reset_index(drop=True)
209
+
210
+ # 为处理完的数据添加标签
211
+ df_with_features_labelled = label_trials_with_motion_metrics(processed_df.copy())
212
+ final_train_df, final_test_df = split_data_by_theta_grouped(df_with_features_labelled)
213
+ # 特征预处理
214
+ final_train_df_preprocessed, final_test_df_preprocessed = preprocess_features(final_train_df.copy(), final_test_df.copy())
215
+ # 保存处理后的数据集到CSV文件
216
+ final_train_df_preprocessed.to_csv(output_train_path, index=False)
217
+ final_test_df_preprocessed.to_csv(output_test_path, index=False)
218
+
219
+
220
+ # def main(Participant_ID):
221
+ # # 读入数据
222
+ # input_file_path = f'../Data/{Participant_ID}_Trajectory.csv'
223
+ # output_train_path = f'../Data/Study1AllUSers/Preprocessed/{Participant_ID}_train_data_preprocessed.csv'
224
+ # output_test_path = f'../Data/Study1AllUSers/Preprocessed/{Participant_ID}_test_data_preprocessed.csv'
225
+ # data = pd. read_csv(input_file_path)
226
+ # participant_id = int(os.path.basename(input_file_path).split('_')[0])
227
+ # data.insert(0, 'ParticipantID', participant_id)
228
+ # data_cleaned = data.loc[:, ~data.columns.str.contains('^Unnamed')]
229
+ # data_no_error = data_cleaned[data_cleaned['isError'] == False]
230
+ # cleaned_data_path = '../Data/cleaned_data_trimmed.xlsx'
231
+ # cleaned_data = pd.read_excel(cleaned_data_path)
232
+ # filtered_data = pd.merge(data_no_error, cleaned_data, on=['ParticipantID', 'BlockID', 'TrialID'], how='inner')
233
+ # final_data = filtered_data[(filtered_data['TrialID'] != 0) & (filtered_data['TrialID'] != 8) & (filtered_data['TrialID'] != 16) & (filtered_data['TrialID'] != 24)]
234
+ # # 特征工程
235
+ # processed_groups = final_data.groupby(['ParticipantID','BlockID', 'TrialID']).apply(process_single_sequence)
236
+ # processed_df = processed_groups.reset_index(drop=True)
237
+ # # 为处理完的数据添加标签
238
+ # df_with_features_labelled = label_trials_with_motion_metrics(processed_df.copy())
239
+ # final_train_df, final_test_df = split_data_by_theta_grouped(df_with_features_labelled)
240
+ # # 特征预处理
241
+ # final_train_df_preprocessed, final_test_df_preprocessed = preprocess_features(final_train_df.copy(), final_test_df.copy())
242
+ # # 保存处理后的数据集到CSV文件
243
+ # final_train_df_preprocessed.to_csv(output_train_path, index=False)
244
+ # final_test_df_preprocessed.to_csv(output_test_path, index=False)
245
+
246
+ if __name__ == '__main__':
247
+ for i in range(79, 80):
248
+ main(str(i))
249
+
250
+
251
+
252
+
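The angular-velocity step in `preprocess.py` takes the angle between consecutive forward vectors (via the numerically stable `2*atan2(|u-v|, |u+v|)` form on unit vectors), divides by the time delta (timestamps are in milliseconds, hence the `/1000`), and inserts a 0 for the first sample. A small self-contained check on a synthetic trace rotating 90° per 100 ms frame (values invented for illustration):

```python
import numpy as np

def angle_deg(v1, v2):
    # Stable angle between two vectors: 2 * atan2(|u - v|, |u + v|) on unit vectors
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return np.degrees(2 * np.arctan2(np.linalg.norm(v1 - v2), np.linalg.norm(v1 + v2)))

# Forward vectors rotating 90 degrees per sample; timestamps in milliseconds
forward = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
ts = np.array([0.0, 100.0, 200.0])

angles = np.array([angle_deg(forward[i], forward[i + 1]) for i in range(len(forward) - 1)])
dt = np.diff(ts) / 1000                    # seconds
ang_vel = np.insert(angles / dt, 0, 0.0)   # deg/s; 0 at the first sample
print(ang_vel)                             # 0 at t0, then 900 deg/s
```

The `atan2` form avoids the precision loss of `arccos(dot(u, v))` when the vectors are nearly parallel, which matters for the small frame-to-frame rotations in 90 Hz-style VR traces.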
data_processing_code/test.py ADDED
@@ -0,0 +1,53 @@
+ #%%
+ import pandas as pd
+ import numpy as np
+ df = pd.read_csv('../Data/72_Trajectory.csv')
+
+
+ # from concurrent.futures import ThreadPoolExecutor
+ # max_timesteps = 296
+ # feature_num = 16
+ # label_nums = 9
+ #
+ # ## Data augmentation, exact version
+ # def generate_partial_sequences(row, max_timesteps=max_timesteps, features_per_timestep=feature_num, fill_value=-10):
+ #     # Determine the actual sequence length
+ #     actual_length_indices = np.where(row[:-label_nums] != fill_value)[0]  # Exclude the trailing label columns
+ #     if len(actual_length_indices) > 0:
+ #         actual_length = (actual_length_indices[-1] // features_per_timestep) + 1
+ #     else:
+ #         actual_length = 0
+ #     partial_sequences = []
+ #     step_size = max(1, int(actual_length * 0.1))  # Step is 10% of the actual length, at least 1
+ #     print(f'{actual_length},{step_size}')
+ #
+ #     for end_length in range(step_size, actual_length + step_size, step_size):
+ #         # Clamp the end point so it never exceeds the actual length
+ #         end_length = min(end_length, actual_length)
+ #         partial_sequence_list = row[:end_length * features_per_timestep].tolist()
+ #
+ #         selected_features_list = [partial_sequence_list[i:i + 10] for i in
+ #                                   range(0, len(partial_sequence_list), features_per_timestep)]
+ #         selected_features_list = [item for sublist in selected_features_list for item in sublist]
+ #
+ #         hand_rotation_axis = partial_sequence_list[-6:-3]
+ #         hand_direction = partial_sequence_list[-3:]
+ #
+ #         padding_length = (max_timesteps - end_length) * 10  # Padding length
+ #         selected_features_list.extend([fill_value] * padding_length)  # Add padding
+ #
+ #         selected_features_list.extend(row[-9:])  # Add the labels
+ #         selected_features_list.extend(hand_rotation_axis)  # Add HandRotationAxis and HandDirection
+ #         selected_features_list.extend(hand_direction)
+ #
+ #         print(len(selected_features_list))
+ #
+ #         partial_sequences.append(selected_features_list)
+ #     return partial_sequences
+ #
+ #
+ #
+ # df = pd.read_csv('../Data/testing/test_data_supervised.csv')
+ # partial_sequences_df = generate_partial_sequences(df.iloc[0])
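The commented-out augmentation above recovers a trial's real length from a sentinel-padded, flattened row: it finds the last non-sentinel index and integer-divides by the per-timestep feature count. A toy check of just that trick (small dimensions chosen for illustration; the study's real layout is 296 timesteps × 16 features, and labels are omitted here):

```python
import numpy as np

fill_value = -10
features_per_timestep = 10
max_timesteps = 5

# Build a flattened padded row: 3 real timesteps, then sentinel padding
row = np.concatenate([
    np.arange(1, 3 * features_per_timestep + 1, dtype=float),             # 30 real values
    np.full((max_timesteps - 3) * features_per_timestep, fill_value, float),  # 20 sentinels
])

# Recover the real length the same way the augmentation code does
idx = np.where(row != fill_value)[0]
actual_length = (idx[-1] // features_per_timestep) + 1 if len(idx) else 0
print(actual_length)  # 3 timesteps of real data
```

This only works because the sentinel (`-10` here) can never occur as a real feature value; the scaled features in this pipeline lie in [-1, 1] or [0, 1], so the sentinel is safely out of range.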
upload_to_hf.py ADDED
@@ -0,0 +1,23 @@
+ from pathlib import Path
+
+ from huggingface_hub import HfApi
+
+
+ REPO_ID = "Gilfoyle727/vr-ray-pointer-landing-pose"
+ REPO_TYPE = "dataset"
+ LOCAL_DIR = Path(__file__).resolve().parent
+
+
+ def main() -> None:
+     api = HfApi()
+     api.create_repo(repo_id=REPO_ID, repo_type=REPO_TYPE, private=False, exist_ok=True)
+     api.upload_large_folder(
+         repo_id=REPO_ID,
+         repo_type=REPO_TYPE,
+         folder_path=str(LOCAL_DIR),
+     )
+     print(f"https://huggingface.co/datasets/{REPO_ID}")
+
+
+ if __name__ == "__main__":
+     main()