Questions about scenario design

#2
by LeeHW - opened

Hello, thank you for your outstanding work and reply!

I have a question about the cameras (cameras 6, 7, and 8). How were the camera positions designed during the overall scene design? I observed that the vertical placement ratio of the cameras (distance from the topmost camera to the center : distance from the bottommost camera to the center) is almost 1:1 in Fig. 1 of the ETH data, whereas it looks closer to 2:1 in Fig. 5 of the GazeGene data.

What is the reason for designing the positions this way, and what impact does this design have on the results (e.g., does it improve robustness in top-down viewpoint scenarios, or in scenarios where the eyes are unobservable)? I ask because a lot of the data captured by cameras 6, 7, and 8 contains unobservable eyeballs.

Hi! Our goal is not to replicate the setup of ETH-XGaze, so we did not copy its camera configuration. The samples with unobservable eyeballs result from the combination of large camera angles and large head pose angles. If you wish to exclude such data, we recommend simply filtering out all samples where the head pose pitch or yaw exceeds 90 degrees.
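
For anyone who wants to apply that filter, here is a minimal sketch. Note that the file name `labels.npz`, the `head_pose` field, and the degree unit are placeholders for illustration, not the dataset's actual schema:

```python
import numpy as np

# Hypothetical label file and field name; adapt to the dataset's real format.
labels = np.load("labels.npz")
head_pose = labels["head_pose"]  # assumed shape (N, 2): [pitch, yaw] in degrees

# Keep only samples whose head pose pitch and yaw are both within +/-90 degrees.
keep = (np.abs(head_pose[:, 0]) <= 90) & (np.abs(head_pose[:, 1]) <= 90)
kept_indices = np.nonzero(keep)[0]
print(f"kept {keep.sum()} of {len(keep)} samples")
```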

I have a similar question. Would the settings of these three cameras affect the evaluation of gaze angles? In the cross-domain gaze estimation experiments reported in the paper, did you use data from all cameras in the dataset directly, or only a subset of the cameras?

Looking forward to your reply.

Hi! We simply use all samples of GazeGene for training and testing in the cross-domain experiments reported in the paper.
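
For reference, evaluating gaze over all cameras typically reduces to the mean angular error between predicted and ground-truth 3D gaze vectors, regardless of which camera captured each sample. A minimal sketch of that standard metric (not code from the paper):

```python
import torch
import torch.nn.functional as F

def angular_error_deg(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Mean angular error in degrees between two batches of (N, 3) gaze vectors."""
    pred = F.normalize(pred, dim=1)
    gt = F.normalize(gt, dim=1)
    cos = (pred * gt).sum(dim=1).clamp(-1.0, 1.0)  # clamp guards the acos domain
    return torch.rad2deg(torch.acos(cos)).mean()
```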

I still have a question. In the paper, Tab. 2 says "The loss function is the L1 loss between the predicted 3D unit head gaze vector and the ground truth." If 3 values are used to compute the L1 loss, each iteration provides more signal than using 2 values [pitch, yaw], so training may progress faster. Would this make comparisons at the same number of epochs unfair?

For fairness, models trained on the other datasets also use the 3D unit gaze vector for training.
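
To make the two formulations concrete, here is a minimal sketch of both losses. The pitch/yaw-to-vector conversion follows a convention common in gaze-estimation code; the exact axes may differ from the dataset's, and all tensors here are stand-ins:

```python
import torch
import torch.nn.functional as F

def pitchyaw_to_vector(py: torch.Tensor) -> torch.Tensor:
    """Convert (N, 2) [pitch, yaw] in radians to (N, 3) unit gaze vectors.
    Axis convention is an assumption; verify against the dataset's definition."""
    pitch, yaw = py[:, 0], py[:, 1]
    return torch.stack(
        (-torch.cos(pitch) * torch.sin(yaw),
         -torch.sin(pitch),
         -torch.cos(pitch) * torch.cos(yaw)),
        dim=1,
    )

# L1 loss on the 3D unit vector, as described in the paper's setting:
pred_vec = F.normalize(torch.randn(8, 3), dim=1)  # stand-in predictions
gt_py = 0.3 * torch.randn(8, 2)                   # stand-in pitch/yaw labels
loss_3d = F.l1_loss(pred_vec, pitchyaw_to_vector(gt_py))

# The 2-value [pitch, yaw] variant raised in the question, for comparison:
pred_py = 0.3 * torch.randn(8, 2)
loss_2d = F.l1_loss(pred_py, gt_py)
```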

vigil1917 changed discussion status to closed
