POQUITO Dataset

Dataset accompanying the paper: "Tiny Satellites, Big Challenges: A Feasibility Study of Machine Vision Pose Estimation for PocketQubes during Conjunctions"

Niki Sajjad, Andrew Price, Mehran Mirshams and Mathieu Salzmann
September 2024

The dataset formatting is adapted from the BOP format. The dataset is organized as:

POQUITO_Dataset
|---1_Distance_Check_Data
    |---Training
    |---Validation
    |---Testing
|---2_Performance_Check_Data
    |---Training
    |---Validation
    |---Testing
        |---1_Stationary_cases
        |---2_Sequential_Trajectories
            |---Trajectory1
            |---Trajectory2
            |---Trajectory3
            |---Trajectory4
            |---Trajectory5
|---3_Satellite_Model
    |---POQUITO_bbox.json
    |---POQUITO.ply
|---4_Earth_Background
    |---real_Earth_images
    |---real_Earth_images_cropped
    |---rendered_Earth_images
|---README.txt

[1] Contains images of the POQUITO target centred in the image. The distance from the camera to the POQUITO is slowly increased.

[2] Contains RGB images with various post-processing effects applied:
    -foc-   camera depth-of-field (focal) blur, applied as Gaussian blurring.
    -mot-   motion blur, applied with custom Xiaolin Wu filter kernels.
    -earth- images of the Earth inserted into the background.

Testing contains two separate cases, <1_Stationary_cases> and <2_Sequential_Trajectories>.

/Testing/1_Stationary_cases/ is similar to Training and
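As an illustration of the -foc- effect, a depth-of-field blur of this kind can be approximated by a separable Gaussian convolution. The sketch below is our own minimal pure-NumPy example, not the dataset's actual rendering code; the function names and the sigma value are hypothetical, and the true blur parameters used for the dataset are not specified in this README.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Build a normalized 1-D Gaussian kernel (radius ~ 3*sigma by default)."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(image, sigma):
    """Separable Gaussian blur: convolve rows, then columns (edge-padded)."""
    k = gaussian_kernel_1d(sigma)
    pad = len(k) // 2
    padded = np.pad(image, pad, mode="edge")
    # Row pass, then column pass; "valid" mode restores the original size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

# Example: blur a single-channel 64x64 image with an assumed sigma of 2.0.
img = np.zeros((64, 64))
img[32, 32] = 1.0
blurred = gaussian_blur(img, sigma=2.0)
```

For RGB frames, the same function would simply be applied per channel; a stronger sigma mimics a more defocused camera.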