The main goal of this work is to elucidate the impact that common aspects of rendered synthetic image generation may have on the performance of neural semantic segmentation models. Our study used a recent synthetic autonomous driving dataset as its main testbed, allowing us to investigate how different approaches to modeling geometric, material, and lighting detail affect segmentation performance. We also studied the impact of the rendering noise typically produced by path-tracing algorithms, as well as the effect of applying different color transformations and tone mapping operators.