Most deep learning-based segmentation methods focus on accurate pixel-wise classification, disregarding higher-order properties such as region connectivity, boundary smoothness, or the number of non-differentiable points on the object contour. To address these issues, we propose a new parameter-based object segmentation framework. Specifically, we represent the target object's boundary with parametric curves to capture these higher-order properties, and we estimate the curve parameters with a convolutional neural network. We also introduce a novel silhouette loss to train the network, enabling efficient parameter-based contour fitting. The proposed silhouette loss is based on a differentiable renderer and is well suited to the segmentation task because it shares the key property of IoU (intersection over union). Experimental results show that the proposed method yields object masks with the desired properties and achieves performance comparable to the state of the art on various tasks.
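The silhouette loss described above can be illustrated with a soft-IoU formulation between a (differentiably) rendered silhouette and the ground-truth mask. The following is a minimal sketch under that assumption; the function name and the exact form used in the paper are not specified here, so this is illustrative only.

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-6):
    """Illustrative soft-IoU loss between a rendered silhouette `pred`
    (values in [0, 1]) and a binary ground-truth mask `target`.
    Returns 1 - IoU, so identical masks give ~0 and disjoint masks ~1.
    In a real pipeline, `pred` would come from a differentiable renderer
    so gradients flow back to the curve parameters."""
    inter = np.sum(pred * target)
    union = np.sum(pred + target - pred * target)
    return 1.0 - inter / (union + eps)

# Toy example: two 8x8 masks.
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0       # ground-truth square
good = gt.copy()                                 # perfect silhouette
bad = np.zeros((8, 8)); bad[0:2, 0:2] = 1.0      # disjoint silhouette
```

Because the soft intersection and union are computed with products and sums, the loss is differentiable with respect to the rendered silhouette values, which is what makes it compatible with gradient-based training of the parameter-predicting network.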
This work was supported in part by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-01062). This work was also supported by the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University, in 2022.