
Semantic Segmentation with Peripheral Vision

EasyChair Preprint no. 4378

9 pages
Date: October 12, 2020


Deep convolutional neural networks achieve exceptional performance on many computer vision tasks, including semantic segmentation. Encoders pre-trained on large, relevant benchmarks contribute substantially to these successes. Under a domain shift, however, pre-trained encoders cannot be relied on to boost a model's performance, and transfer learning is not a universal solution for fields of science with only small accessible datasets. An alternative approach is to develop stronger network models applicable to any problem, rather than forcing scientists to search the literature for an encoder suited to their particular task. To steer the research trend in semantic segmentation toward more effective models, we propose a novel convolutional module that simulates the peripheral vision of the human eye. Using our module in an encoder-decoder configuration, extensive experiments show improved results on several challenging benchmarks, including PASCAL VOC2012 and CamVid.

Keyphrases: computer vision, deep learning, dilated convolution, image segmentation, peripheral vision, pre-trained model, semantic segmentation
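The preprint's module is not reproduced on this page, but the keyphrases ("dilated convolution", "peripheral vision") suggest the core mechanism: convolving the same input at increasing dilation rates, so one layer sees both a dense central context (fovea-like) and a sparse, wide one (periphery-like). The sketch below is an illustrative assumption, not the authors' implementation; `dilated_conv2d`, `peripheral_block`, and the dilation rates (1, 2, 4) are hypothetical names and choices.

```python
# Illustrative sketch only -- NOT the paper's actual module.
# Shows how dilation widens a kernel's receptive field without adding weights,
# and how parallel dilation rates mimic foveal (dense) vs peripheral (sparse) sampling.

def dilated_conv2d(image, kernel, dilation):
    """'Valid' 2D convolution whose kernel taps are spaced `dilation` pixels apart."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    span_h = (kh - 1) * dilation + 1  # effective receptive-field height
    span_w = (kw - 1) * dilation + 1  # effective receptive-field width
    out = []
    for y in range(ih - span_h + 1):
        row = []
        for x in range(iw - span_w + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += kernel[ky][kx] * image[y + ky * dilation][x + kx * dilation]
            row.append(acc)
        out.append(row)
    return out

def peripheral_block(image, kernel, rates=(1, 2, 4)):
    """Hypothetical 'peripheral' block: one kernel applied at several dilation
    rates in parallel, so a single layer captures narrow and wide context."""
    return [dilated_conv2d(image, kernel, d) for d in rates]

if __name__ == "__main__":
    img = [[float(x + y) for x in range(9)] for y in range(9)]  # 9x9 ramp image
    k = [[1.0] * 3 for _ in range(3)]                           # 3x3 summing kernel
    for d, b in zip((1, 2, 4), peripheral_block(img, k)):
        print(f"dilation {d}: output {len(b)}x{len(b[0])}")
```

Note how the output shrinks as the dilation grows (7x7, 5x5, then 1x1 for a 9x9 input): each branch trades spatial resolution for a wider field of view, which is the intuition behind combining them in one module.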

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
  @booklet{EasyChair:4378,
  author = {Mohammad Hamed Mozaffari Maaref and Won-Sook Lee},
  title = {Semantic Segmentation with Peripheral Vision},
  howpublished = {EasyChair Preprint no. 4378},
  year = {EasyChair, 2020}}