
Semantic Segmentation Using Fully Convolutional Network for An Autonomous Vehicle

Rohan Dandekar, Ashu Prasad, Sagar Devanpalli

Abstract


Many people within the deep learning and computer vision communities understand what image classification is: a model tells us which single object or scene is present in an image. Semantic segmentation is the most informative form of segmentation, where we wish to classify each and every pixel within the image. Deep networks have proved to encode high-level features such as semantics, which can deliver superior performance in salient object detection. Our key insight is to build "Semantic Segmentation using Fully Convolutional Network for an Autonomous Vehicle", since semantic segmentation is one of the key problems in the field of computer vision. Looking at the bigger picture, semantic segmentation is among the high-level tasks that pave the way toward complete scene understanding. The significance of scene understanding as a core computer vision problem is highlighted by the fact that a growing number of applications derive information from imagery. These applications include self-driving vehicles, human-computer interaction, geo-sensing, etc. Self-driving vehicles require a deep understanding of their surroundings. To support this, a fully convolutional network classifies the road, pedestrians, cars, and sidewalks at pixel-level accuracy.

In this project, we develop a neural network and optimize it to perform semantic segmentation using fully convolutional network models.
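The abstract does not include an implementation, but the FCN idea it describes (convolve and downsample to extract features, score every class with a 1x1 convolution, upsample the score map back to the input resolution, and take a per-pixel argmax) can be sketched in a few lines. The following NumPy sketch is an illustration only, with hypothetical layer shapes and random weights, not the authors' trained model:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 3x3 'same' convolution. x: (C_in, H, W); w: (C_out, C_in, 3, 3)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad H and W by 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(kh):
                for dx in range(kw):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
        out[o] += b[o]
    return out

def maxpool2(x):
    """2x2 max pooling (stride 2)."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling back toward input resolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fcn_forward(img, params):
    f = np.maximum(conv2d(img, params["w1"], params["b1"]), 0)  # conv + ReLU
    f = maxpool2(f)                                             # downsample by 2
    # 1x1 convolution producing one score map per class
    s = np.tensordot(params["w_score"], f, axes=([1], [0]))
    s += params["b_score"][:, None, None]
    s = upsample2(s)                 # restore input spatial size
    return s.argmax(axis=0)          # per-pixel class label

# Demo with random weights: 3-channel 8x8 image, 4 classes (e.g. road,
# pedestrian, car, sidewalk).
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(0, 0.1, (8, 3, 3, 3)),
    "b1": np.zeros(8),
    "w_score": rng.normal(0, 0.1, (4, 8)),
    "b_score": np.zeros(4),
}
img = rng.random((3, 8, 8))
seg = fcn_forward(img, params)
print(seg.shape)  # one class label per input pixel
```

A real FCN stacks many such conv/pool stages (e.g. a VGG backbone) and uses learned transposed convolutions with skip connections for upsampling, but the shape bookkeeping is the same: the segmentation map has exactly the spatial size of the input image.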


Keywords


Computer vision, Self-driving vehicles, Pixel-level accuracy, Geo-sensing, Human-computer interaction.


References


K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

A. Krizhevsky, I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.

D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 633–640. IEEE, 2013.

C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2013.

R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition, 2014.

M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results. http://www.pascal-network.org/challenges/VOC/voc2011/workshop/index.html.

B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Computer Vision and Pattern Recognition, 2015.
N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based R-CNNs for fine-grained category detection. In Computer Vision–ECCV 2014, pages 834–849. Springer, 2014.

K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.

F. Ning, D. Delhomme, Y. LeCun, F. Piano, L. Bottou, and P.E. Barbano. Toward automatic phenotyping of developing embryos from videos. Image Processing, IEEE Transactions on, 14(9):1360–1371, 2005.

P.H. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.

