
[1]胡江策,卢朝阳,李静,等.采用超像素标注匹配的交通场景几何分割方法[J].西安交通大学学报,2018,52(08):74-79.[doi:10.7652/xjtuxb201808012]
 HU Jiangce,LU Zhaoyang,LI Jing,et al.A Geometric Segmentation Method for Traffic Scenes Using SuperPixel Label Matching[J].Journal of Xi'an Jiaotong University,2018,52(08):74-79.[doi:10.7652/xjtuxb201808012]

A Geometric Segmentation Method for Traffic Scenes Using SuperPixel Label Matching

《西安交通大学学报》[ISSN:0253-987X/CN:61-1069/T]

Volume:
52
Issue:
2018, No. 08
Pages:
74-79
Publication Date:
2018-08-10

文章信息/Info

Title:
A Geometric Segmentation Method for Traffic Scenes Using SuperPixel Label Matching
作者:
胡江策, 卢朝阳, 李静, 邓燕子, 刘阳
西安电子科技大学通信工程学院,710071,西安
Author(s):
HU Jiangce, LU Zhaoyang, LI Jing, DENG Yanzi, LIU Yang
School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
关键词:
交通场景; 超像素; 几何分割; 全连接条件随机场
Keywords:
traffic scene superpixel geometric segmentation fully connected conditional
random field
CLC Number:
TP391.41
DOI:
10.7652/xjtuxb201808012
摘要:
针对交通场景逐像素标注方法计算复杂、模型训练耗时长的问题,提出了一种基于超像素标注匹配的交通场景几何分割方法。该方法无需进行模型训练,根据全局特征搜索一组待分割交通场景图像的相似图像集。对待分割图像进行超像素分割和超像素块特征提取,并利用朴素贝叶斯原理进行似然比计算,根据似然比在相似图像集中进行超像素块标注匹配以实现初次分割。利用初次分割结果计算出一元势,应用全连接条件随机场模型对初次分割结果进行优化。实验结果表明,与传统的逐像素标注方法相比,本文方法的分割正确率和平均召回率分别提高了4%和3%,能够有效地实现交通场景几何分割。
Abstract:
A geometric segmentation method for traffic scenes based on superpixel label matching is proposed to address the high computational complexity and long model-training time of pixel-by-pixel labeling methods for traffic scenes. The proposed method requires no model training: a set of images similar to the traffic scene image to be segmented is retrieved according to global features. Then, superpixel segmentation and superpixel block feature extraction are performed on the image to be segmented, and likelihood ratios are calculated using the naive Bayes principle. Superpixel block label matching is performed within the set of similar images according to the likelihood ratios to obtain an initial segmentation. Finally, the initial segmentation result is used to compute the unary potential, and a fully connected conditional random field model is applied to refine the initial segmentation. Experimental results and a comparison with traditional pixel-by-pixel labeling methods show that the proposed method effectively achieves geometric segmentation of traffic scenes, with the segmentation accuracy and average recall improved by 4% and 3%, respectively.
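The retrieval-and-matching stages described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the Gaussian-kernel density estimates standing in for the learned likelihoods, and the toy 2-D features are all assumptions; the paper's GIST-style global descriptors, superpixel block features, and the fully connected CRF refinement step are omitted here.

```python
import numpy as np

def retrieve_similar(query_desc, db_descs, k=3):
    """Stage 1: rank database images by Euclidean distance between global
    descriptors (GIST-like in the paper) and return the k nearest indices."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

def likelihood_ratio(feat, exemplars_by_label, bandwidth=1.0):
    """Stage 2: naive-Bayes-style likelihood ratio for one superpixel feature.
    For each geometric label, estimate the density of `feat` under that label's
    exemplar features (Gaussian kernel, an assumed stand-in) and divide by the
    density under the exemplars of all other labels."""
    labels = list(exemplars_by_label)
    scores = {}
    for lab in labels:
        pos = exemplars_by_label[lab]
        neg = np.vstack([exemplars_by_label[l] for l in labels if l != lab])
        p = np.exp(-np.sum((pos - feat) ** 2, axis=1) / (2 * bandwidth ** 2)).mean()
        q = np.exp(-np.sum((neg - feat) ** 2, axis=1) / (2 * bandwidth ** 2)).mean()
        scores[lab] = p / (q + 1e-12)  # small constant guards against q == 0
    return scores

def match_labels(superpixel_feats, exemplars_by_label):
    """Stage 3: initial segmentation -- each superpixel takes the label with
    the highest likelihood ratio (the dense-CRF refinement is omitted)."""
    out = []
    for f in superpixel_feats:
        scores = likelihood_ratio(f, exemplars_by_label)
        out.append(max(scores, key=scores.get))
    return out

# Toy demo with three hypothetical geometric labels and 2-D features.
rng = np.random.default_rng(0)
centers = {"sky": (0.0, 0.0), "vertical": (5.0, 5.0), "ground": (10.0, 10.0)}
exemplars = {lab: rng.normal(c, 0.3, size=(20, 2)) for lab, c in centers.items()}
query = np.array([[0.1, -0.2], [9.8, 10.1]])
print(match_labels(query, exemplars))  # → ['sky', 'ground']
```

In the paper the unary potential of the fully connected CRF would then be derived from these initial labels and refined with Gaussian pairwise potentials; here each superpixel simply keeps its argmax label.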

参考文献/References:

[1]HOIEM D, EFROS A A, HEBERT M. Recovering surface layout from an image [J]. International Journal of Computer Vision, 2007, 75(1): 151-172.
[2]LADICKÝ Ľ, STURGESS P, ALAHARI K, et al. What, where and how many? Combining object detectors and CRFs [C]∥Proceedings of the 11th European Conference on Computer Vision. Berlin, Germany: Springer, 2010: 424-437.
[3]徐胜军, 韩九强, 何波, 等. 融合边缘特征的马尔可夫随机场模型及分割算法 [J]. 西安交通大学学报, 2014, 48(2): 14-19.
XU Shengjun, HAN Jiuqiang, HE Bo, et al. A region Markov random field model with integrated edge feature and image segmentation algorithm [J]. Journal of Xi’an Jiaotong University, 2014, 48(2): 14-19.
[4]邓燕子, 卢朝阳, 李静. 交通场景的多视觉特征图像分割方法 [J]. 西安电子科技大学学报(自然科学版), 2015, 42(6): 11-16.
DENG Yanzi, LU Zhaoyang, LI Jing. Segmentation of the image with multi-visual features for a traffic scene [J]. Journal of Xidian University, 2015, 42(6): 11-16.
[5]COSTEA A D, NEDEVSCHI S. Semantic channels for fast pedestrian detection [C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE, 2016: 2360-2368.
[6]COSTEA A D, NEDEVSCHI S. Fast traffic scene segmentation using multi-range features from multi-resolution filtered and spatial context channels [C]∥Proceedings of the IEEE Intelligent Vehicles Symposium. Piscataway, NJ, USA: IEEE, 2016: 328-334.
[7]GEORGE M. Image parsing with a wide range of classes and scene-level context [C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE, 2015: 3622-3630.
[8]TIGHE J, LAZEBNIK S. Superparsing: scalable nonparametric image parsing with superpixels [J]. International Journal of Computer Vision, 2013, 101(2): 352-365.
[9]YANG J, PRICE B, COHEN S, et al. Context driven scene parsing with attention to rare classes [C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE, 2014: 3294-3301.
[10]SHELHAMER E, LONG J, DARRELL T. Fully convolutional networks for semantic segmentation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651.
[11]NGUYEN K, FOOKES C, SRIDHARAN S. Deep context modeling for semantic segmentation [C]∥Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Piscataway, NJ, USA: IEEE, 2017: 56-63.
[12]KRÄHENBÜHL P, KOLTUN V. Efficient inference in fully connected CRFs with Gaussian edge potentials [C]∥Proceedings of Advances in Neural Information Processing Systems. Cambridge, MA, USA: MIT Press, 2011: 109-117.
[13]OLIVA A, TORRALBA A. Building the gist of a scene: the role of global image features in recognition [J]. Progress in Brain Research, 2006, 155: 23-36.
[14]FELZENSZWALB P F, HUTTENLOCHER D P. Efficient graph-based image segmentation [J]. International Journal of Computer Vision, 2004, 59(2): 167-181.
[15]MALISIEWICZ T, EFROS A A. Recognition by association via learning per-exemplar distances [C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE, 2008: 1-8.
[16]GRIDCHYN I, KOLMOGOROV V. Potts model, parametric maxflow and k-submodular functions [C]∥Proceedings of the IEEE International Conference on Computer Vision. Piscataway, NJ, USA: IEEE, 2013: 2320-2327.
[17]谭论正, 夏利民, 夏胜平. 基于多级Sigmoid神经网络的城市交通场景理解 [J]. 国防科技大学学报, 2012, 34(4): 132-137.
TAN Lunzheng, XIA Limin, XIA Shengping. Urban traffic scene understanding based on multilevel sigmoidal neural network [J]. Journal of National University of Defense Technology, 2012, 34(4): 132-137.
[18]LADICK L, RUSSELL C, KOHLI P, et al. Associative hierarchical random fields [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 36(6): 10561077.

备注/Memo

Supported by the National Natural Science Foundation of China (61502364)