Research on Key Image Processing Techniques for Assisted Vision
Abstract
Assisted visual navigation technology uses computer vision in place of the human eye to perceive the surrounding environment, extract and segment targets of interest, understand those targets, and complete the corresponding visual tasks. Assisted visual navigation is already widely used in assistive vision for the blind, vision-guided vehicles, and intelligent surveillance, and has also found applications in military and aerospace fields. Its key image processing technologies include image enhancement, target detection, target tracking, and target recognition; although effective algorithms exist for each, problems remain in practice. In complex environments containing large amounts of redundant visual information, targets of interest must be extracted quickly and accurately and a visual processing channel established, so that computation concentrates on the information-rich regions of interest. When targets are occluded or rotated, or when similar targets appear, existing methods struggle to complete tracking and recognition tasks. And under severe weather conditions, degraded image quality further complicates subsequent visual tasks.
Addressing these key problems in assisted visual navigation, this dissertation studies image enhancement, target detection, target tracking, and target recognition in depth. Drawing on the properties of human vision, it extracts targets of interest in complex environments; based on physical models, it rapidly enhances hazy and unevenly illuminated images; it combines global and local features to recognize targets under rotation and deformation; and it uses particle filtering to track targets through occlusion and interference from similar targets. Finally, a spatial non-cooperative target measurement system was designed and implemented on a DaVinci-series DSP; it is compact and highly reliable, and can also be applied to missile defense, maritime surveillance, and other fields.
The main work and contributions of this dissertation are summarized as follows:
1. A target detection and visual-information processing-channel model based on the human visual attention mechanism is presented. The human visual system uses attention to process enormous amounts of visual input and respond in time. The dissertation first analyzes characteristics of human vision, laying a theoretical foundation for introducing the attention mechanism into target detection. Color, intensity, and orientation features are then extracted from video frames to form feature vectors, combined with depth information from binocular stereo vision and motion vectors from the optical-flow field, and the per-channel feature maps are fused into a saliency map in an information-theoretic manner. The saliency map makes salient targets in video easier to extract and establishes a visual-information processing channel, with targets of interest inside the channel and background information outside, laying the groundwork for subsequent target recognition and tracking. A sketch of the fusion step appears below.
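As a rough illustration of that fusion step, the minimal Python/NumPy sketch below combines normalized per-channel feature maps into one saliency map. The entropy-based channel weighting is an assumption standing in for the dissertation's information-theoretic fusion, not its exact formulation.

```python
import numpy as np

def fuse_saliency(feature_maps):
    """Fuse normalized per-channel feature maps (color, intensity,
    orientation, stereo depth, optical-flow magnitude), each a 2-D
    array in [0, 1], into a single saliency map."""
    weights = []
    for fm in feature_maps:
        hist, _ = np.histogram(fm, bins=64, range=(0.0, 1.0))
        p = hist / (hist.sum() + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        # Peaked (low-entropy) maps localize targets well -> higher weight.
        weights.append(1.0 / (entropy + 1e-6))
    weights = np.asarray(weights) / np.sum(weights)
    saliency = sum(w * fm for w, fm in zip(weights, feature_maps))
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```

Thresholding the fused map would then segment the salient targets that define the inside of the processing channel.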
2. An image enhancement algorithm based on bilateral filtering is proposed. Existing methods for images degraded by bad weather are computationally slow and produce visible halo artifacts. For hazy images, the algorithm builds on the atmospheric scattering model: grayscale erosion and dilation remove the influence of white objects on the ambient-light estimate, and a fast joint bilateral filter computes the atmospheric veil V quickly and accurately, from which the transmission t is obtained, avoiding the halo artifacts that ordinary bilateral filtering can introduce. For unevenly illuminated images, the illumination-reflection imaging model is combined with the RGB reflection characteristics of objects to propose the concept of a bright channel, and the illumination component is derived analytically; the fast joint bilateral filter then yields an accurate illumination component, from which the RGB reflection coefficients are solved via the imaging model. Experiments show that the restored images are rich in detail, natural, and sharp, and that the algorithm is substantially faster than other enhancement methods, which favors hardware implementation. A sketch of the dehazing pipeline follows.
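The dehazing branch can be sketched as follows, assuming the standard atmospheric model I = J·t + A·(1 − t) from the text. The veil refinement uses OpenCV-contrib's cv2.ximgproc.jointBilateralFilter as a stand-in for the dissertation's fast joint bilateral filter, and all parameter values (omega, t_min, kernel size) are illustrative assumptions.

```python
import cv2
import numpy as np

def dehaze(img_bgr, omega=0.95, t_min=0.1, ksize=15):
    """Single-image dehazing sketch based on the atmospheric scattering
    model I = J*t + A*(1 - t); parameter values are illustrative."""
    img = img_bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    # Grayscale erosion suppresses small white objects before estimating
    # the ambient light A from the brightest remaining pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    eroded = cv2.erode(gray, kernel)
    A = float(gray[eroded >= np.percentile(eroded, 99.9)].max())

    # Atmospheric veil V = A*(1 - t): start from the per-pixel channel
    # minimum and refine it edge-preservingly with a joint bilateral
    # filter guided by the gray image (needs opencv-contrib's ximgproc).
    rough_veil = omega * cv2.erode(img.min(axis=2), kernel)
    V = cv2.ximgproc.jointBilateralFilter(gray, rough_veil, -1, 0.1, 16)

    t = np.clip(1.0 - V / A, t_min, 1.0)      # transmission from the veil
    J = (img - A) / t[..., None] + A          # invert the imaging model
    return np.clip(J * 255.0, 0.0, 255.0).astype(np.uint8)
```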
3. A target recognition method combining local and global features is proposed. Studies in visual physiology and psychology indicate that the human visual system relies on an attention-selection mechanism, combined with visual memory, to understand scene content quickly. For a scene image, the Gist vector rapidly captures the global semantic gist, but it extracts no features for specific targets. After the attention mechanism segments a target, 3D-SIFT feature points are extracted from it: these local features describe target detail within the region of interest, and the 3D-SIFT descriptor is invariant to rotation, scaling, and illumination changes. The local features are combined with the global Gist information, and the resulting feature vectors are trained with a support vector machine (SVM). Experiments show that the method achieves good results in both target recognition and scene understanding. A sketch of the feature combination appears below.
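Here is a minimal sketch of the feature combination and SVM training, assuming Gist vectors and per-target local descriptors (such as 3D-SIFT) have already been computed; the mean-pooling of local descriptors and the SVM hyperparameters are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def combined_feature(gist_vec, local_descriptors):
    """Concatenate the global Gist vector with a pooled summary of the
    target's local descriptors (e.g., 3D-SIFT).  Mean pooling is an
    illustrative choice; the dissertation does not specify it."""
    return np.concatenate([gist_vec, local_descriptors.mean(axis=0)])

def train_recognizer(gists, descriptor_sets, labels):
    """gists: N global vectors; descriptor_sets: N arrays of per-target
    local descriptors; labels: N class ids."""
    X = np.stack([combined_feature(g, d)
                  for g, d in zip(gists, descriptor_sets)])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, labels)
    return clf
```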
4. A particle filtering method based on fuzzy C-means (FCM) clustering is proposed. It addresses two defects of standard particle filters: tracking results diverge when multiple similar targets are present, and the kernel-function window width is fixed. Taking state estimation in nonlinear, non-Gaussian systems as the main line, the method uses variable ellipses as particle regions; after importance resampling, the mean-shift algorithm finds each target's cluster center, FCM clustering partitions the particles into per-target sub-swarms, and each sub-swarm then estimates its target's final state and corrects the kernel window width. Experiments show that, compared with the traditional particle filter, the algorithm resolves the divergence problem, tracks multiple targets accurately under rotation, scale change, and occlusion, and needs fewer particles, giving good real-time performance. A simplified sketch of the clustering step appears below.
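The particle-clustering step can be sketched as follows, assuming resampled particles and mean-shift-seeded centers are available; the FCM update equations are the standard ones, while the fuzziness exponent and iteration count are illustrative.

```python
import numpy as np

def fcm_cluster_particles(particles, centers, m=2.0, iters=10):
    """Partition resampled particles (N x d states) among K targets by
    fuzzy C-means, seeded with the mean-shift cluster centers (K x d).
    The fuzziness exponent m = 2 is the usual illustrative choice."""
    for _ in range(iters):
        # Distances from every particle to every center (N x K).
        d = np.linalg.norm(particles[:, None, :] - centers[None, :, :],
                           axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)
        # Center update: weighted mean of particles with weights u^m.
        w = u ** m
        centers = (w.T @ particles) / w.sum(axis=0)[:, None]
    # Hard assignment: each particle joins its strongest target sub-swarm.
    return u.argmax(axis=1), centers
```

Each returned sub-swarm would then be used to estimate its target's state and to adapt the kernel window width.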
5. A non-cooperative target pose measurement system is designed and implemented, integrating video coding/decoding, network communication, target detection, and stereo-vision measurement. The system uses Hough-transform line-feature extraction and tracking to locate the region of interest accurately; performs fast stereo matching and 3-D coordinate computation from feature-point coordinates; establishes a target coordinate frame; and uses the RANSAC algorithm to improve the accuracy of the relative pose computation. The hardware is built around a DaVinci-series DM6467T SoC with modular peripheral circuits. Hardware-in-the-loop experiments simulating autonomous rendezvous with a non-cooperative spacecraft show relative position accuracy within ±20 mm, relative attitude accuracy within ±2°, and a measurement rate of 10 fps. The system features an embedded design with real-time detection and tracking in hardware, small size, and low power consumption, and can be applied to spacecraft docking, maritime and battlefield surveillance, visual navigation, and other fields. A sketch of the pose-fitting step follows.
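Below is a sketch of the mismatch rejection and pose fit over triangulated feature points, assuming stereo matching has already produced corresponding 3-D point sets P and Q in the two frames; minimal-sample RANSAC with a Kabsch (SVD) rigid fit is a common stand-in for the dissertation's RANSAC-refined pose computation, and the threshold values are illustrative.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (Kabsch/SVD) mapping 3-D points P
    onto Q: returns R, t with Q ~= P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, tol=0.02):
    """Reject mismatched correspondences between two triangulated 3-D
    point sets before the final pose fit; tol (meters) is illustrative."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_fit(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_fit(P[best], Q[best])
```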
