Motion Estimation and Vibration Measurement Based on Actively Blurred Image Sequences
Abstract
Vision-based measurement of motion and vibration is widely used in manufacturing, automation, and medicine. In industrial applications such as dynamic-behavior testing of electromechanical systems and visual servoing, target velocities and vibration frequencies are far higher than in ordinary settings, so high-speed motion estimation and high-frequency vibration measurement have become important research topics. This thesis presents a systematic, in-depth study of several central problems in this area: the acquisition and modeling of actively blurred image sequences; the measurement of 2D high-speed translation, 2D high-frequency in-plane vibration, and high-speed fixed-axis rotation and high-frequency angular vibration; and the estimation of high-speed out-of-plane motion.
     1. In high-speed motion or high-frequency vibration measurement, the target's image is motion-blurred. Such blur is usually treated as image degradation, but the "motion-from-blur" mechanism found in biological vision suggests that blur carries important motion information that can be exploited actively in motion and vibration measurement. Following the principle of active vision, this thesis argues that the degree of motion blur can be controlled by the vision system itself, and designs an acquisition system for actively blurred image sequences. The system predicts and adjusts the camera's exposure time, sampling period, and photoelectric conversion gain from the blur and saturation levels of previous frames, so that the acquired images carry sufficient motion-blur information while avoiding intensity saturation. Based on the characteristics of this acquisition system, a general mathematical model of the actively blurred image sequence is proposed, and specific blur models are derived for in-plane translation, rotation, and out-of-plane motion.
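The closed-loop exposure control described above can be sketched as a simple multiplicative update rule. All numbers here (the target blur extent, the saturation threshold, the brightness model) are illustrative assumptions for a toy simulation, not the thesis's actual controller:

```python
import numpy as np

def update_exposure(exposure, blur_extent, peak_level,
                    blur_target=20.0, sat_level=250, step=0.8):
    """Adjust exposure so the blur extent (pixels) stays near a target
    while the brightest pixel stays below saturation."""
    # Blur extent grows roughly linearly with exposure at constant velocity,
    # so a damped multiplicative correction drives it toward the target.
    if blur_extent > 0:
        exposure *= (blur_target / blur_extent) ** step
    # Back off exposure if the image is close to saturating.
    if peak_level >= sat_level:
        exposure *= sat_level / (peak_level + 1)
    return exposure

# Converge on a synthetic target moving at 2 px per time unit.
T = 1.0
for _ in range(20):
    blur = 2.0 * T             # px of smear this frame
    peak = min(255, 8 * T)     # crude brightness model (assumption)
    T = update_exposure(T, blur, peak)

assert abs(2.0 * T - 20.0) < 1e-3  # blur extent settled at the target
```

The damping exponent `step < 1` keeps the loop stable when the velocity estimate behind `blur_extent` is noisy.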
     2. High-speed translation is a simple but very important form of high-speed motion and high-frequency vibration. Block matching, the standard tool for translational motion analysis, breaks down when high velocity blurs the images. Drawing on the "motion-from-blur" and spatio-temporal integration mechanisms of biological motion perception, this thesis presents a method for estimating 2D high-speed translation from actively blurred images and geometric moments. Geometric moments play the role of the spatial integration stage; a theorem relating the geometric moments of a motion-blurred image to the underlying motion function is derived, and the motion information embedded in the blur is extracted from it to realize motion estimation. The proposed algorithm recovers the velocity and acceleration of a 2D accelerated motion from only two successive blurred frames, whereas existing approaches require at least three images for the same result.
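For constant-velocity blur, the kind of relation such a moment theorem captures can be checked in one dimension: the centroid (first geometric moment over the zeroth) of the blurred image equals the sharp centroid displaced by the mean motion over the exposure, x0 + vT/2. This is a toy 1D illustration with an assumed point target, not the thesis's 2D algorithm:

```python
import numpy as np

def centroid(img):
    """First geometric moment normalized by the zeroth (1D)."""
    x = np.arange(img.size)
    return (x * img).sum() / img.sum()

# Sharp 1D "image": a point target at x0.
N, x0 = 400, 100
sharp = np.zeros(N)
sharp[x0] = 1.0

# Motion blur at constant velocity v (px per time unit) over exposure T:
# the blurred image is the time average of shifted copies of the sharp one.
v, T, steps = 3.0, 20.0, 2000
blurred = np.zeros(N)
for t in np.linspace(0.0, T, steps):
    blurred[int(round(x0 + v * t))] += 1.0 / steps

# The blurred centroid sits at x0 + v*T/2, so v can be read off a single
# blurred frame given the sharp centroid.
v_est = 2.0 * (centroid(blurred) - centroid(sharp)) / T
assert abs(v_est - v) < 0.01
```

With two successive blurred frames, the difference of their centroids plays the same role and a second moment equation separates velocity from acceleration.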
     3. Vibration measurement is central to the testing of mechanical systems, especially microelectromechanical systems (MEMS). Conventional techniques use stroboscopic illumination to "freeze" the vibrating target and then process the resulting sharp image sequence. Existing blur-based approaches measure only the amplitude of high-frequency vibration, whose period is much shorter than the exposure time, and do not address the low-frequency case in which the period exceeds the exposure time (here "low-frequency" is relative to the exposure time, not low in the conventional sense). This thesis analyzes how blurred images of low- and high-frequency vibration differ from each other and from non-reciprocating translation, and derives, from the moment-based translation estimator above, concrete methods for measuring in-plane vibration in each regime. From one sharp image and one blurred image, or from two successive blurred frames, the algorithm recovers the amplitude, frequency, phase, and direction of a low-frequency vibration, and the amplitude and direction of a high-frequency one. Under heavy image noise, information fusion over a short blurred-image sequence is proposed, which markedly improves accuracy. A simple scheme for controlling the vision-system parameters is also designed to improve adaptability when the vibration frequency is unknown.
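In the high-frequency regime, where many vibration periods fit inside one exposure, a point target's time-averaged image approaches the arcsine position distribution of a sinusoid, whose variance is A²/2, so the amplitude can be read from the second central moment of the blur. A toy 1D check with an assumed point target (the grid and parameters are illustrative):

```python
import numpy as np

# Point target vibrating as x(t) = x0 + A*sin(2*pi*f*t),
# imaged with an exposure much longer than the vibration period.
x0, A, f, T_exp = 200.0, 15.0, 500.0, 0.1   # 50 periods per exposure
t = np.linspace(0.0, T_exp, 200_000)
positions = x0 + A * np.sin(2 * np.pi * f * t)

# Time-averaged (blurred) image on a pixel grid.
img, _ = np.histogram(positions, bins=np.arange(0, 401) - 0.5)
img = img / img.sum()

# First and second central geometric moments of the blur.
x = np.arange(400)
mean = (x * img).sum()
var = ((x - mean) ** 2 * img).sum()

# Var[A*sin(.)] = A**2 / 2, so the amplitude follows from the second moment.
A_est = np.sqrt(2.0 * var)
assert abs(mean - x0) < 0.1   # centroid stays at the rest position
assert abs(A_est - A) < 0.2
```

In the low-frequency case fewer than one period is integrated, the blur profile becomes asymmetric and phase-dependent, and that asymmetry is what makes frequency and phase recoverable as well.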
     4. Fixed-axis rotation and angular vibration occur in many industrial applications. Based on Cartesian and polar-coordinate models of circular-motion blur, this thesis proposes two algorithms, built on actively blurred image sequences and geometric moments, for estimating the parameters of fixed-axis rotation and angular vibration. The first uses the Cartesian blur model: a theorem relating the geometric moments of the blurred image in Cartesian coordinates to the circular motion function is derived, the circular-motion blur information is extracted from the image plane, and the rotation or angular-vibration parameters are computed from it. The second maps the translational motion and vibration estimators directly onto the polar plane, where rotation becomes translation along the angular coordinate. Unlike existing techniques for measuring fixed-axis rotation and angular vibration, the proposed algorithms can handle circular motion with a large angular displacement within a single exposure.
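The key step of the second algorithm, resampling the image on an (r, θ) grid so that rotation about the axis becomes a pure shift along θ, can be sketched with nearest-neighbour sampling. The grid sizes, the blob target, and the circular-mean readout below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def to_polar(img, center, n_r=60, n_theta=360):
    """Nearest-neighbour resampling of img onto an (r, theta) grid."""
    cy, cx = center
    r = np.linspace(2.0, min(img.shape) / 2.0 - 2.0, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.round(cy + rr * np.sin(tt)).astype(int)
    xs = np.round(cx + rr * np.cos(tt)).astype(int)
    return img[ys, xs], theta

def draw_blob(img, y, x):
    """3x3 bright blob centred at (y, x)."""
    img[int(round(y)) - 1:int(round(y)) + 2,
        int(round(x)) - 1:int(round(x)) + 2] = 1.0

# Two frames of a blob on a circle of radius 30, rotated by 40 degrees.
N, c, R = 101, 50, 30.0
delta = np.deg2rad(40.0)
frame0, frame1 = np.zeros((N, N)), np.zeros((N, N))
draw_blob(frame0, c, c + R)
draw_blob(frame1, c + R * np.sin(delta), c + R * np.cos(delta))

# On the polar grid, rotation about the centre is a shift along theta;
# the circular mean of the angular profile locates the blob robustly.
def angle_of(img):
    polar, theta = to_polar(img, (c, c))
    profile = polar.sum(axis=0)
    return np.angle((profile * np.exp(1j * theta)).sum())

shift = np.rad2deg((angle_of(frame1) - angle_of(frame0)) % (2 * np.pi))
assert abs(shift - 40.0) < 2.0
```

Once the scene lives on the polar grid, the translational blur estimators apply along θ unchanged, which is exactly what makes the mapping attractive.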
     5. The most widely used techniques for out-of-plane motion measurement are laser Doppler vibrometry (LDV), electronic speckle pattern interferometry (ESPI), and shearography, all of which require an additional light source and an interferometric optical path. By analyzing how out-of-plane motion blurs the image, this thesis proposes a blur model that combines defocus blur with the motion blur caused by image scaling, and derives a theorem relating the geometric moments of the blurred image to the out-of-plane motion function. The out-of-plane motion features are then extracted from the blurred image according to this theorem and used for estimation. The proposed method measures the parameters of out-of-plane motion with good accuracy.
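One ingredient of such a model, that axial motion scales the image about the principal point and multiplies its second central moments by the squared scale factor, can be checked directly. The pinhole relation s = Z0/Z1 linking scale to depth is standard; the point-cloud target and grid sizes are illustrative assumptions:

```python
import numpy as np

def second_central_moment(img):
    """Sum of the second central geometric moments (mu20 + mu02)."""
    ys, xs = np.indices(img.shape)
    m = img.sum()
    cy, cx = (ys * img).sum() / m, (xs * img).sum() / m
    return (((ys - cy) ** 2 + (xs - cx) ** 2) * img).sum() / m

# A cloud of bright points; moving the target toward the camera scales
# the image about the principal point by s = Z0/Z1 (pinhole model).
rng = np.random.default_rng(0)
pts = rng.uniform(-40, 40, size=(300, 2))
s = 1.2                                      # Z0/Z1: target moved closer

def render(points, size=201):
    img = np.zeros((size, size))
    for y, x in points:
        img[int(round(y)) + size // 2, int(round(x)) + size // 2] += 1.0
    return img

img0, img1 = render(pts), render(s * pts)

# mu20 + mu02 scales with s**2, so the scale factor (and hence the depth
# change) is recoverable from the moment ratio of two frames.
s_est = np.sqrt(second_central_moment(img1) / second_central_moment(img0))
assert abs(s_est - s) < 0.02
```

A blurred frame integrates this scaling over the exposure, which is why the moment theorem can tie the blurred image's moments to the continuous out-of-plane motion function.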
引文
[1] S. Rothberg, J. Bell,On the application of laser vibrometry to translational and rotational vibration measurements on rotating shafts,Measurement 35(2): 201-210, 2004
    [2] R. Marsili, L. Pizzoni, G. Rossi, Vibration measurements of tools inside fluids by laser Doppler techniques: uncertainty analysis, Measurement 27(2): 111-120, 2000
    [3] E.M.Lawrence, C.Rembe, MEMS characterization using new hybrid laser Doppler vibrometer/strobe video system, Reliability, Testing, and Characterization of MEM/MOEMS III, Proc. SPIE, San Jose, CA., United States, vol. 5343, pp.45-54, 2004
    [4] E.M. Lawrence, K.E. Speller, D.L. Yu, MEMS characterization using laser Doppler vibrometry, Reliability, Testing, and Characterization of MEM/MOEMS II, Proc. SPIE, San Jose, CA, United States, vol. 4980, pp. 51-62, 2003
    [5] W.O. Wong , K.T. Chan, Quantitative vibration amplitude measurement with time-averaged digital speckle pattern interferometry, Optics & Laser Technology 30(5): 317-324, 1998
    [6]D. N. Borza, High-resolution time-average electronic holography for vibration measurement, Optics and Lasers in Engineering 41(3): 515–527, 2004
    [7] C. Buckberry, M. Reeves, A.J. Moore, et al., Application of high-speed TV-holography to time-resolved vibration measurements, Optics and Lasers in Engineering 32(4): 387-394, 1999
    [8] K.M. Abedin, M. Wahadoszamen, A.F.M.Y. Haider, Measurement of in-plane motions and rotations using a simple electronic speckle pattern interferometer, Optics & Laser Technology 34(4): 293-298, 2002
    [9] C. Shakher, S. Prakash, Monitoring/measurement of out-of-plane vibrations using shearing interferometry and interferometric grating, Optics and Lasers in Engineering 38(5): 269-277, 2002
    [10]Davis, C.Q., Freeman, D.M., Using a light microscope to measure motions with nanometer accuracy. Optical Engineering 37(4): 1299-1304, 1998
    [11] W. Hemmers, M. S. Mermelstein, D. M. Freeman, Nanometer resolution of three-dimensional motions using video interference microscopy. Proc. IEEE MEMS’99, Orlando, Florida, pp.302-308, 1999
    [12] A. Hafiane, S. Petitgrand, O. Gigan, et al., Study of sub-pixel image processing algorithms for MEMS in-plane vibration measurements by stroboscopic microscopy. Microsystems Engineering: Metrology and Inspection III,Proc. SPIE, Munich, Germany, vol. 5145(2003), 169-179
    [13] A. Bosseboeuf, S.Petitgrand, Characterization of the static and dynamic behaviour of M(O)EMS by optical techniques : status and trends. Journal of Micromechanics and Microengineering, 13(4), S23-S33, 2003
    [14] C.Rembe, R. Kant, R. S. Muller, Optical measurement methods to study dynamic behavior in MEMS, Microsystems Engineering: Metrology and Inspection, Proc.SPIE, Munich, Germany, vol. 4400 pp.127-137, 2001
    [15] N.F. Smith, W.P. Eaton, D.M. Tanner, J.J. Allen, Development of characterization tools for reliability testing of MicroElectroMechnical System actuators. Proceedings of the 1999 MEMS Reliability for Critical and Space Applications, Proc. SPIE, Santa Clara, CA, USA, vol. 3880, pp.156-164, 1999
    [16]B. Serio, J. J. Hunsinger, D. Teyssieux, et al.,Phase correlation method for subpixel in-plane vibration measurements of MEMS by stroboscopic microscopy, Optical Measurement Systems for Industrial Inspection IV, Proc. of SPIE Munich, Germany, vol. 5856, pp.755-762, 2005
    [17] S. Petitgrand, A. Bosseboeuf, Simultaneous mapping of out-of-plane and in-plane vibrations of MEMS with (sub)nanometer resolution, Journal of Micromechanics and Microengineering 14(9): S97-101, 2004
    [18] A. Ongkodjojo, F.E.H.Tay, Characterizations of micromachined devices using planar motion analyzer (PMA), 4th IEEE Conference on Sensors, Irvine, CA, vols. 1 and 2, pp.361-364, 2005
    [19]D. J.Burns and H. F. Helbig, A system for automatic electrical and optical characterization of microelectromechanical devices, Journal of microelectromechanical system 8(4) 473-482, 1999.
    [20] S.G. Wu, L. Hong, Hand tracking in a natural conversational environment by the interacting multiple model and probabilistic data association (IMM-PDA) algorithm, Pattern Recognition 38 (11): 2143-2158, 2005
    [21] M.A. Garcia-perez, E.Peli, Simple non-invasive measurement of rapid eye vibration, Journal of Sound and Vibration 262(4): 877-888, 2003
    [22] H.Z. Ning, T.N. Tan, L.Wang, et al., People tracking based on motion model and motion constraints with automatic initialization, Pattern Recognition, 37(7): 1423-1440, 2004
    [23] H.Z. Ning, T.N. Tan, L.Wang, et al., Kinematics-based tracking of human walking in monocular video sequences, Image and Vision Computing 22 (5): 429-441, 2004
    [24] L.Wang, T.N. Tan, H.Z. Ning, W.M.Hu, Silhouette analysis-based gait recognition for human identification, IEEE Transactions on Pattern analysis and Machine Intelligence 25 (12): 1505-1518, 2003
    [25] W.M. Lu, Y.P. Tan, A vision-based approach to early detection of drowning incidents in swimming pools, IEEE Transactions on Circuits and Systems for Video Technology, 14 (2): 159-178, 2004
    [26]J.G. Lou, T.N. Tan, 3-D model-based vehicle tracking,IEEE Transactions on Image Processing, 14(10): 1561-1569, 2005
    [27] T. N. Tan and K. D. Baker, Efficient image gradient based vehicle localization, IEEE Transactions on Image Processing, 9(11): 1343-1356, 2000.
    [28] Y. Hao, J. G. Lou, H. Z. Sun, et al., Efficient and robust vehicle localization, Proc. Int. Conf. Image Process., Thessaloniki, Greece, vol.2, pp.355-358, 2001,.
    [29] R. Cutler and L. S. Davis, Model-based object tracking in monocular image sequences of road traffic scenes, IEEE Transactions on Pattern analysis and Machine Intelligence, 22(8): 781-796, 2000.
    [30] M. Haag and H. H. Nagel, Combination of edge element and optical flow estimates for 3-D-model-based vehicle tracking in traffic image sequences, International Journal of Computer Vision, 35(3): 295-319, 1999
    [31] W.M. Hu, X.J. Xiao, D. Xie, et al., Traffic accident prediction using 3-D model-based vehicle tracking, IEEE Transactions on Vehicular Technology 53 (3): 677-694, 2004
    [32] H. Li, and S.X. Yang, A behavior-based mobile robot with a visual landmark-recognition system,IEEE/ASME Transactions on Mechatronics, 8(3):390-400, 2003
    [33]S.Y. T. Lang, and Y.L. Fu, Visual measurement of orientation error for a mobile robot,IEEE Transactions on Instrumentation and Measurement, 49(6)1344-1357, 2000
    [34] P. Saeedi,P. D. Lawrence,and D. G. Lowe,Vision-based 3-D trajectory tracking for unknown environments,IEEE Transactions on Robotics, 22(1):119-136, 2006
    [35] H. Zhang,and J. P. Ostrowski,Visual motion planning for mobile robots, IEEE Transactions on Robotics and Automation, 18(2): 199-208, 2002
    [36]N. R. Gracias, S. van der Zwaan, A.Bernardino, et al.,Mosaic-based navigation for autonomous underwater vehicles,IEEE Journal of Oceanic Engineering, 28(4): 609-624, 2003
    [37]Purang Abolmaesumi, Septimiu E. Salcudean, Wen-Hong Zhu, et al., Image-guided control of a robot for medical ultrasound, IEEE Transactions on Robotics and Automation, 18(1): 11-22, 2002
    [38]J. S. Park and M. J. Chung, Path planning with uncalibrated stereo rig for image-based visual servoing under large pose discrepancy,IEEE Transactions on Robotics and Automation, 19(2): 250-258, 2003
    [39] M. Tigges, T. Wittenberg, P. Mergell, et al., Imaging of vocal fold vibration by digital multi-plane kymography, Computerized Medical Imaging and Graphics 23(6): 323–330, 1999
    [40]J. Lohscheller, M. Dollinger, M. Schuster, et al., Quantitative investigation of the vibration patternof the substitute voice generator, IEEE Transactions on Biomedical Engineering, 51(8): 1394-1400 2004
    [41]G. Stalidis, N. Maglaveras, S. N. Efstratiadis, et al., Model-based processing scheme for quantitative 4-D cardiac MRI analysis, IEEE Transactions on Information Technology in Biomedicine, 6(1): 59-72, 2002
    [42] P. Olaszek, Investigation of the dynamic characteristic of bridge structuresusing a computer vision method, Measurement 25(3): 227-236, 1999
    [43]R. Cucchiara, C. Grana, M. Piccardi, et al., Detecting moving objects, ghosts, and shadows in video streams, IEEE Transactions on Pattern Anaylysis and Machine Intelligence, 25(10): 1337-1342, 2003
    [44]M. Heikkila and M. Pietikainen, A texture-based method for modeling the background and detecting moving objects, IEEE Transactions on Pattern Anaylysis and Machine Intelligence, 28(4): 657-662, 2006
    [45]S.C.Chen, M.L. Shyu, S. Peeta, et al., Learning-based spatio-temporal vehicle tracking and indexing for transportation multimedia database systems, IEEE Transactions on Intelligent Transportation Systems, 4(3):154-166,2003
    [46] H.L. Eng, J.X. Wang, A. H. K. S. Wah, et al., Robust human detection within a highly dynamic aquatic environment in real time, IEEE Transactions on Image Processing, Vol. 15, No. 6, JUNE 2006 1583-1599
    [47] L. Wang, T.N. Tan, W.M. Hu, et al., Automatic gait recognition based on statistical shape analysis, IEEE Transactions on Image Processing, 12(9) 1120-1131, 2003
    [48] I. Haritaoglu, D. Harwood, L.S. Davis, et al., real-time surveillance of people and their activities, IEEE Transactions on Pattern Anaylysis and Machine Intelligence 22 (8): 809–830, 2000.
    [49] L.Y. Li, W.M. Huang, I.Y.H. Gu, et al., Statistical modeling of complex backgrounds for foreground object detection, IEEE Transactions on Image Processing, 13(11): 1459-1472,2004
    [50] M. B. Vanleeuwen and F. C.A.Groen, Vehicle detection with a mobile camera: spotting midrange, distant, and passing cars, IEEE Robotics & Automation Magazine, 12(1): 37-43, 2005
    [51] R.T. Collins, et al., A system for video surveillance and monitoring: VSAM Anal report, CMU-RI-TR-00-12, Technical Report, Carnegie Mellon University, 2000.
    [52]J. L. Barron, D. J. Fleet, and S. S. Beauchem, Performance of optical flow techniques, International Journal of Computer Vision, 12(1): 43-77, 1994.
    [53]B. K. P. Horn and B. G. Schunk, Determining optical flow, Artificial Intelligence, 17(1-3): 185-204, 1981.
    [54] S.J. Sun, D. Haynor, and Y.M. Kim, Motion estimation based on optical flow with adaptive gradients, Proceedings of 7th IEEE International Conference on Image Processing, Vancouver, BC, Canada , vol.1, pp. 852-855, 2000
    [55]A. Mitiche and A.R. Mansouri, On convergence of the Horn and Schunck optical-flow estimation method, IEEE Transactions on Image Processing, 13(6): 848-852, 2004
    [56] H. Foroosh, Pixelwise-adaptive blind optical flow assuming nonstationary statistics, IEEE Transactions on Image Processing, 14(2): 222-230,2005
    [57] E. Francomano, A. Tortorici V. Calderone, Regularization of optical flow with M-band wavelet transform,Computers and Mathematics with Applications 45(1-3): 437-452, 2003
    [58]B. Lucas and T. Kanade, An iterative image regitration technique with applications in stereo vision, Proceedure of the DARPA Image Understanding Workshop, pp. 121-130, 1981.
    [59] H.-H. Nagel, W. Enkemmann, An investigation of smoothness constraints for the estimation of displacement vector fields from image sequence, IEEE Transactions on Pattern Anaylysis and Machine Intelligence 8 (5) : 565–593, 1986
    [60] H.-H. Nagel, On the estimation of optical flow: relations between different approaches and some new results, Artificial Intelligence, 33(3): 299–324. 1987
    [61]S. Negahdaripour, Revised definition of optical flow: integration of radiometric and geometric cues for dynamic scene analysis, IEEE Transactions on Pattern Anaylysis and Machine Intelligence 20 (9): 961–979, 1998
    [62]C.H.Teng, S.H. Lai, Y.S.Chen, et al., Accurate optical flow computation under non-uniform brightness variations, Computer Vision and Image Understanding 97(3): 315–346, 2005
    [63] C.H.Teng, S.H. Lai, Y.S.Chen, An accurate and adaptive optical flow estimation algorithm, IEEE ICIP’04 Proceedings, Singapore, vol.3, pp.1839-1842
    [64] M. Yeasin, Optical flow in Log-mapped image plane—A new approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1): 125-131, 2002
    [65]Z. X. Zhang, J. Z. Li, X. Q. Wei, Robust computation of optical flow field with large motion, IEEE ICSP’04 Proceedings, Beijing, China, vol.1-3, pp.893-896, 2004
    [66]W. J. Christmas, Filtering requirements for gradient-Based optical flow measurement, IEEE Transactions on Image Processing, 9(10):1817-1820, 2000
    [67]P. Anandan, A computational framework and an algorithm for the measurement of visual motion, International Journal of Computer Vision, 2(3): 283-310, 1989.
    [68]M.J. Black and P. Anandan, The Robust Estimation of Multiple Motions: Parametric and Piecewise-Smooth Flow Fields, Computer Vision and lmage Understanding, 63(1): 75-104, 1996.
    [69] A. Singh, An estimation-theoretic framework for image flow computation, Proceedings of the International Conference on Computer Vision, Osaka, pp.168–177, 1990
    [70] J. S. Zelek,Towards Bayesian real-time optical flow,Image and Vision Computing 22(12) 1051-1069, 2004
    [71]D.S. Zhang, G.J. Lu, An edge and color oriented optical flow estimation using block matching, IEEE ICSP2000, Beijing, China, vol.1-3, pp.1026-1032, 2000
    [72]C.M. Sun, Fast optical flow using 3D shortest path techniques, Image and Vision Computing 20(13-14): 981–991, 2002
    [73]D. Fleet and A. Jepson, Computation of component image velocity from local phase information, International Journal of Computer Vision, 5(1): 77-104, 1990.
    [74]D. Heeger, Optical flow using spatiotemporal filters, International Journal of Computer Vision, 1(4): 270-302, 1988.
    [75]T. Gautama and M. M. Van Hulle, A phase-based approach to the estimation of the optical flow field using spatial filtering, IEEE Transactions on Neural Networks, 13(5): 1127-1136, 2002
    [76] I.A. Karaulova, P.M. Hall and A.D. Marshall, A hierarchical model of dynamics for tracking people with a single video camera, Proceedings of the 11th British Machine Vision Conference, Bristol, UK, vol.1, pp.352-361, 2000.
    [77] X.Y. Zhang, Y.C. Liu, and T. S. Huang, Motion analysis of articulated objects from monocular images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4): 625-636, 2006
    [78] Y. Huang and T.S. Huang, Model-based human body tracking, Proc. of 16th International Conference on Pattern Recognition, Quebec City, Que., Canada, vol.1, pp.552-555, 2002.
    [79]G. Mori and J. Malik, Estimating human body configurations using shape context matching, Proc. of 7th European Conference on Computer Vision, Copenhagen, Denmark, pp. 666-680, 2002.
    [80] Y. Song, L. Goncalves, E. Di Bernardo, and P. Perona, Monocular perception of biological motion in johansson displays, Computer Vision and Image Understanding, 81(3): 303-327, 2001.
    [81]Y. Song, L. Goncalves, and P.Perona, Unsupervised learning of human motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(7): 814-827, 2003
    [82] T. N. Tan, G. D. Sullivan, and K. D. Baker, Model-based localization and recognition of road vehicles, International Journal of Computer Vision, 27(1): 5–25, 1998.
    [83] W. W. Lok and K. L. Chan , Model-based human motion analysis in monocular video, ICASSP, Philadelphia, PA, USA, vol.2, pp. 697-700, 2005
    [84]E. L. Andrade, J. C. Woods, E. Khan, et al., Region-based analysis and retrieval for tracking ofsemantic objects and provision of augmented information in interactive sport scenes, IEEE Transactions on Multimedia, 7(6): 1084-1096 , 2005
    [85] S. McKenna, S. Jabri, Z. Duric, A. Rosenfeld and H. Wechsler, Tracking groups of people, Computer Vision and Image Understanding, 80 (1): 42-56, 2000.
    [86] C.Y. Xu, and J. L. Prince, Snakes, shapes, and gradient vector flow, IEEE Transactions on Image Processing, 7(3): 359-369,1998
    [87] N. Paragios and R. Deriche, Geodesic active contours and level sets for the detection and tracking of moving objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (3): 266-280, 2000.
    [88] N. Peterfreund, Robust tracking of position and velocity with kalman snakes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (6): 564-569, 2000.
    [89]Annalisa Milella and Roland Siegwart, Stereo-based ego-motion estimation using pixel tracking and iterative closest point, Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS 2006), pp.
    [90] M. R. Banham and A. K. Katsaggelos, Digital image restoration, IEEE Signal processing magazine, 1997, 14(2): 24-41
    [91] Y. Yitzhaky and N. S. Kopeika, Identification of blur parameters from motion blurred images, Graphical Models and Image processing, 59(5): 310–320, 1997
    [92] K. Panchapakesan, D. G. Sheppard, M. W. Marcellin, et al., Blur identification from vector quantizer encoder distortion, IEEE Transactions on Image Processing, 10(3): 465-470, 2001
    [93]A. N. Rajagopalan and Subhasis Chaudhuri, A recursive algorithm for maximum likelihood-based identification of blur from multiple observations, IEEE Transactions on Image Processing, 7(7): 1075-1079, 1998
    [94]H. Stark, P. Oskoui, High resolution image recovery from image-plane arrays using convex projections. Journal of the Optical Society of America A, 6(11):1715-1726, 1989
    [95]M.R.P. Homem, N.D.A. Mascarenhas, L.F. Costa1 and C. Preza, Biological image restoration in optical-sectioning microscopy using prototype image constraints, Real-Time Imaging 8(6) 475–490 (2002)
    [96] A. J. Patti, and Y. Altunbasak, Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants. IEEE Transactions on Image Processing, 10(1):179-186, 2001
    [97] R. R.Schulz and R. L. Stevenson, Extraction of high-resolution frames from video sequences, IEEE Transactions on Image P rocessing, 5(6):996-1011, 1996
    [98] R. C. Hardie, K.J. Barnard and E.E.Armstrong, Joint MAP registration and high-resolution image estimation using a sequence of undersampled images, IEEE Transactions on Image Processing, 6(12):1621-1633, 1997
    [99]G. Chantas, N. Galatsanos, and A. Likas,Maximum a posteriori image restoration based on a new directional continuous edge image prior, 2005 International Conference on Image Processing, Genova, Italy, vol.1,pp.941-944, 2005
    [100]Y. Wan and R. D. Nowak , A wavelet-based statistical model for image restoration, Proceedings 2001 International Conference on Image Processing, IEEE, Thessaloniki, Greece,vol1, pp.598-601, 2001
    [101]X.G.Cao, M.Yi, X.L. Wang, et al., Image restoration with edge-preserving regularization in wavelet domain, IEEE International Conference on Networking, Sensing and Control, Tucson, AZ,pp.543-548, 2005
    [102]N. K. Bose, M. K. Ng, A. C. Yau, Super-resolution image restoration from blurred observations, IEEE International Symposium on Circuits and Systems, Kobe, Japan vol.6, pp.6296-6299, 2005
    [103]S. Z. Li, Toward global solution to MAP image restoration and segmentation: using common structure of local minima, Pattern Recognition 33(4): 715-723, 2000
    [104] B. C.Tom, and A. K.Katsaggelos, Reconstruction of a high-resolution image by simultaneous registration, restoration, and interpolation of lo w-resolution images, Proc.1995 IEEE Int.Conf.Image P rocessing,Washington,DC, vol.2, pp. 539-542., 1995
    [105] D. Kundur, ,and D.Hatzinakos, Blind image deconvolution, IEEE Signal Processing Magzine,. 13(5):43-64, 1996
    [106] D. Kundur, ,and D.Hatzinakos, A novel blind deconvolution scheme for image restoration using recursive filtering, IEEE Transactions on Signal Processing, 46(2):375-389, 1998
    [107]X. Yu, C.R. Zou and L.X. Yang, Improved recursive inverse filtering method for blind image restoration,IEEE ICSP’02 Proceedings, Beijing, China, vol.1, pp.37-40, 2002
    [108]Xue Mei, Yang Luxi, Zou Cairong, et al., A modified NAS-RIF blind image restoration algorithm for noisy binary images, IEEE ICSP’02 Proceedings, Beijing, China, vol.1, pp.81-84, 2002
    [109]T.W.S.Chow, Xiao-Dong Li and S.-Y.Cho, Improved blind image restoration scheme using recurrent filtering, IEE Proceedings-Vision, Image and Signal Processing, 147(1):23-28,2000
    [110]J. Flusser, T. Suk, and S. Saic, Recognition of images degraded by linear motion blur without restoration, Computing Suppl., vol. 11, pp. 37-51, 1996.
    [111]J. Flusser, T. Suk, and S. Saic, Recognition of blurred images by the method of moments, IEEE Transactions on Image Processing, 5(3): 533-538, 1996.
    [112]J. Flusser and T. Suk, Invariants for Recognition of Degraded 1-D Digital Signals, Proc. 13th Int’l Conf. Pattern Recognition, Vienna, Austria,vol. 2, pp. 389-393, 1996.
    [113] J. Flusser and T. Suk, Degraded image analysis: An invariant approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(6): 590-603, 1998
    [114] T. Suk and J. Flusser, Combined blur and affine moment invariants and their use in pattern recognition, Pattern Recognition 36(12) 2895-2907, 2003
    [115] J. Flusser, J. Boldys, and B. Zitova, Moment forms invariant to rotation and blur in arbitrary number of dimensions, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):234-246, 2003
    [116]Y. N.Zhang, C.Y.Wen, Y. Zhang, Estimation of motion parameters from blurred images, Pattern Recognition Letters 21(5): 425-433, 2000
    [117] J. Liu, T.X Zhang, Recognition of the blurred image by complex moment invariants, Pattern Recognition Letters 26(8): 1128-1138, 2005
    [118] W. -G. Chen, N. Nandhakumar, W. N.Martin, Image motion estimation from motion smear-a new computational model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4), 412-424,1996
    [119]汪国宝,王石刚,于新瑞,徐威,高频振动振幅的视觉测量,机械工程学报,40(4):, 2004
    [120]白顺科,汪凤泉,转速的图像测量方法,东南大学学报,29(14): 149-153,1999
    [121]白顺科,汪凤泉,振幅测量的时间平均成像法,工程力学,16(6): 107-112,1999
    [122]白顺科,汪凤泉,角振幅的图像测量方法,振动工程学报,12(4): 481-485,1999
    [123]白顺科,汪凤泉,运动时间历程测量的积分成象方法,应用科学学报,18(2):156-160, 2000
    [124]白顺科,汪凤泉,随机振动幅值特征的图像测量方法, 振动、测试与诊断,20(1): 24-20,2000
    [125]康新,微电子机械系统(MEMS)中的光学测试方法研究,博士学位论文,96-102,2003
    [126] S.T.Hammett, M. A. Georgeson, A. Gorea, Motion blur and motion sharpening: temporal smear and local contrast non-linearity. Vision Research 38(14): 2099–2108, 1998
    [127] S.Chen, H. E. Bedell, H. Ogmen, A target in real motion appears blurred in the absence of other proximal moving targets, Vision Research 35(16): 3215–3328, 1995
    [128] A. K. Paakkonen, and M. J. Morgan, Effects of motion on blur discrimination. Journal of the Optical Society of America A 11(3) 992–1002, 1994
    [129] S.T.Hammett, Motion blur and motion sharpening in the human visual system, Vision Research 37(18): 2505–2510, 1997
    [130] D. C.Burr, Motion smear. Nature 284(5752): 164–165, 1980
    [131] D. C.Burr, J.Ross, M. C.Morrone, Seeing objects in motion. Proc. R. Soc. Lond. B Biol. Sci. 227(1247): 249–265,1986
    [132] C. H. Anderson and D. C.Van Essen, Shifter circuits: a computational strategy for dynamic aspects of visual processing. Proc. Natl. Acad. Sci. USA 84(17), 6297–6301.
    [133] A.-C. Aho, K. Donner, S. Helenius, et al., Visual performance of the toad (Bufo bufo) at low light levels: retinal ganglion cell responses and prey-catching accuracy. Journal of Comparative Physiology A 172(6): 671–682, 1993
    [134] L. O. Larsen, and J. N.Pedersen, The snapping response of the toad, Bufo bufo, towards prey dummies at very low light intensities. Amphibia-Reptilia 2: 321–327, 1982
    [135] E. J.Warrant, Seeing better at night: life style, eye design and the optimum strategy of spatial and temporal summation. Vision Research 39(9), 1611-1630, 1999
    [136]阮秋琦,数字图像处理学,电子工业出版社,北京,2001
    [137] Ofer Hadar, Stanley R. Rotman, Norman S. Kopeika, Target acquisition modeling of forward-motion considerations for airborne reconnaissance over hostile territory, Optical Engineering 33(9): 3106-3117, 1994
    [138]Ofer Hadar, Itai Dror, Norman S. Kopeika, Image resolution limits resulting from mechanical vibrations. Part IV: real-time numerical calculation of optical transfer functions and experimental verification, Optical Engineering 33 (2): 566-578, 1994
    [139] O. Hadar, M. Fisher, N. S. Kopeika, Image resolution limits resulting from mechanical vibrations. Part III: numerical calculation of modulation transfer function, Optical Engineering 31(3): 581-589, 1992
    [140]Yitzhak Yitzhaky, Ruslan Milberg, Sergei Yohaev, et al. Comparison of direct blind deconvolution methods for motion-blurred images, Applied Optics, 38(20): 4325-4332, 1999
    [141] D. Ziou and F. Deschenes, Depth from defocus estimation in spatial domain, Computer Vision and Image Understanding 81(2), 143–165 (2001)
    [142] A. N. Rajagopalan and S. Chaudhuri, Space-variant approaches to recovery of depth from defocused images, Computer Vision and Image Understanding 68(3): 309–329, 1997
    [143] J. Y. Aloimonos, I. Weiss, A. Bandyopadhyay, Active vision, International Journal of Computer Vision 1(4): 333–356, 1988
    [144] R. Bajcsy, Active perception versus passive perception. In: Proceedings of the 3rd IEEE workshop on computer vision, Bellaire, MI, IEEE Press, Los Alamitos, CA, 1985.
    [145] R. Bajcsy, Active perception. Proceedings of the IEEE 76(8): 996–1005, 1988
    [146] G.Backer, B. Mertsching, and M. Bollmann, Data- and Model-Driven Gaze Control for an Active-Vision System, IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 1415-1429, 2001
    [147] A. L.Yarbus, Eye movements and vision. Plenum, New York,1967.
    [148] M. V. Srinivasan, S. Venkatesh, (eds) From living eyes to seeing machines. Oxford University Press, Oxford, 1997
    [149] D. Floreano, T. Kato, D. Marocco, et al., Coevolution of active vision and feature selection, Biological Cybernetics 90(3): 218–228, 2004
    [150] K. Fukushima, A neural network for visual pattern recognition, IEEE Computer, 21(3): 65-75, 1988
    [151]K. Fukushima, A neural network model for selective attention in visual pattern recognition, Biological Cybernetics, 55(1): 5-15, 1986
    [152] K. Fukushima, A neural network model for selective attention in visual pattern recognition and associative recall, Applied Optics, 26(23): 4985-4992, 1987
    [153]G. Indiveri,A neuromorphic VLSI device for implementing 2-D selective attention systems,IEEE Transactions on Neural Networks,12(6): 1455-1463, 2001
    [154]G. Indiveri, R. Mürer, and J. Kramer, Active vision using an analog VLSI model of selective attention, IEEE Transactions on Circuits and Systems—II: Analog and Digital Signal Processing, 48(5): 492-500, 2001
    [155]K.W. Lee, H. Buxton, and J.F. Feng, Cue-guided search: a computational model of selective attention, IEEE Transactions on Neural Networks, 16(4): 910-923, 2005
    [156]T. Wada, and T. Matsuyama, Multiobject behavior recognition by event driven selective attention method, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8): 873-887, 2000
    [157]Tonia G. Morris, Timothy K. Horiuchi, and Stephen P. DeWeerth, Object-based selection within an analog VLSI visual attention system, IEEE Transactions on Circuits and Systems—II: Analog and Digital Signal Processing, 45(12): 1564-1572, 1998
    [158]T. Kirishima, K.Sato, and K. Chihara, Real-time gesture recognition by learning and selective control of visual interest points, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3): 351-364, 2005
    [159]Cheng-Yuan Liou, and Hsin-Chang Yang, Selective feature-to-feature adhesion for recognition of cursive handprinted characters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(2): 184-191, 1999
    [160]C. M. Privitera and L. W. Stark, Algorithms for defining visual regions-of-interest: comparison with eye fixations, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9): 970-982, 2000
    [161]Z. Wang, L.G. Lu, and A. C. Bovik, Foveation scalable video coding with automatic fixation selection, IEEE Transactions on Image Processing, 12(2): 243-253, 2003
    [162]L. H. Yu, and M. Eizenman, A new methodology for determining point-of-gaze in head-mounted eye tracking systems, IEEE Transactions on Biomedical Engineering, 51(10): 1765-1773, 2004
    [163] A. Lynn Abbott, A survey of selective fixation control for machine vision, IEEE Control Systems Magazine, 12(4): 25-31, 1992
    [164]D. Raviv, and M. Herman, A unified approach to camera fixation and vision-based road following, IEEE Transactions on Systems, Man, and Cybernetics, 24(8): 1125-1141, 1994
    [165]M. J. Barth and S. Tsuji, Egomotion determination through an intelligent gaze control strategy, IEEE Transactions on Systems, Man, and Cybernetics, 23(5): 1424-1432, 1993
    [166]A. Adam, E. Rivlin, and H. Rotstein, Fusion of fixation and odometry for vehicle navigation, IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 29(6): 593-603, 1999
    [167]C. Brown, Gaze controls with interactions and delays, IEEE Transactions on Systems, Man, and Cybernetics, 20(1): 518-527, 1990
    [168]Jorge Dias, Carlos Paredes, Inacio Fonseca, et al., Simulating Pursuit with Machine Experiments with Robots and Artificial Vision, IEEE Transactions on Robotics and Automation, 14(1): 1-18,1998
    [169]W.-C. Kim, J.-H. Kim, M. Lee, et al., Smooth pursuit eye movement system using artificial retina chip and shape memory alloy actuator, IEEE Sensors Journal, 5(3): 501-509, 2005
    [170] M. Bollmann, C. Justkowski, and B. Mertsching, Utilizing color information for the gaze control of an active vision system, Proc. Fourth Workshop Farbbildverarbeitung, V. Rehrmann, ed., pp. 73-79, 1998.
    [171] M. Kass, A. Witkin, and D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1(4): 321–331, 1987.
    [172] D. Terzopoulos and K. Fleischer, Deformable models, Visual Computer,4(6): 306–331, 1988.
    [173] C.Y. Xu, and Jerry L. Prince, Snakes, Shapes, and Gradient Vector Flow,IEEE Transactions on Image Processing,7(3):359-369, 1998
    [174] X. Han, C. Xu, and J. L. Prince, A Topology Preserving Level Set Method for Geometric Deformable Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6):755-768, June 2003.
    [175] R. D.Gregorio, and V. Parenti-Castelli, Mobility analysis of the 3-UPU parallel mechanism assembled for a pure translational motion, Journal of Mechanical Design, Transactions of the ASME 124(2):259-264, 2002
    [176] F. H. Schuling, H. A. K. Mastebroek, R. Bult, et al., Properties of elementary movement detectors in the fly Calliphora erythrocephala. Journal of Comparative Physiology A 165:179–192, 1989
    [177] M. V. Srinivasan, and D. R. Dvorak, Spatial processing of visual information in the movement-detecting pathway of the fly. Journal of Comparative Physiology 140(1):1–23, 1980
    [178] B.Guan, S. Wang, G. Wang, A biologically inspired method for estimating 2D high-speed translational Motion. Pattern Recognition Letters 26(15), 2450-2462, 2005
    [179] R. J. Prokop and A. P. Reeves, A survey of moment-based techniques for unoccluded object representation and recognition, CVGIP: Graphical Models and Image Processing, 54(5): 438-460, 1992.
    [180]M. K. Hu, Visual pattern recognition by moment invariants, IRE Transactions on Information Theory, 8(2): 179-187, 1962.
    [181]E. P. Lyvers, O. R. Mitchell, M. L. Akey, et al., Subpixel measurements using a moment-based edge operator, Pattern Analysis and Machine Intelligence, IEEE Transactions on, 11(12): 1293-1309, 1989.
    [182]L. M. Luo, C. Hamitouche, J. L. Dillenseger, et al., A moment-based three-dimensional edge operator, Biomedical Engineering, IEEE Transactions on, 40(7): 693-703, 1993.
    [183]L.-M. Luo, X.-H. Xie, and X.-D. Bao, A modified moment-based edge operator for rectangular pixel image, Circuits and Systems for Video Technology, IEEE Transactions on, 4(6): 552-554, 1994.
    [184] R. Mukundan, Estimation of quaternion parameters from two dimensional image moments, CVGIP: Graphical Models and Image Processing, 54(4): 345-350, 1992.
    [185]J. Brochard, L. Coutin, and M. Leard, Modelling of rigid objects by bidimensional moments. Applications to the estimation of 3D rotations, Pattern Recognition, 29(6): 889-902, 1996.
    [186]R. Mukundan and K. R. Ramakrishnan, An iterative solution for object pose parameters using image moments, Pattern Recognition Letters, 17(12):1279-1284, 1996.
    [187]B. G. Mertzios and K. Tsirikolias, Statistical shape discrimination and clustering using an efficient set of moments, Pattern Recognition Letters, 14(6): 517-522, 1993.
    [188]A. P. Reeves, R. J. Prokop, S. E. Andrews, et al., Three-dimensional shape analysis using moments and Fourier descriptors, Pattern Analysis and Machine Intelligence, IEEE Transactions on, 10(6): 937-943, 1988.
    [189]M. Gruber and K. Y. Hsu, Moment-based image normalization with high noise-tolerance, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(2): 136-139, 1997.
    [190]Y. Zhang, Y. Zhang, and C. Wen, A new focus measure method using moments, Image and Vision Computing, 18(12): 959-965, 2000.
    [191]F. Chaumette, Image moments: A general and useful set of features for visual servoing, IEEE Transactions on Robotics and Automation, 20(4):713-723, 2004.
    [192]X. J. Shen and J. M. Pan, Monocular visual servoing based on image moments, IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, vol. E87A, no. 7, pp. 1798-1803, 2004.
    [193]M. Tuceryan, Moment-based texture segmentation, Pattern Recognition Letters, 15(7): 659-668, 1994.
    [194]Q. Gao and F.-F. Yin, Two-dimensional direction-based interpolation with local centered moments, Graphical Models and Image Processing, 61(6): 323-339, 1999.
    [195]J. H. Sossa-Azuela, C. Yanez-Marquez, and J. L. D. d. Leon S, Computing geometric moments using morphological erosions, Pattern Recognition, 34(2): 271-276, 2001.
    [196]S. Belkasim and M. Kamel, Fast computation of 2-D image moments using biaxial transform, Pattern Recognition, 34(9): 1867-1877, 2001.
    [197]C.-H. Wu, S.-J. Horng, and P.-Z. Lee, A new computation of shape moments via quadtree decomposition, Pattern Recognition, 34(7): 1319-1330, 2001.
    [198]J. Martinez and F. Thomas, Efficient computation of local geometric moments, Image Processing, IEEE Transactions on, 11(9): 1102-1111, 2002.
    [199] M. Suhling, M. Arigovindan, P. Hunziker, et al., Multiresolution moment filters: Theory and applications, IEEE Transactions on Image Processing, 13(4): 484-495, 2004.
    [200] A. Stern, N. S. Kopeika, Analytical method to calculate optical transfer functions for image motion and vibrations using moments, Journal of the Optical Society of America A: Optics and Image Science, and Vision 14(2):388-396, 1997
    [201] A. Stern, E. Kempner, A. Shukrun, et al., Restoration and resolution enhancement of a single image from a vibration-distorted image sequence. Optical engineering 39, 2451-2457, 2000
    [202] Y.Yitzhaky, G.Boshusha, Y. Levy, et al., Restoration of an image degraded by vibrations using only a single frame. Optical Engineering 39(9):2083-2091, 2000
    [203] M.Ben-Ezra, S. K.Nayar, Motion-based motion deblurring. IEEE transactions on pattern analysis and machine intelligence 26(6): 689-698, 2004
    [204] O.Hadar, Z. Adar, A. Cotter, et al., Restoration of image degraded by extreme mechanical vibrations. Optics and Lasers in Engineering 29(4):171-177, 1997.
    [205] A. Törn, M. M. Ali, S. Viitanen, Stochastic global optimization: problem classes and solution techniques. Journal of Global Optimization 14(4): 437–447, 1999
    [206] A.R.Conn, N.I.M.Gould, Ph.L.Toint, Trust-region methods, SIAM, Philadelphia, 2000.
    [207] G. Gebert, D.Snyder, J.Lopez, et al.: Optical flow angular rate determination, IEEE International Conference on Image Processing, Barcelona, Spain vol.1, pp.949-952, 2003
    [208] J.Y. Chang, W.-F. Hu, M.-H. Cheng, et al., Digital image translational and rotational motion stabilization using optical flow technique, IEEE Transactions on Consumer Electronics 48(1):108-115, 2002
    [209] M.Kong, B. K. Ghosh, Rotational and translational motion estimation and selective reconstruction in digital image sequences, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing-Proceedings, Phoenix, AZ, USA, vol.6 pp.3353-3356, 1999
