Research on a Hand-Eye Robot for Knee Surgery Assistance and Pre-Clinical Experiments
Abstract
With continuing advances in medical theory, surgery is developing toward ever greater precision and complexity. Technologies that have developed rapidly over recent decades, such as computers, robotics, electronic information, and network communication, are being applied in medicine more and more widely, and Robot Aided Surgery (RAS) systems are increasingly adopted in modern clinical practice.
Every year, many thousands of patients with knee-joint disease undergo Total Knee Replacement (TKR) surgery, hoping to recover some mobility and reduce pain. At present, prosthesis placement during surgery is ensured mainly by clinical experience and dedicated jigs, which are the main source of error. To overcome the many shortcomings of conventional jig-based knee surgery, computer- and robot-assisted TKR has emerged. Compared with manual knee replacement, assisting the operation with computers and robots, and using an expert system to evaluate loading and motion so as to find the optimal prosthesis placement, gives robot-assisted knee surgery better operating precision and improves the quality and success rate of knee replacement.
The difficulty of initial positioning in conventional robot-assisted TKR systems, and the fact that the patient must undergo two operations, have long been active research problems. Based on the concept of a robot-integrated surgical system, and targeting the shortcomings of conventional TKR, this thesis proposes the Hand-Eye Robot Aided Surgery (HERAS) model. In this model the cameras and the cutting tool are both mounted on the robot's end-effector; camera calibration and hand-eye calibration are used to build a dynamic navigation system, overcoming the limitation of static surgical navigation that the viewpoint cannot easily be changed. Applied to TKR, the positions of anatomical landmarks on the knee acquired with an infrared probe yield the exact intra-operative positions of the cutting planes and an accurate lower-limb alignment, improving surgical precision.
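The dynamic navigation described above amounts to chaining homogeneous transforms: once the hand-eye transform (camera frame relative to end-effector) is known from calibration, a landmark seen by the camera can be mapped into the robot base frame at any arm pose. The following is a minimal sketch of that chain, with hypothetical identity-rotation poses standing in for real kinematics and calibration results:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses (in practice from robot kinematics and calibration).
base_T_ee = hom(np.eye(3), [0.5, 0.0, 0.3])    # end-effector in base frame
ee_T_cam = hom(np.eye(3), [0.0, 0.05, 0.02])   # hand-eye transform X
cam_T_mark = hom(np.eye(3), [0.1, 0.0, 0.4])   # landmark pose seen by camera

# Chain the transforms: landmark expressed in the robot base frame.
base_T_mark = base_T_ee @ ee_T_cam @ cam_T_mark
# With identity rotations the translations add: (0.6, 0.05, 0.72).
print(base_T_mark[:3, 3])
```

Because the camera rides on the end-effector, this chain stays valid as the arm moves, which is exactly what makes the navigation dynamic rather than static.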
The work of this thesis covers four aspects: first, the design and implementation of a pre-clinical experimental system, based on the HERAS model, that meets clinical surgical requirements; second, online hand-eye calibration in HERAS; third, multibody motion segmentation in HERAS; and finally, the application of the HERAS model in various knee-surgery experiments and the analysis of the results. The main innovative contributions of the thesis are the following:
1) A pre-clinical experimental system based on the HERAS model and meeting clinical surgical requirements was designed and implemented. Exploiting the physiological landmarks of the knee, the system performs high-precision femur localization, tibia localization, and surgical cutting, and establishes an accurate lower-limb alignment. It satisfies clinical requirements on surgical safety, environment, and sterilization, clarifying the prospects of the HERAS model for clinical use.
2) To counter the effect of degenerate motions and noise on online hand-eye calibration accuracy, a motion-selection algorithm is proposed that determines its thresholds adaptively from the characteristics of the motion sequence itself. This improves the engineering practicality of online hand-eye calibration and provides an important guarantee for the safe and reliable use of a hand-eye robot during surgery.
3) During hand-eye robot-assisted surgery, several auxiliary localization tools must be used simultaneously and multiple moving targets must be tracked. Building on current visual servoing and tracking techniques, two multibody motion segmentation algorithms are proposed, one based on the multibody trifocal tensor and one based on straight-line optical flow. Computing from line correspondences rather than point correspondences alleviates the feature-occlusion problem, enriching the theory and methods of multibody motion segmentation in computer vision.
4) Using WATO, the robot-assisted TKR experimental system based on the HERAS model, phantom-bone experiments, animal-bone experiments, human-bone experiments, and pre-clinical (cadaver) experiments were carried out, with detailed accuracy analysis. Solutions are given for the problems encountered at each experimental stage, accumulating data and experience for clinical surgery.
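The adaptive motion selection of contribution 2 can be pictured as rejecting robot motions whose rotation is too small (near-degenerate for solving the hand-eye equation) against a threshold drawn from the sequence itself. The sketch below uses a hypothetical rule, half the median rotation angle, as a stand-in for the thesis's data-driven threshold:

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle in radians of a 3x3 rotation matrix."""
    c = (np.trace(R) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def select_motions(rotations, factor=0.5):
    """Indices of motions whose rotation angle exceeds an adaptive threshold.

    The threshold factor * median(angle) is a hypothetical stand-in for the
    thesis's rule; near-identity rotations, which destabilize hand-eye
    calibration, fall below it and are discarded.
    """
    angles = np.array([rotation_angle(R) for R in rotations])
    threshold = factor * np.median(angles)
    return [i for i, a in enumerate(angles) if a > threshold]

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

motions = [rot_z(0.01), rot_z(0.4), rot_z(0.6), rot_z(0.5)]
print(select_motions(motions))  # -> [1, 2, 3]; the near-identity motion is dropped
```

Tying the threshold to the sequence, rather than fixing it in advance, is what makes the selection robust across runs with very different motion magnitudes.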
With the rapid development of medical theory, surgical operations have become more elaborate and complicated. State-of-the-art technologies, such as computers, robotics, electronic information, and network communication, have been widely applied in medicine. As a result, robot aided surgery (RAS) is increasingly adopted in modern clinical operations.
Every year, thousands of patients suffering from joint diseases, such as rheumatoid arthritis or osteoarthritis, need total knee replacement (TKR) surgery to recover normal function. At present, the positioning of prosthetic components during surgery depends mainly on the surgeon's clinical experience and on dedicated surgical guides. To avoid the limitations of jig-based TKR systems, robot/computer assisted surgeries have developed rapidly, and better operative precision and surgical quality are expected from them.
However, classical robot aided TKR systems suffer from difficult initial positioning and require the patient to undergo two operations. According to the characteristics of TKR surgery, we designed a hand-eye robot aided surgical system and made it applicable to clinical trials. In this model, both the cameras and the cutting tool are fixed on the end-effector of the robot, which yields dynamic navigation instead of an inadequate static one. In total knee replacement surgeries, the information from fiducial marks helps the surgical robot determine the positions of the cutting planes automatically. Using this method, an accurate mechanical axis is established.
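At its simplest, deriving a cutting plane from fiducial marks is a plane-fitting problem: three non-collinear landmark positions determine a normal and an offset. A minimal sketch with invented probe coordinates (the thesis's actual planning pipeline is more involved):

```python
import numpy as np

def cutting_plane(p1, p2, p3):
    """Plane through three fiducial points: unit normal n and offset d (n.x = d)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal from two in-plane edge vectors
    n = n / np.linalg.norm(n)
    return n, float(n @ p1)

# Hypothetical fiducial positions (mm) reported by the optical probe.
n, d = cutting_plane([0, 0, 10], [20, 0, 10], [0, 30, 10])
# Coplanar points at height 10 give n = (0, 0, 1), d = 10.
print(n, d)
```

With more than three landmarks one would instead fit the plane in a least-squares sense, which averages out probe noise.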
The major contributions of this thesis comprise four parts: first, based on the HERAS model, we designed and realized a pre-clinical experimental system that fulfills the requirements of clinical applications; second, we addressed the problem of online hand-eye calibration; third, we proposed new methods for motion segmentation; and finally, we applied the HERAS model to TKR cadaver trials and resolved many practical problems in these experiments. Detailed descriptions of these contributions follow:
1) We designed and realized a pre-clinical experimental system based on the HERAS model. Using the fiducial marks on the knee, one can establish femur and tibia coordinate systems and obtain an accurate mechanical axis. The system fulfills the requirements of cadaver trials and is an important step toward clinical application.
2) To guarantee surgical safety, online hand-eye calibration must never be overlooked. After analyzing traditional offline and online hand-eye calibration methods, we proposed the concept of adaptive motion selection for online hand-eye calibration, which avoids not only degenerate cases but also small rotations that would lead to large calibration errors.
3) In the HERAS model for TKR, several assistant guides are usually used simultaneously to track multiple motions. Based on previous work on visual tracking and multibody motion segmentation, we proposed two new methods for segmenting multiple 3D motions from line correspondences: one based on the multibody trifocal tensor, the other on straight-line optical flow.
4) Moreover, we carried out experiments on different materials, including phantoms, animal bones, human bones, and cadavers, resolved practical problems in these experiments, and performed accuracy analysis. This experience and technology will be very helpful for future clinical surgery.
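Accuracy analyses in such trials typically report two quantities: the angular deviation between a planned and an achieved cutting plane, and the RMS positional error over corresponding landmarks. A sketch with invented measurements (the metrics, not the thesis's actual numbers):

```python
import numpy as np

def angular_error_deg(n_planned, n_achieved):
    """Angle in degrees between planned and achieved plane normals (sign-agnostic)."""
    a = np.asarray(n_planned, dtype=float)
    b = np.asarray(n_achieved, dtype=float)
    c = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

def rms_error(planned, measured):
    """RMS distance (input units) over corresponding landmark points."""
    d = np.linalg.norm(np.asarray(planned, float) - np.asarray(measured, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Invented example: a plane tilted by 2 degrees, landmarks off by 0.3 and 0.4 mm.
tilt = np.radians(2.0)
ang = angular_error_deg([0, 0, 1], [0, np.sin(tilt), np.cos(tilt)])
rms = rms_error([[0, 0, 0], [10, 0, 0]], [[0.3, 0, 0], [10, 0.4, 0]])
print(ang, rms)  # about 2.0 degrees and 0.354 mm
```

Reporting both quantities separates orientation error (which drives limb alignment) from pure positional error, which is why both appear in per-stage accuracy tables.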
