Abstract
This paper presents a vision-based control methodology that performs accurate, three-dimensional (3D) positioning and path-tracking tasks. Tested on the challenging manufacturing task of welding in an unstructured environment, the proposed methodology has proven highly reliable, consistently achieving terminal precision of 1 mm. A key factor limiting this precision is the ratio of camera-space resolution to physical space. This paper also presents a means of preserving, and even increasing, this ratio over a large region of the robot's workspace by using data from multiple vision sensors.