Abstract
Physics-based simulations generate synthetic imagery to help organizations anticipate the performance of proposed remote sensing systems. However, manually constructing synthetic scenes sophisticated enough to capture the complexity of real-world sites can take days to months, depending on the size of the site and the desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and cost required to generate realistic synthetic scenes and introduces new approaches that improve material identification by exploiting information from all three input datasets.