Abstract
Recent advances in deep learning have achieved remarkable performance on various computer vision tasks, including weakly supervised object localization. Weakly supervised object localization is practically useful since it does not require fine-grained annotations. Current approaches overcome the difficulties of weak supervision via transfer learning from models pre-trained on large-scale collections of general images such as ImageNet. However, they cannot be applied to the medical image domain, where no such priors exist. In this work, we present a novel weakly supervised learning framework for lesion localization named self-transfer learning (STL). STL jointly optimizes both classification and localization networks to help the localization network focus on correct lesions without any type of prior. We evaluate the STL framework on chest X-rays and mammograms, and achieve significantly better localization performance than previous weakly supervised localization approaches.
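The joint optimization of the two branches can be sketched as a weighted sum of per-branch losses. This is a minimal illustration, not the paper's exact formulation: the weighting hyperparameter `alpha` and the use of cross-entropy on both branches are assumptions for the sketch.

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class (probs is a probability vector).
    return -np.log(probs[label] + 1e-12)

def joint_loss(cls_probs, loc_probs, label, alpha):
    # Hedged sketch of a joint objective: a convex combination of the
    # classification-branch and localization-branch losses. `alpha` is a
    # hypothetical balancing hyperparameter; increasing it shifts training
    # focus toward the localization branch.
    l_cls = cross_entropy(cls_probs, label)
    l_loc = cross_entropy(loc_probs, label)
    return (1 - alpha) * l_cls + alpha * l_loc
```

With `alpha = 0` the objective reduces to pure classification; annealing `alpha` upward lets the localization branch take over once the shared features are useful, which matches the abstract's idea of helping the localization network focus on correct lesions without external priors.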