Transductive Transfer Machine
  • Authors: Nazli Farajidavar, Teofilo de Campos, Josef Kittler
    Affiliation: CVSSP, University of Surrey, Guildford, Surrey, GU2 7XH, UK
  • Journal: Lecture Notes in Computer Science
  • Year: 2015
  • Volume: 9005
  • Issue: 1
  • Pages: 623-639
  • Full text size: 474 KB
  • Book title: Computer Vision -- ACCV 2014
  • ISBN: 978-3-319-16810-4
  • Category: Computer Science
  • Subjects: Artificial Intelligence and Robotics
    Computer Communication Networks
    Software Engineering
    Data Encryption
    Database Management
    Computation by Abstract Devices
    Algorithm Analysis and Problem Complexity
  • Publisher: Springer Berlin / Heidelberg
  • ISSN: 1611-3349
Abstract
We propose a pipeline for transductive transfer learning and demonstrate it in computer vision tasks. In pattern classification, methods for transductive transfer learning (also known as unsupervised domain adaptation) are designed to cope with cases in which one cannot assume that training and test sets are sampled from the same distribution, i.e., they are from different domains. However, some unlabelled samples that belong to the same domain as the test set (i.e., the target domain) are available, enabling the learner to adapt its parameters. We approach this problem by combining three methods that transform the feature space. The first finds a lower-dimensional space that is shared between the source and target domains. The second applies local transformations to each source sample to further increase the similarity between the marginal distributions of the datasets. The third applies one transformation per class label, aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We show that this combination leads to an improvement over the state of the art on cross-domain image classification datasets, using raw images or basic features and a simple one-nearest-neighbour classifier.
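To make the three-stage pipeline described in the abstract concrete, the following is a minimal Python sketch. It is not the authors' Transductive Transfer Machine: each stage is replaced by a common stand-in (PCA for the shared lower-dimensional space, shifts towards target k-means centres for the per-sample marginal alignment, and class-mean shifts driven by 1-NN pseudo-labels for the class-conditional step), and the function name, parameters and default values are invented for illustration.

    # Hedged sketch of a three-stage transductive adaptation pipeline in the
    # spirit of the abstract, NOT the authors' exact method.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def adapt_and_classify(X_src, y_src, X_tgt, n_components=20, n_clusters=10):
        """Toy three-stage transductive adaptation followed by 1-NN classification."""
        y_src = np.asarray(y_src)

        # Stage 1: project source and target into a shared lower-dimensional
        # space (plain PCA on the pooled data is used here as a stand-in).
        pca = PCA(n_components=n_components).fit(np.vstack([X_src, X_tgt]))
        Zs, Zt = pca.transform(X_src), pca.transform(X_tgt)

        # Stage 2: per-sample (local) transformation -- shift each source sample
        # halfway towards its nearest target k-means centre, reducing the gap
        # between the marginal distributions of the two sets.
        centres = KMeans(n_clusters=n_clusters, n_init=10).fit(Zt).cluster_centers_
        nearest = np.argmin(((Zs[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
        Zs = Zs + 0.5 * (centres[nearest] - Zs)

        # Stage 3: one transformation per class -- align each source class mean
        # with the mean of the target samples that 1-NN pseudo-labels as that class.
        pseudo = KNeighborsClassifier(n_neighbors=1).fit(Zs, y_src).predict(Zt)
        for c in np.unique(y_src):
            if np.any(pseudo == c):
                Zs[y_src == c] += Zt[pseudo == c].mean(axis=0) - Zs[y_src == c].mean(axis=0)

        # Final prediction with a simple one-nearest-neighbour classifier,
        # as in the abstract.
        return KNeighborsClassifier(n_neighbors=1).fit(Zs, y_src).predict(Zt)

As in the paper, the classifier itself stays deliberately simple (one nearest neighbour); all of the adaptation effort goes into transforming the feature space before classification.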
