On the Hardness of Domain Adaptation and the Utility of Unlabeled Target Samples
  • Authors: Shai Ben-David (1) shai@cs.uwaterloo.ca
    Ruth Urner (1) rurner@cs.uwaterloo.ca
  • Keywords: Statistical Learning Theory; Domain Adaptation; Sample Complexity; Unlabeled Data
  • Journal: Lecture Notes in Computer Science
  • Year: 2012
  • Volume: 7568
  • Issue: 1
  • Pages: 139-153
  • Affiliations: 1. School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
  • ISSN: 1611-3349
Abstract
The Domain Adaptation problem in machine learning occurs when the distributions generating the test and training data differ. We consider the covariate shift setting, where the labeling function is the same in both domains. Many works have proposed algorithms for Domain Adaptation in this setting, but there are only very few generalization guarantees for these algorithms. We show that, without strong prior knowledge about the training task, such guarantees are actually unachievable (unless the training samples are prohibitively large). The contributions of this paper are twofold: On the one hand, we show that Domain Adaptation in this setup is hard. Even under very strong assumptions about the relationship between source and target distribution and, on top of that, a realizability assumption for the target task with respect to a small class, the required total sample sizes grow unboundedly with the domain size. On the other hand, we present settings where we achieve almost matching upper bounds on the sum of the sizes of the two samples. Moreover, the (necessarily large) samples can be mostly unlabeled (target) samples, which are often much cheaper to obtain than labeled ones. The size of the labeled (source) sample shrinks back to the standard dependence on the VC-dimension of the concept class. This implies that unlabeled target-generated data is provably beneficial for Domain Adaptation learning.
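For context, here is a minimal formalization of the covariate shift setting the abstract refers to. The notation below (P_S, P_T, f, H, X) is ours, sketched from the abstract rather than taken from the paper itself:

% A sketch of the covariate shift setting, in our own notation
% (P_S, P_T, f, H, X are illustrative symbols, not the paper's).
% Both tasks share one labeling function f : X -> {0,1}; only the
% marginal distributions over the domain X differ:
\[
  \text{source task: } (P_S, f), \qquad \text{target task: } (P_T, f),
  \qquad P_S \neq P_T \ \text{over } \mathcal{X}.
\]
% The learner receives a labeled sample drawn from P_S and an unlabeled
% sample drawn from P_T, and must output a hypothesis h from a class H
% with small target error:
\[
  \mathrm{Err}_{P_T}(h) \;=\; \Pr_{x \sim P_T}\bigl[\, h(x) \neq f(x) \,\bigr].
\]

In this notation, the hardness result says that the total number of examples across both samples needed to guarantee small target error grows with the size of the domain X, while the upper bounds show that all but roughly a standard VC-dimension-based number of those examples can be unlabeled draws from P_T.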
