Tensorisation of vectors and their efficient convolution
  • Author: Wolfgang Hackbusch (1) wh@mis.mpg.de
  • Keywords: Mathematics Subject Classification (2000): 15A69; 15A99; 44A35; 65F99; 65T99
  • Journal: Numerische Mathematik
  • Publication year: 2011
  • Issue date: November 2011
  • Volume: 119
  • Issue: 3
  • Pages: 465-488
  • Full-text size: 299.4 KB
  • References: 1. Braess, D., Hackbusch, W.: On the efficient computation of high-dimensional integrals and the approximation by exponential sums. In: DeVore, R., Kunoth, A. (eds.) Multiscale, Nonlinear and Adaptive Approximation, pp. 39–74. Springer, Berlin (2009)
    2. Espig, M.: Effiziente Bestapproximation mittels Summen von Elementartensoren in hohen Dimensionen. Doctoral thesis, University of Leipzig (2008)
    3. Grasedyck, L.: Polynomial approximation in hierarchical Tucker format by vector-tensorization. Submitted (2010)
    4. Hackbusch, W.: Convolution of hp-functions on locally refined grids. IMA J. Numer. Anal. 29, 960–985 (2009)
    5. Hackbusch, W.: Tensor Spaces and Numerical Tensor Calculus. Monograph (in preparation)
    6. Hackbusch, W., Kühn, S.: A new scheme for the tensor representation. J. Fourier Anal. Appl. 15, 706–722 (2009)
    7. Khoromskij, B.N.: O(d log N)-quantics approximation of N-d tensors in high-dimensional numerical modeling. Constr. Approx. (2011). doi:10.1007/s00365-011-9131-1
    8. Oseledets, I.V.: Approximation of 2^d × 2^d matrices using tensor decomposition. SIAM J. Matrix Anal. Appl. 31, 2130–2145 (2010)
    9. Oseledets, I.V., Tyrtyshnikov, E.E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31, 3744–3759 (2009)
  • Author affiliation: 1. Max-Planck-Institut für Mathematik in den Naturwissenschaften, Inselstr. 22, 04103 Leipzig, Germany
  • Journal category: Mathematics and Statistics
  • Journal subjects: Mathematics
    Numerical Analysis
    Mathematical and Computational Physics
    Mathematical Methods in Physics
    Numerical and Computational Methods
    Applied Mathematics and Computational Methods of Engineering
  • Publisher: Springer Berlin / Heidelberg
  • ISSN: 0945-3245
Abstract
In recent papers the tensorisation of vectors has been discussed. In principle, this is the isomorphic representation of a vector in $\mathbb{R}^{n}$ as a tensor. Black-box tensor approximation methods can be used to reduce the data size of the tensor representation. In particular, if the vector corresponds to a grid function, the resulting data size can become much smaller than n, e.g., $O(\log n) \ll n$. In this article we discuss the convolution of two vectors which are given via a sparse tensor representation. We want to obtain the result again in the tensor representation. Furthermore, the cost of the convolution algorithm should be related to the operands' data sizes. While vectors in $\mathbb{R}^{n}$ can be considered as grid values of a function, we also apply the corresponding procedure to univariate functions.
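To make the tensorisation idea concrete, here is a minimal Python/NumPy sketch, not the paper's algorithm: the reshape ordering, the helper names tensorise/untensorise, and the use of numpy.convolve are illustrative assumptions. It shows the isomorphic reshaping of a vector of length n = 2^d into a d-fold tensor with all mode sizes equal to 2, and checks one entry of the ordinary convolution against its definition. The point of the paper is that, once the operands are stored in a compressed tensor format, the convolution can be carried out directly in that format at a cost tied to the compressed data sizes; the sketch below works with full (uncompressed) vectors only.

import numpy as np

def tensorise(v):
    """Reshape a vector of length n = 2^d into a d-dimensional
    tensor with all mode sizes equal to 2 (isomorphic representation)."""
    n = v.size
    d = int(np.log2(n))
    assert 2 ** d == n, "length must be a power of two"
    return v.reshape((2,) * d)

def untensorise(t):
    """Inverse map: flatten the 2 x ... x 2 tensor back to a vector."""
    return t.reshape(-1)

# Two vectors of length n = 2^d, stored here in full for illustration;
# in the paper they would be kept in a compressed tensor format.
d = 4
n = 2 ** d
x = np.random.rand(n)
y = np.random.rand(n)

X, Y = tensorise(x), tensorise(y)                 # tensor representations
z = np.convolve(untensorise(X), untensorise(Y))   # ordinary convolution, length 2n - 1

# Sanity check against the definition z_k = sum_i x_i * y_{k-i}.
k = 5
assert np.isclose(z[k], sum(x[i] * y[k - i] for i in range(k + 1)))
print(z.shape)  # (31,)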
