Abstract
In recent papers the tensorisation of vectors has been discussed. In principle, this is the isomorphic representation of an $\mathbb{R}^{n}$ vector as a tensor. Black-box tensor approximation methods can be used to reduce the data size of the tensor representation. In particular, if the vector corresponds to a grid function, the resulting data size can become much smaller than $n$, e.g., $O(\log n)\ll n$. In this article we discuss the convolution of two vectors which are given via a sparse tensor representation. We want to obtain the result again in the tensor representation. Furthermore, the cost of the convolution algorithm should be related to the operands' data sizes. While $\mathbb{R}^{n}$ vectors can be considered as grid values of a function, we also apply the corresponding procedure to univariate functions.
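The tensorisation described above can be illustrated with a minimal sketch: a vector of length $n = 2^{d}$ is reshaped, isomorphically, into a $d$-way tensor with mode sizes $2 \times 2 \times \dots \times 2$, on which low-rank approximation methods can then operate. The function names `tensorise` and `vectorise` below are illustrative and not taken from the article; the convolution here is the plain full-vector convolution, not the compressed-format algorithm the article develops.

```python
import numpy as np

def tensorise(v):
    # Reshape a length-2^d vector into a 2 x 2 x ... x 2 tensor (d modes).
    # This is a pure re-indexing: no data is changed, only the view.
    n = v.size
    d = int(round(np.log2(n)))
    assert 2 ** d == n, "vector length must be a power of two"
    return v.reshape((2,) * d)

def vectorise(t):
    # Inverse map: flatten the d-way tensor back to an R^n vector.
    return t.reshape(-1)

d = 3
v = np.arange(2 ** d, dtype=float)   # n = 8 grid values
t = tensorise(v)                      # shape (2, 2, 2)

# Round trip recovers the original vector exactly.
assert np.array_equal(vectorise(t), v)

# Full convolution of two length-n vectors yields length 2n - 1;
# the article's point is to obtain such a result while staying in
# the (compressed) tensor representation.
w = np.ones(2 ** d)
c = np.convolve(v, w)
```

In the tensorised form, black-box approximation (e.g. truncated low-rank formats) can compress smooth grid functions far below the $n$ entries of the original vector.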