We study (uniform) exponential convergence of the nth minimal worst-case error, which means that the nth minimal worst-case error converges to zero exponentially fast with increasing n. Furthermore, we consider how the error depends on the dimension s. To this end, we study the minimal number of information evaluations needed to compute an ε-approximation by considering several notions of tractability which are defined with respect to s and log ε⁻¹. We derive necessary and sufficient conditions on the weight sequences for obtaining exponential error convergence, and also for obtaining the various notions of tractability. It turns out that the conditions on the weight sequences are almost the same as for the information class which uses all linear functionals. The results are also constructive, as the considered algorithms are based on tensor products of Gauss–Hermite rules for multivariate integration. The obtained results are compared with the analogous results for integration in the same Hermite space. This allows us to give a new sufficient condition for EC-weak tractability for integration.