We study (uniform) exponential convergence of the nth minimal worst-case error, which means that this error converges to zero exponentially fast with increasing n. Furthermore, we consider how the error depends on the dimension s. To this end, we study the minimal number of information evaluations needed to compute an ε-approximation by considering several notions of tractability, which are defined with respect to s and log ε⁻¹. We derive necessary and sufficient conditions on the weight sequences for obtaining exponential error convergence, and also for obtaining the various notions of tractability. It turns out that these conditions on the weight sequences are almost the same as for the information class that uses all linear functionals. The results are also constructive, as the considered algorithms are based on tensor products of Gauss–Hermite rules for multivariate integration. The obtained results are compared with the analogous results for integration in the same Hermite space. This allows us to give a new sufficient condition for EC-weak tractability for integration.