Imposing a combination of sparsity and weak sparsity on the parameters of the model, we first establish an oracle inequality for the Lasso. The inequality is valid even when the error terms are heteroskedastic and no structure is imposed on their time series dependence.
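For concreteness, oracle inequalities of this type are typically stated for a penalized least squares estimator of the form below; the display is a generic sketch with illustrative notation (N units, T periods, penalty level \lambda, sparsity index s, compatibility constant \kappa) rather than the exact statement established in the paper:
\[
\hat\beta \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}\; \frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\bigl(y_{it}-x_{it}'\beta\bigr)^{2} \;+\; 2\lambda \lVert \beta \rVert_{1},
\]
\[
\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\bigl(x_{it}'(\hat\beta-\beta_{0})\bigr)^{2} \;+\; \lambda \lVert \hat\beta-\beta_{0}\rVert_{1} \;\lesssim\; \frac{\lambda^{2}s}{\kappa^{2}} \quad\text{with high probability.}
\]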
Next, we provide upper bounds on the sup-norm estimation error of the Lasso. As opposed to the classical ℓ1- and ℓ2-bounds, the sup-norm bounds do not depend directly on the unknown degree of sparsity and are therefore well suited for thresholding the Lasso coefficients for variable selection. We give sufficient conditions under which thresholding results in consistent model selection, and we establish pointwise valid asymptotic inference for a post-thresholding estimator. Finally, we show how the Lasso can be desparsified in the correlated random effects setting and how this leads to uniformly valid inference even in the presence of heteroskedastic and autocorrelated error terms.
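The following is a minimal numerical sketch of the thresholding step just described, not the paper's procedure: it fits a Lasso on simulated data, thresholds the estimated coefficients for variable selection, and refits least squares on the selected variables as a post-thresholding estimator. The penalty level `alpha`, the threshold `tau`, and the simulated design are illustrative choices only.

```python
# A minimal sketch (not the paper's exact procedure): Lasso estimation,
# coefficient thresholding for variable selection, and an OLS refit on the
# selected variables (a post-thresholding estimator).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5                      # observations, regressors, true sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                            # a few strong signals
y = X @ beta + rng.standard_normal(n)     # homoskedastic noise, for illustration only

lasso = Lasso(alpha=0.1).fit(X, y)        # alpha plays the role of the penalty level lambda

tau = 0.05                                # illustrative threshold; the paper's sup-norm bounds motivate its choice
selected = np.flatnonzero(np.abs(lasso.coef_) > tau)

post = LinearRegression().fit(X[:, selected], y)   # post-thresholding OLS refit
print("selected variables:", selected)
print("post-thresholding estimates:", post.coef_)
```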