Locally optimal proposal variances are introduced for RWM-within-Gibbs algorithms, and these state-dependent tunings are shown to theoretically outperform constant ones. Analogous state-dependent step sizes are discussed for MALA-within-Gibbs samplers, which constitute an efficient yet computationally affordable option. The efficiency gain from local tunings depends on the variability of the hierarchical target.
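To illustrate the idea of a state-dependent proposal variance in an RWM-within-Gibbs sampler, the following sketch uses a hypothetical hierarchical target (a log-variance hyperparameter with conditionally Gaussian latent effects); the model, the scale function `local_scale`, and the constant 2.4 are illustrative assumptions, not the paper's specific setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hierarchical target (an assumption, not the paper's model):
# x[0] is a log-variance hyperparameter with a N(0, 1) prior, and x[1:]
# are latent effects that are i.i.d. N(0, exp(x[0])) given x[0].
def log_target(x):
    log_var, z = x[0], x[1:]
    return (-0.5 * log_var**2
            - 0.5 * z.size * log_var
            - 0.5 * np.sum(z**2) / np.exp(log_var))

# State-dependent proposal scale: for a latent coordinate, the local tuning
# tracks the current conditional standard deviation exp(x[0] / 2). Because
# it depends only on coordinates held fixed during that coordinate's update,
# the proposal remains symmetric and no Hastings correction is needed.
def local_scale(x, i):
    return 2.4 * np.exp(0.5 * x[0]) if i > 0 else 0.5

def rwm_within_gibbs(x0, n_iter):
    """One-coordinate-at-a-time random-walk Metropolis within Gibbs."""
    x = np.asarray(x0, dtype=float).copy()
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        for i in range(x.size):
            prop = x.copy()
            prop[i] += local_scale(x, i) * rng.normal()
            # Plain Metropolis accept/reject (symmetric proposal).
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
        chain[t] = x
    return chain

chain = rwm_within_gibbs(np.zeros(4), 2000)
```

A constant-variance baseline is recovered by making `local_scale` return a fixed number for every coordinate, which is the comparison the stated result addresses.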