We consider inference about a scalar coefficient in a linear regression model. One previously considered approach to dealing with many controls imposes sparsity: it is assumed known that nearly all control coefficients are zero, or at least very nearly so. We instead impose a bound on the quadratic mean of the controls’ effect on the dependent variable. We develop a simple inference procedure that exploits this additional information in general heteroskedastic models, study its asymptotic efficiency properties, and compare it to a sparsity-based approach in a Monte Carlo study. The method is illustrated in three empirical applications.