Next-generation radio interferometers, like the Square Kilometre Array, will acquire large amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of these large-scale data sets is therefore of key importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach incorporates into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Exploiting the sampling density information greatly accelerates convergence, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimizes the sensitivity of the reconstruction. In connection with CLEAN, our approach can be seen as including both natural and uniform weighting in the same algorithmic structure, thereby simultaneously optimizing resolution and sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term, which generalizes the projection onto the ℓ2 ball in which the noise lives, used for naturally weighted data, to the projection onto a generalized ellipsoid that incorporates sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy: the convex imaging problem being solved retains a data fidelity dictated solely by the noise statistics. Through simulations with realistic sampling patterns, we demonstrate the acceleration obtained from the preconditioning. We also investigate the algorithm's performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multiscale CLEAN, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
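To make the generalization described above concrete, the sketch below contrasts the standard Euclidean projection onto an ℓ2 ball with a proximity operator of the same ball indicator computed in a weighted (non-Euclidean) metric, which corresponds to a projection onto a generalized ellipsoid. This is a minimal illustration in Python/NumPy, not the paper's MATLAB implementation: the function names are hypothetical, the preconditioning metric is assumed diagonal, and the multiplier is found by a simple bisection rather than whatever scheme the released code uses.

```python
import numpy as np

def proj_l2_ball(x, y, eps):
    """Euclidean projection of x onto the ball {z : ||z - y||_2 <= eps}."""
    r = x - y
    n = np.linalg.norm(r)
    return x.copy() if n <= eps else y + (eps / n) * r

def prox_l2_ball_weighted(x, y, eps, w, tol=1e-10, max_iter=200):
    """Proximity operator of the l2-ball indicator in the metric induced
    by a diagonal weight vector w (all w_i > 0):
        argmin_z  sum_i w_i (z_i - x_i)^2   s.t.  ||z - y||_2 <= eps.
    The KKT conditions give z_i = (w_i x_i + lam y_i) / (w_i + lam),
    and the multiplier lam >= 0 is located by bisection on the
    monotonically decreasing constraint gap ||z(lam) - y||_2 - eps."""
    if np.linalg.norm(x - y) <= eps:
        return x.copy()              # already feasible: prox is the identity

    def gap(lam):
        z = (w * x + lam * y) / (w + lam)
        return np.linalg.norm(z - y) - eps

    lo, hi = 0.0, 1.0
    while gap(hi) > 0:               # grow hi until the root is bracketed
        hi *= 2.0
    for _ in range(max_iter):        # plain bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    return (w * x + lam * y) / (w + lam)
```

With uniform weights (w = 1 everywhere) the weighted operator reduces to the ordinary Euclidean projection; non-uniform weights, e.g. derived from the sampling density, deform the geometry of the step while the feasible set, and hence the solution of the imaging problem, stays the noise-defined ℓ2 ball.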