arXiv:2410.23530v3 Announce Type: replace-cross Abstract: Diffusion models achieve state-of-the-art performance in generating new samples but lack a low-dimensional latent space that encodes the data into meaningful features. Inversion-based methods address this by reversing the denoising trajectory, mapping each image back to its approximated starting noise. In this work, we thoroughly analyze this procedure and focus on the relation between the initial Gaussian noise, the generated samples, and their corresponding latent encodings obtained through DDIM inversion. First, we show that latents exhibit structural patterns in the form of less diverse noise predicted for smooth image regions. As a consequence of this deviation from pure Gaussian noise, we find that the space of image inversions is notably less amenable to manipulation than the original noise. Next, we explain the origin of this phenomenon, demonstrating that, during the first inversion steps, the noise-prediction error is much larger for plain areas than for the rest of the image. As a surprisingly simple solution, we propose replacing the first DDIM inversion steps with a forward diffusion process, which successfully decorrelates latent encodings, leading to higher-quality edits and interpolations. The code is available at https://github.com/luk-st/taba.
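
Below is a minimal sketch of the proposed fix, assuming a standard DDIM inversion loop over a noise-prediction network. The names (`eps_model`, `alphas_cumprod`, `invert_with_forward_start`) and the signature `eps_model(x, t)` are illustrative assumptions, not taken from the authors' repository; the step count `k` controls how many initial inversion steps are replaced by the forward process.

```python
import torch

@torch.no_grad()
def invert_with_forward_start(x0, eps_model, alphas_cumprod, num_steps=50, k=5):
    """DDIM inversion in which the first k steps are replaced by the
    forward diffusion process q(x_t | x_0), as the abstract proposes.
    All names here are illustrative, not from the authors' code.

    x0:             image tensor to invert, shape (B, C, H, W)
    eps_model:      noise-prediction network, eps_model(x, t) -> eps (assumed API)
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values, one per
                    training timestep (e.g. length 1000)
    num_steps:      total number of inversion steps
    k:              how many initial steps to run as forward diffusion
    """
    T = alphas_cumprod.shape[0]
    # Timesteps visited during inversion, from the clean image toward noise.
    timesteps = torch.linspace(0, T - 1, num_steps + 1).long()

    # Step 1: forward-diffuse directly to timestep t_k with fresh
    # Gaussian noise; this is what decorrelates the latent encoding.
    t_k = timesteps[k]
    a_k = alphas_cumprod[t_k]
    x = a_k.sqrt() * x0 + (1 - a_k).sqrt() * torch.randn_like(x0)

    # Step 2: continue with standard deterministic DDIM inversion.
    for i in range(k, num_steps):
        t_cur, t_next = timesteps[i], timesteps[i + 1]
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]

        eps = eps_model(x, t_cur)                          # predicted noise
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        # DDIM update, run in the noising direction.
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps

    return x  # latent encoding at the final (noisiest) timestep
```

Only the skipped first steps inject randomness; the remaining DDIM updates stay deterministic, so the returned latent can still be denoised back toward the input image while exhibiting statistics closer to Gaussian noise.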