Contemporary text-to-image models exhibit a surprising degree of mode collapse, as is readily apparent when sampling several images for the same text prompt. While previous work has attempted to address this issue by steering the model with guidance mechanisms or by generating a large pool of candidates and refining it, in this work we take a different direction and pursue diversity in generations via noise optimization. Specifically, we show that a simple noise optimization objective can mitigate mode collapse while preserving the fidelity of the base model. We also analyze the frequency characteristics of the noise and show that alternative noise initializations with different frequency profiles can improve both optimization and search. Our experiments demonstrate that noise optimization yields superior generation quality and variety.
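To make the frequency-profile idea concrete, here is a minimal PyTorch sketch (ours, not the paper's code) of drawing Gaussian noise and restricting its spectrum with a radial mask; the cutoff value and the renormalization back to zero mean and unit variance are our own assumptions about how such an initialization could stay compatible with a diffusion sampler.

import torch

def freq_filtered_noise(shape, cutoff=0.5, lowpass=True, generator=None):
    # Gaussian noise whose 2D spectrum is restricted by a radial mask,
    # then renormalized so it remains a plausible diffusion init.
    # NOTE: cutoff and the renormalization are illustrative assumptions.
    noise = torch.randn(shape, generator=generator)
    spec = torch.fft.fftshift(torch.fft.fft2(noise), dim=(-2, -1))
    h, w = shape[-2], shape[-1]
    fy = torch.linspace(-1, 1, h).view(-1, 1)
    fx = torch.linspace(-1, 1, w).view(1, -1)
    radius = (fx ** 2 + fy ** 2).sqrt()
    mask = radius <= cutoff if lowpass else radius > cutoff
    spec = spec * mask
    out = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
    return (out - out.mean()) / (out.std() + 1e-8)

# e.g., a latent-sized initialization biased toward low frequencies
latents = freq_filtered_noise((1, 4, 64, 64), cutoff=0.3)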
Given a fixed text prompt and diffusion model, we optimize the noise initialization to increase visual diversity. Starting from i.i.d. noise samples, we generate a set of images. Guided by diversity and quality objectives (e.g., DINO dissimilarity, HPSv2), we update the noise so that the output images capture more diversity per text prompt.
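As a rough sketch of what the loop described above might look like, the snippet below runs gradient ascent on a batch of initial noises; generate, embed, and quality are hypothetical stand-ins for a differentiable (e.g., few-step) sampler, a DINO-style feature extractor, and an HPSv2-style scorer, and the loss weighting lam is an assumption, not the paper's exact objective.

import torch
import torch.nn.functional as F

def diversity_loss(feats):
    # Mean pairwise cosine similarity between image embeddings;
    # minimizing it pushes the generations apart in feature space.
    z = F.normalize(feats, dim=-1)
    sim = z @ z.T
    off_diag = sim[~torch.eye(len(z), dtype=torch.bool, device=z.device)]
    return off_diag.mean()

def optimize_noise(noise, generate, embed, quality, steps=50, lr=0.05, lam=1.0):
    # generate/embed/quality are hypothetical stand-ins (see text above).
    # The sampler must be differentiable w.r.t. the noise for this to work.
    noise = noise.clone().requires_grad_(True)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        images = generate(noise)                       # (B, 3, H, W)
        feats = embed(images)                          # (B, D)
        loss = diversity_loss(feats) - lam * quality(images).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return noise.detach()

With a batch of at least two noises the diversity term is well defined; lam trades off per-image quality against batch-level diversity.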
Results generated with Flux.1 [schnell]
@article{harrington2025noisediv,
  author  = {Harrington, Anne and Koepke, A. Sophia and Karthik, Shyamgopal and Darrell, Trevor and Efros, Alexei A.},
  title   = {It's Never Too Late: Noise Optimization for Collapse Recovery in Trained Diffusion Models},
  journal = {arXiv preprint arXiv:2601.00090},
  year    = {2025},
}