GANPaint Studio

Semantic Photo Manipulation with a Generative Image Prior

David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, Antonio Torralba

GANPaint Studio is a starting point for showing how creative tools of the future could work. The tool takes a natural image of a specific category, e.g. churches or kitchens, and allows modifications with brushes that do not just draw simple strokes but actually draw semantically meaningful units, such as trees, brick texture, or domes. This is a joint project by researchers from MIT CSAIL, IBM Research, and the MIT-IBM Watson AI Lab. Enjoy playing with it.


Try the demo | Paper (SIGGRAPH 2019)

How does it work?

The core of GANPaint Studio is a generative adversarial network (GAN) that can produce its own images of a certain category, e.g. kitchen images. In previous work, we analyzed which internal units of the network are responsible for producing which features (the GANDissect project). This allowed us to modify images the network produced by "drawing" neurons.
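The idea of "drawing" neurons can be sketched as follows: a concept brush selects a handful of internal units (the ones dissection associates with a concept) and overwrites their activations inside the brushed region. The NumPy helper below is an illustrative stand-in, not the project's actual code; names like `paint_units` are made up for this sketch.

```python
import numpy as np

def paint_units(features, unit_ids, mask, value):
    """Overwrite the activations of chosen units inside a brushed region.

    features: (C, H, W) intermediate GAN feature map
    unit_ids: indices of units tied to a concept (e.g. via dissection)
    mask:     (H, W) boolean brush mask
    value:    activation to write (0 removes the concept,
              a high positive value draws it)
    """
    out = features.copy()
    for u in unit_ids:
        out[u][mask] = value
    return out

# toy example: 8 units on a 4x4 feature map, brush in the center
feats = np.zeros((8, 4, 4))
brush = np.zeros((4, 4), dtype=bool)
brush[1:3, 1:3] = True
edited = paint_units(feats, unit_ids=[2, 5], mask=brush, value=10.0)
```

Regenerating the image from `edited` instead of `feats` is what makes the brush stroke appear as a tree or a dome rather than a painted smudge.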

The novelty added for GANPaint Studio is that a natural image (of this category) can now be ingested and modified with semantic brushes that add or remove units such as trees, brick texture, or domes. The demo is currently low-resolution and not perfect, but it shows that something like this is possible.

Semantic Photo Manipulation with a Generative Image Prior

(to appear at SIGGRAPH 2019)

David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, Antonio Torralba.


Despite the recent success of GANs in synthesizing images conditioned on inputs such as a user sketch, text, or semantic labels, manipulating the high-level attributes of an existing natural photograph with GANs is challenging for two reasons. First, it is hard for GANs to precisely reproduce an input image. Second, after manipulation, the newly synthesized pixels often do not fit the original image. In this paper, we address these issues by adapting the image prior learned by GANs to image statistics of an individual image. Our method can accurately reconstruct the input image and synthesize new content, consistent with the appearance of the input image. We demonstrate our interactive system on several semantic image editing tasks, including synthesizing new objects consistent with background, removing unwanted objects, and changing the appearance of an object. Quantitative and qualitative comparisons against several existing methods demonstrate the effectiveness of our method.



To perform a semantic edit on an image x, we take three steps. (1) We first compute a latent vector z = E(x) representing x. (2) We then apply a semantic vector space operation ze = edit(z) in the latent space; this could add, remove, or alter a semantic concept in the image. (3) Finally, we regenerate the image from the modified ze. Unfortunately, as can be seen in (b), the input image x usually cannot be precisely reproduced by the generator G, so (c) using G to create the edited image xe = G(ze) will lose many attributes and details of the original image (a). Therefore, to generate the final image we propose a new last step: (d) we learn an image-specific generator G′ that produces x′e = G′(ze), faithful to the original image x in the unedited regions. Photo from the LSUN dataset.
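The pipeline in the caption can be mimicked end-to-end with toy stand-ins. Below, the "generator" is just a linear map W @ z, the "encoder" is its pseudo-inverse, and the image-specific adaptation of step (d) becomes a few gradient steps on the generator's weights so that G′ reconstructs x. All names and dimensions are illustrative, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real networks (illustrative only):
# the "generator" is a linear map G(z) = W @ z over 16 "pixels",
# and the "encoder" E(x) approximately inverts it via the pseudo-inverse.
W = rng.normal(size=(16, 4))

def G(z, weights=W):
    return weights @ z

def E(x):
    return np.linalg.pinv(W) @ x

x = rng.normal(size=16)          # (a) the input "image"
z = E(x)                         # (1) encode: z = E(x)
direction = rng.normal(size=4)   # a made-up semantic edit direction
ze = z + 0.5 * direction         # (2) edit in latent space: ze = edit(z)
xe_naive = G(ze)                 # (3) naive regeneration, loses detail (c)

# (d) adapt the generator to this specific image -> G'.
# Here: plain gradient descent on 0.5 * ||W' @ z - x||^2 over the weights.
Wp = W.copy()
for _ in range(1000):
    residual = Wp @ z - x
    Wp -= 0.05 * np.outer(residual, z)

xe = G(ze, weights=Wp)           # x'e = G'(ze): edited yet faithful

err_before = np.linalg.norm(G(z) - x)   # reconstruction error of G
err_after = np.linalg.norm(Wp @ z - x)  # reconstruction error of G'
print(err_before, err_after)
```

Because W maps a 4-dimensional z into 16 "pixels", the unadapted G can only reach a small subspace of images, mirroring panel (b); after the per-image fit, G′ reconstructs x far more closely, which is the point of step (d).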



@article{bau2019semantic,
  author  = {David Bau and Hendrik Strobelt and William Peebles and
             Jonas Wulff and Bolei Zhou and Jun{-}Yan Zhu and
             Antonio Torralba},
  title   = {Semantic Photo Manipulation with a Generative Image Prior},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH)},
  volume  = {38},
  number  = {4},
  year    = {2019},
}


Twitter: please use #ganpaint
Email: contact (at) ganpaint (dot) io