Hi, thanks for sharing the nice work and code!
I am running inference on the provided example images using the pretrained Places2 model. However, the generated images show noticeably inconsistent pixel contrast inside and outside the holes, which looks very different from the demo results (on the same images) in the paper. I also tried many other images and saw the same issue. Do you think I might be running into a bug, or do you apply any post-processing to smooth out the hole boundaries? I haven't changed anything in the code before running inference.
Looking forward to hearing from you. Thanks a lot!