patch_loss = np.sum(
    [self.MSE_loss(self.gram_matrix(self.get_patch(ly)), self.gram_matrix(self.get_patch(lp)))
     for ly, lp in zip(y_vgg, details_outputs_vgg)])
Why is there a NumPy function in the forward pass of the loss computation? `np.sum` detaches the result from the autograd graph, so no gradients flow through this loss. Switch to PyTorch ops and get rid of the for-loop.
Deep-Halftoning/utils/losses.py
Lines 136 to 138 in f968d28
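A minimal sketch of the fix, staying entirely in torch so the loss remains differentiable. Assumptions: `gram_matrix` is a standard normalized Gram matrix over VGG feature maps, `self.get_patch` is dropped for brevity, and the layer inputs may have different shapes, so the per-layer terms are stacked and summed (a list comprehension remains; if every patch shared one shape, the layers could be batched into a single tensor and a single `mse_loss` call, eliminating the loop entirely):

```python
import torch
import torch.nn.functional as F


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Normalized Gram matrix for a (B, C, H, W) feature map -> (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f.bmm(f.transpose(1, 2)) / (c * h * w)


def patch_gram_loss(y_vgg, details_outputs_vgg) -> torch.Tensor:
    # Each term stays a torch scalar; torch.stack(...).sum() replaces
    # np.sum, keeping the whole computation inside the autograd graph.
    losses = [
        F.mse_loss(gram_matrix(ly), gram_matrix(lp))
        for ly, lp in zip(y_vgg, details_outputs_vgg)
    ]
    return torch.stack(losses).sum()
```

Because the result is a `torch.Tensor` with `requires_grad=True` (when the inputs require grad), `loss.backward()` now propagates gradients through the Gram matrices, which the `np.sum` version silently broke.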