Conversation

@SourceryAI

Thanks for starring sourcery-ai/sourcery ✨ 🌟 ✨

Here's your pull request refactoring your most popular Python repo.

If you want Sourcery to refactor all your Python repos and incoming pull requests, install our bot.

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

git fetch https://github.com/sourcery-ai-bot/PyTorch-GAN master
git merge --ff-only FETCH_HEAD
git reset HEAD^
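
A brief note on what these commands do (assuming the bot's branch is exactly one commit ahead of master): the fetch plus --ff-only merge fast-forwards your local master onto the bot's refactoring commit, and git reset HEAD^ then drops that commit while keeping the refactored files in your working tree, so you can review the changes and commit them yourself.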

@SourceryAI left a comment

Due to GitHub API limits, only the first 60 comments can be shown.

  img_shape = (opt.channels, opt.img_size, opt.img_size)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 36-36 refactored with the following changes:
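
The change above (and its several repeats later in this review) follows one simple idea: a conditional expression that only maps an already-boolean test onto True/False is collapsed into a direct call. A minimal sketch of the same simplification, independent of the repo:

    import torch

    # Before: the conditional restates the test it is evaluating
    cuda = True if torch.cuda.is_available() else False

    # After: torch.cuda.is_available() already returns a bool, so the cast
    # is effectively a no-op that just makes the intent explicit
    cuda = bool(torch.cuda.is_available())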

  sampled_z = Variable(Tensor(np.random.normal(0, 1, (mu.size(0), opt.latent_dim))))
- z = sampled_z * std + mu
- return z
+ return sampled_z * std + mu

Function reparameterization refactored with the following changes:
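
This and most of the remaining hunks are instances of one pattern: a local variable that is assigned and immediately returned gets inlined into the return statement. A small illustrative sketch (the function and variable names here are made up, not taken from the repo):

    # Before: the temporary name exists only to be returned on the next line
    def scale_before(x, std, mu):
        z = x * std + mu
        return z

    # After: return the expression directly
    def scale_after(x, std, mu):
        return x * std + mu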

Comment on lines -66 to +65
- z = reparameterization(mu, logvar)
- return z
+ return reparameterization(mu, logvar)

Function Encoder.forward refactored with the following changes:

Comment on lines -86 to +84
- img = img_flat.view(img_flat.shape[0], *img_shape)
- return img
+ return img_flat.view(img_flat.shape[0], *img_shape)

Function Decoder.forward refactored with the following changes:

Comment on lines -104 to +101
- validity = self.model(z)
- return validity
+ return self.model(z)

Function Discriminator.forward refactored with the following changes:

  print(opt)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 39-39 refactored with the following changes:

  x = self.model(x)
- out = torch.cat((x, skip_input), 1)
- return out
+ return torch.cat((x, skip_input), 1)

Function UNetUp.forward refactored with the following changes:

  img_shape = (opt.channels, opt.img_size, opt.img_size)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 36-36 refactored with the following changes:

  d_in = torch.cat((img.view(img.size(0), -1), self.label_embedding(labels)), -1)
- validity = self.model(d_in)
- return validity
+ return self.model(d_in)

Function Discriminator.forward refactored with the following changes:

Comment on lines -108 to +116
  for m in net.modules():
-     if isinstance(m, nn.Conv2d):
-         m.weight.data.normal_(0, 0.02)
-         m.bias.data.zero_()
-     elif isinstance(m, nn.ConvTranspose2d):
-         m.weight.data.normal_(0, 0.02)
-         m.bias.data.zero_()
-     elif isinstance(m, nn.Linear):
+     if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
          m.weight.data.normal_(0, 0.02)
          m.bias.data.zero_()

Function initialize_weights refactored with the following changes:
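
The hunk above merges three isinstance branches with identical bodies into a single test against a tuple of classes, which isinstance accepts directly. A self-contained sketch of the resulting helper; behavior is unchanged only because all three original branches ran the same two lines:

    import torch.nn as nn

    def initialize_weights_sketch(net):
        # isinstance takes a tuple of types, so identical branches can be merged
        for m in net.modules():
            if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
                m.weight.data.normal_(0, 0.02)
                m.bias.data.zero_()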

Comment on lines -295 to +289
- # Get output
- validity = self.model(img)
- return validity
+ return self.model(img)

Function Discriminator_CNN.forward refactored with the following changes:

This removes the following comments (why?):

# Get output

Comment on lines -326 to +318
- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 326-562 refactored with the following changes:

  img_shape = (opt.channels, opt.img_size, opt.img_size)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 39-39 refactored with the following changes:

- if e.errno == errno.EEXIST:
-     pass
- else:
+ if e.errno != errno.EEXIST:

Function MNISTM.download refactored with the following changes:
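
The hunk above removes the empty pass branch by inverting the condition, so only the genuinely unexpected error propagates. A sketch of the surrounding idiom (the try/except and the raise are assumed from the usual makedirs pattern; they are not visible in the truncated diff):

    import errno
    import os

    def ensure_dir(path):
        try:
            os.makedirs(path)
        except OSError as e:
            # Before: if e.errno == errno.EEXIST: pass, else: raise
            # After: a single inverted guard that re-raises anything unexpected
            if e.errno != errno.EEXIST:
                raise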

  2. Save the folder 'img_align_celeba' to '../../data/'
  4. Run the sript using command 'python3 context_encoder.py'
  """


Lines 49-49 refactored with the following changes:

  img_shape = (opt.channels, opt.img_size, opt.img_size)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 35-35 refactored with the following changes:

  out = out.view(out.shape[0], 128, self.init_size, self.init_size)
- img = self.conv_blocks(out)
- return img
+ return self.conv_blocks(out)

Function Generator.forward refactored with the following changes:

Comment on lines -147 to +146
- loss_pt = (torch.sum(similarity) - batch_size) / (batch_size * (batch_size - 1))
- return loss_pt
+ return (torch.sum(similarity) - batch_size) / (batch_size * (batch_size - 1))

Function pullaway_loss refactored with the following changes:
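
For context, assuming similarity is the N x N matrix of pairwise cosine similarities between the generator embeddings, with ones on its diagonal (as in the EBGAN pull-away term), the returned value is simply the mean of the off-diagonal entries, since the diagonal contributes exactly N:

    f_{PT} = \frac{\operatorname{sum}(S) - N}{N(N-1)} = \frac{1}{N(N-1)} \sum_{i} \sum_{j \neq i} S_{ij}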

Comment on lines -106 to +109
- layers = []
- layers.append(nn.Conv2d(in_filters, out_filters, kernel_size=3, stride=1, padding=1))
+ layers = [
+     nn.Conv2d(in_filters, out_filters, kernel_size=3, stride=1, padding=1)
+ ]


Function Discriminator.__init__.discriminator_block refactored with the following changes:
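
The hunk above swaps an empty list plus an immediate append for a list literal. A minimal sketch of the pattern in isolation (the remaining layers of the real discriminator_block are omitted and the helper name is illustrative):

    import torch.nn as nn

    def discriminator_block_sketch(in_filters, out_filters):
        # Before:
        #   layers = []
        #   layers.append(nn.Conv2d(in_filters, out_filters, kernel_size=3, stride=1, padding=1))
        # After: initialise the list with a literal
        layers = [
            nn.Conv2d(in_filters, out_filters, kernel_size=3, stride=1, padding=1)
        ]
        return layers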

  img_shape = (opt.channels, opt.img_size, opt.img_size)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 35-35 refactored with the following changes:

- validity = self.model(img_flat)
-
- return validity
+ return self.model(img_flat)

Function Discriminator.forward refactored with the following changes:

  print(opt)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 38-38 refactored with the following changes:

  out = out.view(out.shape[0], 128, self.init_size, self.init_size)
- img = self.conv_blocks(out)
- return img
+ return self.conv_blocks(out)

Function Generator.forward refactored with the following changes:

  print(opt)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 33-33 refactored with the following changes:

  out = out.view(out.shape[0], 128, self.init_size, self.init_size)
- img = self.conv_blocks(out)
- return img
+ return self.conv_blocks(out)

Function Generator.forward refactored with the following changes:

Comment on lines -115 to +111
- validity = self.model(img)
-
- return validity
+ return self.model(img)

Function Discriminator.forward refactored with the following changes:

Comment on lines -141 to +135
- label = self.output_layer(feature_repr)
- return label
+ return self.output_layer(feature_repr)

Function Classifier.forward refactored with the following changes:

  out = out.view(out.shape[0], 128, self.init_size, self.init_size)
- img = self.conv_blocks(out)
- return img
+ return self.conv_blocks(out)

Function Generator.forward refactored with the following changes:

Comment on lines -89 to +88
- validity = self.adv_layer(out)
-
- return validity
+ return self.adv_layer(out)

Function Discriminator.forward refactored with the following changes:

  print(opt)

- cuda = True if torch.cuda.is_available() else False
+ cuda = bool(torch.cuda.is_available())

Lines 34-34 refactored with the following changes:
