Hi,
I was able to reproduce the results on the BraTS2018 data. However, when I train the model on another dataset with a different input size (20x256x256 or 32x256x256), I get an error in this part:
# Decoder
d_base_feat_size = [4, 5, 6]

# PatchExpand output
print(x.shape)           # torch.Size([1, 544, 1280])
print(D, H, W, B, L, C)  # 8 10 12 1 544 1280

x = x.view(B, D, H, W, C)
RuntimeError: shape '[1, 8, 10, 12, 1280]' is invalid for input of size 696320
How do you compute d_base_feat_size for an input size of (20x256x256 or 32x256x256)?
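For context on why the `view` fails: `x.view(B, D, H, W, C)` requires the token count `L` to equal `D * H * W`, but here `8 * 10 * 12 = 960` while the tensor only has `L = 544` tokens (`1 * 544 * 1280 = 696320`). A minimal sketch of how the per-stage grid, and hence something like `d_base_feat_size`, would follow from the input size; the `decoder_grid` helper and the downsampling factors are my own assumptions for illustration, not this repo's actual config:

```python
def decoder_grid(input_dhw, downsample_factors):
    """Feature-map grid (D, H, W) after a chain of downsampling stages.

    input_dhw: spatial size of the network input, e.g. (32, 256, 256).
    downsample_factors: assumed per-axis stride of each stage.
    """
    d, h, w = input_dhw
    for fd, fh, fw in downsample_factors:
        d, h, w = d // fd, h // fh, w // fw
    return d, h, w

# Hypothetical factors: a (2, 4, 4) patch embedding followed by one (2, 2, 2) merge.
D, H, W = decoder_grid((32, 256, 256), [(2, 4, 4), (2, 2, 2)])
print(D, H, W, D * H * W)  # the view only succeeds when L == D * H * W

# The traceback's grid (8, 10, 12) implies 960 tokens, not the 544 present:
assert 8 * 10 * 12 != 544
```

Whatever the real stage strides are, the grid used in `view` must be recomputed from the new input size so that its product matches `L`.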