
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

I am trying to train a PyTorch model (I am a beginner). I am using a U-Net model that takes images as input, and I feed in the labels as the input image masks and train on that dataset. The U-Net model was taken from elsewhere, and I am using cross-entropy loss as the loss function, but I get this dimension-out-of-range error.

RuntimeError                              Traceback (most recent call last)
<ipython-input-358-fa0ef49a43ae> in <module>()
     16 for Epoch in range(0, num_epochs):
     17     # train for one Epoch
---> 18     curr_loss = train(train_loader, model, criterion, Epoch, num_epochs)
     19 
     20     # store best loss and save a model checkpoint

<ipython-input-356-1bd6c6c281fb> in train(train_loader, model, criterion, Epoch, num_epochs)
     16         # measure loss
     17         print (outputs.size(),labels.size())
---> 18         loss = criterion(outputs, labels)
     19         losses.update(loss.data[0], images.size(0))
     20 

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-355-db66abcdb074> in forward(self, logits, targets)
      9         probs_flat = probs.view(-1)
     10         targets_flat = targets.view(-1)
---> 11         return self.crossEntropy_loss(probs_flat, targets_flat)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    599         _assert_no_grad(target)
    600         return F.cross_entropy(input, target, self.weight, self.size_average,
--> 601                                self.ignore_index, self.reduce)
    602 
    603 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
   1138         >>> loss.backward()
   1139     """
-> 1140     return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
   1141 
   1142 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
    784     if dim is None:
    785         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
--> 786     return torch._C._nn.log_softmax(input, dim)
    787 
    788 

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Part of my code looks like this:

class crossEntropy(nn.Module):
    def __init__(self, weight = None, size_average = True):
        super(crossEntropy, self).__init__()
        self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average)

    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.crossEntropy_loss(probs_flat, targets_flat)




class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu

        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)
        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 2, 1)

    def forward(self, x):
        block1 = self.conv_block1_64(x)
        pool1 = self.pool1(block1)

        block2 = self.conv_block64_128(pool1)
        pool2 = self.pool2(block2)

        block3 = self.conv_block128_256(pool2)
        pool3 = self.pool3(block3)

        block4 = self.conv_block256_512(pool3)
        pool4 = self.pool4(block4)

        block5 = self.conv_block512_1024(pool4)

        up1 = self.up_block1024_512(block5, block4)

        up2 = self.up_block512_256(up1, block3)

        up3 = self.up_block256_128(up2, block2)

        up4 = self.up_block128_64(up3, block1)

        return F.log_softmax(self.last(up4))

Any suggestions or hints would be really helpful.

Thanks in advance. Let me know if you need more code.

Ryan

According to your code:

probs_flat = probs.view(-1)
targets_flat = targets.view(-1)
return self.crossEntropy_loss(probs_flat, targets_flat)

You are giving two 1-D tensors to nn.CrossEntropyLoss, but according to the documentation, it expects:

Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.

I think that is the cause of the problem you are facing.
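
Not part of the original answer, but a minimal sketch of the expected shapes may help make this concrete. The sizes below (batch of 4, 2 classes, 8x8 masks) are placeholder assumptions, and option 1 relies on the spatial-target support available in recent PyTorch releases:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Assumed toy sizes: batch of 4, 2 classes, 8x8 segmentation masks.
N, C, H, W = 4, 2, 8, 8

logits = torch.randn(N, C, H, W)          # raw network outputs, no sigmoid/softmax
targets = torch.randint(0, C, (N, H, W))  # integer class index per pixel

# Option 1 (recent PyTorch): pass (N, C, H, W) logits and (N, H, W) targets directly.
loss = criterion(logits, targets)

# Option 2: flatten the spatial dimensions but keep the class dimension,
# so the input is (N*H*W, C) and the target is (N*H*W,) -- never both 1-D.
logits_flat = logits.permute(0, 2, 3, 1).reshape(-1, C)
targets_flat = targets.reshape(-1)
loss_flat = criterion(logits_flat, targets_flat)

Note also that nn.CrossEntropyLoss applies log_softmax internally, so it should be fed raw logits; the sigmoid applied in the posted crossEntropy.forward would not be needed with this loss.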

Wasi Ahmad