By default, Conv1d performs a two-dimensional convolution

Posted 2024-05-16 10:34:47


Most CNN guides explain a one-dimensional convolution as a bank of 1D kernels convolved with the input sequence (much like a classic FIR filter). However, as far as I can tell, Conv1d by default convolves across all input channels for each output channel (essentially a 2D convolution). If you want the classic FIR-filter behavior, you have to set groups equal to the number of channels.

Inspecting the weights seems to confirm this:

from torch import nn

C1 = nn.Conv1d(in_channels=3, out_channels=6, kernel_size=7)
C2 = nn.Conv1d(in_channels=3, out_channels=6, kernel_size=7, groups=3)
C3 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7)
C4 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=7, groups=3)

print(C1.weight.shape, '<-- 6 filters which convolve across two dimensions')
print(C2.weight.shape, '<-- 6 filters which convolve across one dimension')
print(C3.weight.shape, '<-- 6 filters which convolve across three dimensions')
print(C4.weight.shape, '<-- 6 filters which convolve across two dimensions')

which gives the following output:

torch.Size([6, 3, 7]) <-- 6 filters which convolve across two dimensions
torch.Size([6, 1, 7]) <-- 6 filters which convolve across one dimension
torch.Size([6, 3, 7, 7]) <-- 6 filters which convolve across three dimensions
torch.Size([6, 1, 7, 7]) <-- 6 filters which convolve across two dimensions
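The channel-summing behavior the shapes above suggest can be sketched in plain NumPy. This is a minimal illustration of Conv1d's default (groups=1) semantics under the stated assumptions of stride 1 and no padding, not PyTorch's actual implementation; the function name is made up for this example:

```python
import numpy as np

def conv1d_default(x, w):
    """Sketch of Conv1d with groups=1 (stride 1, no padding).
    x: (C_in, L) input, w: (C_out, C_in, K) weights -> (C_out, L - K + 1).
    Each output channel sums sliding dot products over ALL input
    channels, so every kernel effectively spans two dimensions."""
    c_out, c_in, k = w.shape
    length = x.shape[1] - k + 1
    out = np.zeros((c_out, length))
    for o in range(c_out):
        for t in range(length):
            # The window covers every channel at once: shape (C_in, K)
            out[o, t] = np.sum(x[:, t:t + k] * w[o])
    return out

x = np.random.randn(3, 20)    # 3 input channels, length 20
w = np.random.randn(6, 3, 7)  # matches C1.weight's shape [6, 3, 7]
print(conv1d_default(x, w).shape)  # (6, 14)
```

Note that each output sample depends on a (3, 7) window of the input, which is exactly why the question describes this as "basically a 2D convolution."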

Is my understanding wrong?

If the above is correct, I think Conv1d is rather confusingly named, since the name implies a 1D convolution.


1 Answer

A few things to consider:

1) Conv1d runs a convolution with one-dimensional (vector) kernels. C1 and C2 have kernels of size (7).

2) Conv2d runs a convolution with two-dimensional (matrix) kernels. C3 and C4 have kernels of size (7, 7).

3) groups is a way of controlling the connectivity between input and output channels; it effectively splits the layer into several independent convolutions that run side by side.
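The grouped case (3) can be sketched the same way. With groups equal to the number of input channels, each input channel is filtered independently, like a bank of FIR filters. Again, this is a hypothetical illustration under the assumptions of stride 1 and no padding, not PyTorch's implementation:

```python
import numpy as np

def conv1d_grouped(x, w, groups):
    """Sketch of Conv1d with groups > 1 (stride 1, no padding).
    x: (C_in, L), w: (C_out, C_in // groups, K) -> (C_out, L - K + 1).
    Channels are split into `groups` blocks; each output channel only
    sees the input channels of its own block."""
    c_out, c_in_per_group, k = w.shape
    out_per_group = c_out // groups
    length = x.shape[1] - k + 1
    out = np.zeros((c_out, length))
    for o in range(c_out):
        g = o // out_per_group  # which group this output belongs to
        xs = x[g * c_in_per_group:(g + 1) * c_in_per_group]
        for t in range(length):
            out[o, t] = np.sum(xs[:, t:t + k] * w[o])
    return out

# groups=3 with 3 input channels: each kernel is a plain 1D FIR filter
x = np.random.randn(3, 20)
w = np.random.randn(6, 1, 7)  # matches C2.weight's shape [6, 1, 7]
print(conv1d_grouped(x, w, groups=3).shape)  # (6, 14)
```

With in_channels=3, out_channels=6, groups=3, output channels come in pairs, and each pair is driven by exactly one input channel, which is the FIR-filter-bank behavior the question asks about.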

More information here.
