I am trying to implement the following kind of conv2d layer efficiently. I think my current implementation works, but it is very inefficient.
Input: a tensor of size
(batch_size x W x H x C_in)
Output: a tensor of size
(batch_size x W x H x C_out)
The layer takes two parameters: a number of units (C_) and a list of K conv kernels (known in advance). Each conv kernel has size (W, H, 1, N), where N is the number of output channels (the number of input channels is 1). Note that different kernels in the same list can have different Ns!
First, we apply a densely connected (trainable) layer to transform the input to shape
(batch_size x W x H x C_)
Then, I want to apply each conv kernel to each channel.
This results in C_ x K tensors of shape (batch_size x W x H x N).
Then, I want to take a max along N (giving (batch_size x W x H x 1)) and concatenate everything to get
(batch_size x W x H x C_out)
(so C_out = C_ x K)
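The steps above can be sketched on a toy example (the sizes here, C_ = 3 units and K = 2 random 3x3 kernels with N = 4 and N = 2, are made up for illustration):

```python
import numpy as np
import tensorflow as tf

x = tf.random.normal([2, 8, 8, 3])  # (batch_size, W, H, C_) after the dense layer
# Two fixed kernels with different output-channel counts N.
kernels = [np.random.randn(3, 3, 1, n).astype(np.float32) for n in (4, 2)]

pieces = []
for channel in tf.split(x, 3, axis=-1):          # each channel: (batch, W, H, 1)
    for k in kernels:
        conv = tf.nn.conv2d(channel, filters=k, strides=1, padding='SAME')
        # max over the kernel's N output channels -> (batch, W, H, 1)
        pieces.append(tf.reduce_max(conv, axis=3, keepdims=True))
out = tf.concat(pieces, axis=3)                  # (batch, W, H, C_ * K) = (2, 8, 8, 6)
```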
Here is one way to implement this, but training is extremely slow and it does not work on a GPU:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class fixedConvLayer(layers.Dense):
    def __init__(self, units, conv_kernels, **params):
        params['units'] = units
        self.conv_kernels_numpy = conv_kernels
        super().__init__(**params)

    def build(self, input_shape):
        super().build(input_shape)
        # Each fixed kernel is reshaped to (3, 3, 1, N); N may differ per kernel.
        self.conv_kernels = [tf.convert_to_tensor(np.reshape(kernel, [3, 3, 1, -1]))
                             for kernel in self.conv_kernels_numpy]

    def comp_filters(self, channel):
        # Apply every fixed kernel to one (batch, W, H, 1) channel, max-reduce
        # over each kernel's N output channels, and concatenate the K results.
        return tf.concat([
            tf.math.reduce_max(tf.nn.conv2d(channel,
                                            filters=kernel,
                                            strides=1,
                                            padding='SAME'),
                               axis=3, keepdims=True)
            for kernel in self.conv_kernels], axis=3)

    def call(self, inputs):
        # Taken from the Dense definition and slightly modified.
        inputs = tf.convert_to_tensor(inputs)
        if inputs.shape.rank != 4:
            raise ValueError('Rank expected to be 4')
        outputs = tf.tensordot(inputs, self.kernel, [[3], [0]])
        # Reshape the output back to the original ndim of the input.
        shape = inputs.shape.as_list()
        output_shape = shape[:-1] + [self.units]
        outputs.set_shape(output_shape)
        if self.use_bias:
            outputs = tf.nn.bias_add(outputs, self.bias)
        if self.activation is not None:
            outputs = self.activation(outputs)
        # Apply the fixed conv filters channel by channel.
        channel_list = tf.split(outputs, num_or_size_splits=self.units, axis=-1)
        return tf.concat([self.comp_filters(channel) for channel in channel_list],
                         axis=3)
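One way the per-channel Python loop might be avoided (a sketch under assumed toy sizes, not a tested drop-in for the layer above): fold the C_ channels into the batch dimension so every channel becomes a single-channel image, stack all K fixed kernels into one filter bank, run a single conv2d, then take the max within each kernel's N-slice.

```python
import numpy as np
import tensorflow as tf

# Hypothetical fixed kernels: K = 2 3x3 kernels with N = 4 and N = 2.
kernels = [np.random.randn(3, 3, 1, 4).astype(np.float32),
           np.random.randn(3, 3, 1, 2).astype(np.float32)]

batch, W, H, C = 8, 16, 16, 5            # C plays the role of the units C_
x = tf.random.normal([batch, W, H, C])   # output of the dense layer

# Fold channels into the batch: (batch, W, H, C) -> (batch * C, W, H, 1).
x_flat = tf.reshape(tf.transpose(x, [0, 3, 1, 2]), [batch * C, W, H, 1])

# One conv over the concatenated filter bank instead of C_ * K small convs.
bank = tf.concat(kernels, axis=3)        # (3, 3, 1, sum of N_k)
y = tf.nn.conv2d(x_flat, filters=bank, strides=1, padding='SAME')

# Max within each kernel's own N-slice -> (batch * C, W, H, K).
sizes = [k.shape[3] for k in kernels]
maxes = tf.concat([tf.reduce_max(s, axis=3, keepdims=True)
                   for s in tf.split(y, sizes, axis=3)], axis=3)

# Un-fold back to (batch, W, H, C_ * K).
out = tf.transpose(tf.reshape(maxes, [batch, C, W, H, len(kernels)]),
                   [0, 2, 3, 1, 4])
out = tf.reshape(out, [batch, W, H, C * len(kernels)])  # C_out = C_ * K = 10
```

Because everything stays inside a single conv2d call plus reshapes, this form should also place cleanly on a GPU, unlike the Python-level loop over channels.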