<h3>Short answer</h3>
<p>I think the simplest approach is to sample the pairs offline (i.e. outside the TensorFlow graph):
create a <code>tf.placeholder</code> for a batch of pairs along with their labels (positive or negative, i.e. same class or different class), and then compute the corresponding loss in TensorFlow.</p>
<hr/>
<h3>Code</h3>
<ol>
<li>You sample the pairs offline. You sample <code>batch_size</code> pairs of inputs, and output the left and right elements of the pairs, each of shape <code>[batch_size, input_size]</code>. You also output the labels of the pairs (positive or negative), of shape <code>[batch_size, 1]</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>pairs_left = np.zeros((batch_size, input_size))
pairs_right = np.zeros((batch_size, input_size))
labels = np.zeros((batch_size, 1)) # ex: [[0.], [1.], [1.], [0.]] for batch_size=4
</code></pre>
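<p>As a minimal sketch of this offline sampling step, assuming a dataset <code>X</code> of shape <code>[n, input_size]</code> with integer class labels <code>y</code> (both names are hypothetical, not from the answer above), one way to fill those arrays is to alternate positive and negative pairs:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def sample_pairs(X, y, batch_size, rng):
    """Sample batch_size pairs: even indices positive (same class), odd negative."""
    input_size = X.shape[1]
    pairs_left = np.zeros((batch_size, input_size))
    pairs_right = np.zeros((batch_size, input_size))
    labels = np.zeros((batch_size, 1))
    for k in range(batch_size):
        i = rng.randint(len(X))
        if k % 2 == 0:  # positive pair: second example from the same class
            candidates = np.where(y == y[i])[0]
            labels[k, 0] = 1.0
        else:           # negative pair: second example from a different class
            candidates = np.where(y != y[i])[0]
        j = rng.choice(candidates)
        pairs_left[k] = X[i]
        pairs_right[k] = X[j]
    return pairs_left, pairs_right, labels
</code></pre>
<p>Each call then yields one batch to feed into the graph; the label convention (1 = positive pair) matches the loss below, where the label multiplies the positive term.</p>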
<ol start="2">
<li>Then you create the TensorFlow placeholders corresponding to these inputs. In your code, you will feed the previous inputs to these placeholders in the <code>feed_dict</code> argument of <code>sess.run()</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>pairs_left_node = tf.placeholder(tf.float32, [batch_size, input_size])
pairs_right_node = tf.placeholder(tf.float32, [batch_size, input_size])
labels_node = tf.placeholder(tf.float32, [batch_size, 1])
</code></pre>
<ol start="3">
<li>Now we can perform a feedforward pass on the inputs (let's say your model is a linear model).</li>
</ol>
<pre class="lang-py prettyprint-override"><code>W = ... # shape [input_size, feature_size]
output_left = tf.matmul(pairs_left_node, W) # shape [batch_size, feature_size]
output_right = tf.matmul(pairs_right_node, W) # shape [batch_size, feature_size]
</code></pre>
<ol start="4">
<li>Finally, we can compute the pairwise loss.
<a href="https://i.stack.imgur.com/hz4O4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hz4O4.png" alt="Loss"/></a></li>
</ol>
<pre class="lang-py prettyprint-override"><code>margin = 1.0  # margin hyperparameter for negative pairs
# keep_dims=True keeps shape [batch_size, 1], so it broadcasts correctly with labels_node
l2_loss_pairs = tf.reduce_sum(tf.square(output_left - output_right), 1, keep_dims=True)
positive_loss = l2_loss_pairs
negative_loss = tf.nn.relu(margin - l2_loss_pairs)
final_loss = tf.multiply(labels_node, positive_loss) + tf.multiply(1. - labels_node, negative_loss)
</code></pre>
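<p>To sanity-check the loss formula outside TensorFlow, here is a small NumPy equivalent (a sketch, assuming <code>margin = 1.0</code>): a positive pair is penalized by its squared distance, while a negative pair is penalized only when it falls inside the margin.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

margin = 1.0
output_left = np.array([[0.0, 0.0], [0.0, 0.0]])
output_right = np.array([[0.1, 0.0], [2.0, 0.0]])
labels = np.array([[1.0], [0.0]])  # first pair positive, second negative

d2 = np.sum((output_left - output_right) ** 2, axis=1, keepdims=True)
positive_loss = d2
negative_loss = np.maximum(0.0, margin - d2)
final_loss = labels * positive_loss + (1.0 - labels) * negative_loss
# close positive pair -&gt; small loss; negative pair beyond the margin -&gt; zero loss
</code></pre>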
<hr/>
<p>That's it! You can now optimize this loss with good offline sampling.</p>