How can I skip erroneous elements at the IO level in Apache Beam with Dataflow?


I am running some analysis over TFRecords stored on GCP, but some of the TFRecords inside the files are corrupted, so when I run the pipeline and more than four errors occur, it breaks due to this. I think this is a constraint of the DataflowRunner rather than of Beam itself.

Here is my processing script:

import argparse
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.metrics.metric import Metrics

from apache_beam.runners.direct import direct_runner
import tensorflow as tf

input_ = "path_to_bucket"


def _parse_example(serialized_example):
  """Return inputs and targets Tensors from a serialized tf.Example."""
  data_fields = {
      "inputs": tf.io.VarLenFeature(tf.int64),
      "targets": tf.io.VarLenFeature(tf.int64)
  }
  parsed = tf.io.parse_single_example(serialized_example, data_fields)
  inputs = tf.sparse.to_dense(parsed["inputs"])
  targets = tf.sparse.to_dense(parsed["targets"])
  return inputs, targets


class MyFnDo(beam.DoFn):

  def __init__(self):
    beam.DoFn.__init__(self)
    self.input_tokens = Metrics.distribution(self.__class__, 'input_tokens')
    self.output_tokens = Metrics.distribution(self.__class__, 'output_tokens')
    self.num_examples = Metrics.counter(self.__class__, 'num_examples')
    self.decode_errors = Metrics.counter(self.__class__, 'decode_errors')

  def process(self, element):
    # inputs = element.features.feature['inputs'].int64_list.value
    # outputs = element.features.feature['outputs'].int64_list.value
    try:
      inputs, outputs = _parse_example(element)
      self.input_tokens.update(len(inputs))
      self.output_tokens.update(len(outputs))
      self.num_examples.inc()
    except Exception:
      self.decode_errors.inc()



def main(argv):
  parser = argparse.ArgumentParser()
  parser.add_argument('--input', dest='input', default=input_, help='input tfrecords')
  # parser.add_argument('--output', dest='output', default='gs://', help='output file')

  known_args, pipeline_args = parser.parse_known_args(argv)
  pipeline_options = PipelineOptions(pipeline_args)

  with beam.Pipeline(options=pipeline_options) as p:
    tfrecords = p | "Read TFRecords" >> beam.io.ReadFromTFRecord(known_args.input,
                                                                 coder=beam.coders.ProtoCoder(tf.train.Example))
    tfrecords | "count mean" >> beam.ParDo(MyFnDo())


if __name__ == '__main__':
  main(None)

So, basically, how can I skip the corrupted TFRecords during the analysis and log how many of them there were?


1 Answer

There was a conceptual problem: beam.io.ReadFromTFRecord reads a single TFRecord dataset (which may be sharded across multiple files), whereas I was giving it a list of many individual TFRecord files, and that is what caused the errors. Switching from ReadFromTFRecord to ReadAllFromTFRecord solved my problem (the snippet below reuses the imports and MyFnDo from the script above):

p = beam.Pipeline(runner=direct_runner.DirectRunner())
tfrecords = (p
             | beam.Create(tf.io.gfile.glob(input_))
             | beam.io.ReadAllFromTFRecord(coder=beam.coders.ProtoCoder(tf.train.Example)))
tfrecords | beam.ParDo(MyFnDo())
p.run().wait_until_finish()
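
To also log how many records failed to decode, the decode_errors counter declared in MyFnDo can be queried from the PipelineResult once the run finishes. A minimal sketch (not part of the original answer), assuming the script above with the final p.run().wait_until_finish() replaced by the lines below:

from apache_beam.metrics.metric import MetricsFilter

result = p.run()
result.wait_until_finish()
# Print the committed value of the user counter declared in MyFnDo.
for counter in result.metrics().query(MetricsFilter().with_name('decode_errors'))['counters']:
  print('decode_errors:', counter.result)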
