TensorFlow (Flask + Python): how to convert a symbolic tensor to an ndarray

Published 2024-04-20 14:02:28


I'm a beginner with TensorFlow (using version 1.15.x) and Flask.

I have built my object detector (using the Object Detection API in TensorFlow) and exported the inference_graph locally from all the checkpoint files.

Now I want to start a Flask API in which I use the request.files.getlist function and run inference in the main script (the process is similar to this project, where the main script is app.py).

One difference between my approach and the linked one is that I am not using YOLO, and I try to define all the necessary variables in the main script. Here is my code:

#list of imported packages (..)

# customize your API through the following parameters
MODEL_NAME = './inference_graph'  # directory containing the frozen graph (object detector)

# Path to frozen detection graph .pb file, which contains the model that is used for object detection.
PATH_TO_CKPT = os.path.join(MODEL_NAME,'frozen_inference_graph.pb')
# Path to label map file
PATH_TO_LABELS = './training/labelmap.pbtxt'
# Path to test image folder (here i upload a test set folder into object det folder)
PATH_TEST_IMAGE = './Test_Folder_Inference'  # sample folder to test (with 4 images) the API
# Number of classes the object detector can identify
NUM_CLASSES = 1

# load LABEL MAP vars
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# load the TF model into "memory"
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:  # 'rb' =read binary
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

inf_sess = tf.Session(graph=detection_graph)  # initialize the session that runs the graph operations

# Define input and output Tensors (variables) for the graph (detection_graph)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
# So output tensors are the detection boxes, scores and classes
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Initialize Flask application
app = Flask(__name__)


# API that returns JSON with classes found in images
@app.route('/detections', methods=['POST'])  # app route/endpoint + spec the method [POST in this case]
def get_detections():  # define the function
    raw_images = []  # create a list to store/append the req. images
    images = request.files.getlist('images')  # Flask request object: list of uploaded files
    image_names = []
    print(len(images))  # just a check to see if images are 'processed' within the request.files.getlist
    for image in images:
        image_name = image.filename
        image_names.append(image_name)
        image.save(os.path.join(PATH_TEST_IMAGE, image_name))
        img_raw = tf.image.decode_image(
            open(image_name, 'rb').read(), channels=3)  # decoding of file/img
        raw_images.append(img_raw)  # append (final list)

    num = 0

    # create list for final response
    response = []

    for j in range(len(image_names)):  # could add a print of the len here (to check whether it is empty)
        # create list of responses for current image

        raw_img = raw_images[j] #here, every single raw_img is a "tensor"
        num += 1

        image_expanded = np.expand_dims(raw_img, axis=0)  # expand the batch dim/shape to (1, H, W, 3)
        # Perform the actual detection by running the model with image_expanded as input (boxes can be left unused)
        (boxes, scores, classes, num) = inf_sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: image_expanded})

# ... and then the script continues (the only issue is above)

When I run this script (using curl from the command prompt), it returns the following error:

Cannot convert a symbolic Tensor (decode_image/cond_jpeg/Merge:0) to a numpy array

I tried to force/convert image_expanded to an np.array, but it does not work (I tried several combinations and always get a similar error).

How can I convert this symbolic tensor to an ndarray?


1 Answer

Forum user · #1 · Posted 2024-04-20 14:02:28

I have found a solution to this problem (in my case).

Basically, I changed how the image is read, using cv2.imread() (which returns a NumPy ndarray directly) instead of tf.image.decode_image() (which, in TF 1.x graph mode, only builds a symbolic tensor), and then expanded the image so that it has four dimensions (shape = (1, height, width, 3)):

for image in images:
    image_name = image.filename  # takes the filename
    image_names.append(image_name)  # append

    #### new lines added/changed ####
    image_cv = cv2.imread(os.path.join(PATH_TEST_IMAGE, image_name))

    image_cv = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)

    image_expanded = np.expand_dims(image_cv, axis=0)
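
For completeness, here is a minimal sketch (not part of the original answer) of how this cv2-based reading step can feed the session and tensors defined in the question's code. The helper name run_inference_on_files and the results list are purely illustrative; inf_sess, image_tensor, the output tensors and PATH_TEST_IMAGE are assumed to be defined exactly as in the question.

# Sketch only: assumes the graph, session and tensors from the question are already defined
def run_inference_on_files(image_names):
    results = []
    for image_name in image_names:
        # cv2.imread returns a plain NumPy ndarray of shape (H, W, 3) in BGR order
        image_cv = cv2.imread(os.path.join(PATH_TEST_IMAGE, image_name))
        image_cv = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)
        # add the batch dimension -> shape (1, H, W, 3)
        image_expanded = np.expand_dims(image_cv, axis=0)
        # image_expanded is already a concrete ndarray, so feed_dict accepts it directly
        (boxes, scores, classes, n) = inf_sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: image_expanded})
        results.append((boxes, scores, classes, int(n[0])))
    return results

The key point is that feed_dict expects concrete NumPy arrays, which is exactly what cv2.imread returns, whereas tf.image.decode_image only adds a decoding operation to the TF 1.x graph and therefore yields a symbolic tensor.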
