How can I make sure my code uses the full capacity of the GPU?

Posted 2024-04-26 00:35:20


I am training a ResNet-50 network on a large dataset. When I check my GPU utilization, I see it fluctuating between 0% and 4%, even though I am using tensorflow-gpu. Here is my CPU and GPU usage:

[screenshot of CPU and GPU utilization]

When I run these two lines:

 from tensorflow.python.client import device_lib
 print(device_lib.list_local_devices())

I get:

 [name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
  }
 incarnation: 4622338339054789933
 , name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 13594420839
 locality {
 bus_id: 1
 links {
 }
 }
 incarnation: 17927686236275886371
 physical_device_desc: "device: 0, name: Quadro P5000, pci bus id: 0000:01:00.0, compute capability: 6.1"
 ]
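For reference, here is a minimal sketch (assuming the TF 1.x graph-mode API, which matches the device_lib call above) that logs device placement, which should confirm whether ops actually land on /device:GPU:0:

 import tensorflow as tf

 # Ask TensorFlow to print the device each op is assigned to (TF 1.x API).
 config = tf.ConfigProto(log_device_placement=True)
 sess = tf.Session(config=config)

 a = tf.constant([1.0, 2.0, 3.0], name='a')
 b = tf.constant([4.0, 5.0, 6.0], name='b')
 print(sess.run(a + b))  # the console log should show the add op on /device:GPU:0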

When I run nvidia-smi I get:

[screenshot of nvidia-smi output]
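To watch utilization over time rather than a single snapshot, one option (an assumption on my part, not something already in my code) is to poll the same counters nvidia-smi reports with the pynvml package:

 import time
 import pynvml

 pynvml.nvmlInit()
 handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0 (the Quadro P5000)

 # Sample GPU and memory-controller utilization once per second.
 for _ in range(10):
     util = pynvml.nvmlDeviceGetUtilizationRates(handle)
     print("GPU: %d%%  memory: %d%%" % (util.gpu, util.memory))
     time.sleep(1)

 pynvml.nvmlShutdown()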

Could anyone give me a simple explanation of how to use my GPU properly and fully? I should mention that during training I use an ImageDataGenerator object with its flow_from_directory method and train with fit_generator, so I can set specific parameters, for example the workers argument, to increase my GPU usage. A sketch of what I have in mind follows the code. Here is how I use ImageDataGenerator:

  from keras.preprocessing.image import ImageDataGenerator  # or tensorflow.keras
  from keras.callbacks import ModelCheckpoint
  import resnet  # local module providing ResnetBuilder

  input_imgen = ImageDataGenerator()

  train_it = input_imgen.flow_from_directory(directory=data_path_l,
                                             target_size=(224, 224),
                                             color_mode="rgb",
                                             batch_size=batch_size,
                                             class_mode="categorical",
                                             shuffle=False)

  valid_it = input_imgen.flow_from_directory(directory=test_data_path_l,
                                             target_size=(224, 224),
                                             color_mode="rgb",
                                             batch_size=batch_size,
                                             class_mode="categorical",
                                             shuffle=False)

  model = resnet.ResnetBuilder.build_resnet_50((img_channels, img_rows, img_cols),
                                                num_classes)
  model.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])

  # Raw string so the backslashes in the Windows path are not treated as escapes
  filepath = r".\conv2D_models\weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"

  mc = ModelCheckpoint(filepath, save_weights_only=False, verbose=1,
                       monitor='loss', mode='min')

  history = model.fit_generator(train_it,
                                steps_per_epoch=train_images // batch_size,
                                validation_data=valid_it,
                                validation_steps=val_images // batch_size,
                                epochs=epochs,
                                callbacks=[mc],
                                shuffle=False)
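The change I am considering for the workers parameter looks like the sketch below: passing workers, use_multiprocessing and max_queue_size to fit_generator so batches are prepared in parallel while the GPU computes. The values (workers=4, max_queue_size=20) are illustrative assumptions, not tuned numbers, and I am not sure this is the right way to keep the GPU busy:

  # Illustrative only: prepare batches in parallel worker processes so the
  # GPU is not idle waiting for data. workers=4 and max_queue_size=20 are
  # assumed values, not measured optima.
  history = model.fit_generator(train_it,
                                steps_per_epoch=train_images // batch_size,
                                validation_data=valid_it,
                                validation_steps=val_images // batch_size,
                                epochs=epochs,
                                callbacks=[mc],
                                workers=4,
                                use_multiprocessing=True,
                                max_queue_size=20)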
