How to fail the application when too many workers have failed (Dask-YARN)

Posted on 2024-06-06 13:03:04


I'm running a Dask (1.2) application with Dask-YARN (0.6.0) on an EMR cluster. Today I ran into a situation where my workers kept failing (due to an HDFS error) and skein.ApplicationMaster kept creating new workers to replace them. Is there a way to instruct Dask-YARN to fail the application once too many workers have failed?

Specifically, my application master log looks like this:

19/06/21 16:00:27 INFO skein.ApplicationMaster: RESTARTING: adding new container to replace dask.worker_805.
19/06/21 16:00:27 INFO skein.ApplicationMaster: REQUESTED: dask.worker_806
19/06/21 16:00:27 WARN skein.ApplicationMaster: FAILED: dask.worker_804 - Could not obtain block: BP-1234110000-10.174.17.184-1561122672601:blk_1073741831_1007 file=/user/hadoop/.skein/application_1561122685021_0003/FED3ABF369AAE224B4BB8A3A77120E1C/cached_volume.sqlite3
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1234110000-10.174.17.184-1561122672601:blk_1073741831_1007 file=/user/hadoop/.skein/application_1561122685021_0003/FED3ABF369AAE224B4BB8A3A77120E1C/cached_volume.sqlite3
    at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:267)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

(the above repeats indefinitely)


Tags: run, org, hadoop, apache, util, hdfs, java, concurrent
1 Answer
Anonymous user
#1 · Posted on 2024-06-06 13:03:04

If you're using the main constructor, you can set the maximum number of worker restarts with the worker_restarts kwarg:

from dask_yarn import YarnCluster

# Allow a maximum of 3 worker restarts before failing the application
cluster = YarnCluster(worker_restarts=3, ...)
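
For reference, here is a minimal end-to-end sketch of how such a cluster is typically wired up, assuming the standard dask_yarn and dask.distributed APIs; the worker resource values and the environment path are illustrative placeholders, not part of the original answer:

from dask_yarn import YarnCluster
from dask.distributed import Client

# Ship a packaged Python environment to the workers (placeholder path)
# and fail the whole application after 3 worker restarts
cluster = YarnCluster(
    environment="/path/to/my/environment.tar.gz",
    worker_vcores=2,
    worker_memory="4 GiB",
    worker_restarts=3,
)

cluster.scale(4)          # request 4 workers from YARN
client = Client(cluster)  # run Dask computations through this client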

Alternatively, if you're using a custom specification, you can set the maximum number of allowed restarts with max_restarts:

# /path/to/spec.yaml
name: dask
queue: myqueue

services:
  dask.worker:
    # Don't start any workers initially
    instances: 0
    # Allow a maximum of 3 worker restarts before the application fails
    max_restarts: 3
    # Restrict workers to 4 GiB and 2 cores each
    resources:
      memory: 4 GiB
      vcores: 2
    # Distribute this python environment to every worker node
    files:
      environment: /path/to/my/environment.tar.gz
    # The bash script to start the worker
    # Here we activate the environment, then start the worker
    script: |
      source environment/bin/activate
      dask-yarn services worker
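
To use such a custom specification, the cluster can (as far as I know) be started directly from the YAML file with YarnCluster.from_specification; since the spec above sets instances: 0, workers only come up after an explicit scale call. The spec path below is a placeholder:

from dask_yarn import YarnCluster
from dask.distributed import Client

# Launch the application from the custom spec above;
# no workers start until scale() is called (instances: 0)
cluster = YarnCluster.from_specification("/path/to/spec.yaml")
cluster.scale(4)
client = Client(cluster)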
