Django/Celery - celery status: Error: No nodes replied within time constraint

Posted 2024-05-12 13:48:57


I'm trying to deploy a simple Celery example on my production server. I followed the tutorial on the Celery site about running celeryd as a daemon (http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing), and this is my /etc/default/celeryd:

# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"

# Where to chdir at start.
CELERYD_CHDIR="/home/audiwime/cidec_sw"

# Python interpreter from environment.
ENV_PYTHON="/usr/bin/python26"

# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"

# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"

# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
CELERYD_USER="audiwime"
CELERYD_GROUP="audiwime"

export DJANGO_SETTINGS_MODULE="cidec_sw.settings"
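As a side note on the %n placeholder in the config above: celeryd_multi substitutes each node's name into the log and pid file templates, so node "w1" gets its own files. A minimal sketch of that substitution (illustrative only, not Celery's actual code):

```python
# Illustration of how %n in CELERYD_LOG_FILE / CELERYD_PID_FILE expands:
# node "w1" logs to /var/log/celery/w1.log and records its pid in
# /var/run/celery/w1.pid.

def expand_node_path(template, nodename):
    """Substitute the %n placeholder with the node name."""
    return template.replace("%n", nodename)

print(expand_node_path("/var/log/celery/%n.log", "w1"))  # /var/log/celery/w1.log
print(expand_node_path("/var/run/celery/%n.pid", "w1"))  # /var/run/celery/w1.pid
```

This is why the log file to check for node w1 is /var/log/celery/w1.log specifically.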

But if I run

celery status

in the terminal, I get this response:

Error: No nodes replied within time constraint

I can restart celery via the celeryd init script provided at https://github.com/celery/celery/tree/3.0/extra/generic-init.d/:

/etc/init.d/celeryd restart
celeryd-multi v3.0.12 (Chiastic Slide)
> w1.one.cloudwime.com: DOWN
> Restarting node w1.one.cloudwime.com: OK

I can run

python26 manage.py celeryd -l info

and my tasks in Django run fine, but if I let the daemon do the work, I get no results and not even an error in /var/log/celery/w1.log.

I know my tasks are registered, because I did this:

from celery import current_app
from django.http import HttpResponse
from tareas.tasks import run

def call_celery_delay(request):
    print current_app.tasks  # dump the registered-task dictionary
    run.delay(request.GET['age'])
    return HttpResponse(content="celery task set", content_type="text/html")

and I get a dictionary in which my tasks appear:

{'celery.chain': <@task: celery.chain>, 'celery.chunks': <@task: celery.chunks>, 'celery.chord': <@task: celery.chord>, 'tasks.add2': <@task: tasks.add2>, 'celery.chord_unlock': <@task: celery.chord_unlock>, **'tareas.tasks.run': <@task: tareas.tasks.run>**, 'tareas.tasks.add': <@task: tareas.tasks.add>, 'tareas.tasks.test_two_minute': <@task: tareas.tasks.test_two_minute>, 'celery.backend_cleanup': <@task: celery.backend_cleanup>, 'celery.map': <@task: celery.map>, 'celery.group': <@task: celery.group>, 'tareas.tasks.test_one_minute': <@task: tareas.tasks.test_one_minute>, 'celery.starmap': <@task: celery.starmap>}

But beyond that I get nothing: no results from my tasks, no errors in the log, nothing. Can anyone tell me what's wrong? You're my only hope...


3 Answers

I solved my problem. It was a very simple, if strange, fix. What I did was:

$ /etc/init.d/celerybeat restart
$ /etc/init.d/celeryd restart
$ service celeryd restart

I had to do it in that order; otherwise I got the ugly error: No nodes replied within time constraint.

This happens when the celery daemon fails to start; that is one possible cause. Try restarting it with python manage.py celeryd --loglevel=INFO

Use the following command to find the problem:

C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

It is usually caused by a problem in the project itself (a permission issue, a syntax error, etc.).

As the Celery docs state:

If the worker starts with “OK” but exits almost immediately afterwards and there is nothing in the log file, then there is probably an error but as the daemons standard outputs are already closed you’ll not be able to see them anywhere. For this situation you can use the C_FAKEFORK environment variable to skip the daemonization step
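The whole point of the `C_FAKEFORK=1 sh -x ...` invocation above is that the variable must be present in the environment of the started command so the worker skips daemonizing and keeps its output attached. A minimal stdlib sketch of that env-var passing (assuming a POSIX `sh`; the echo command is a stand-in for the init script):

```python
import os
import subprocess

# Run a child with C_FAKEFORK=1 in its environment, the same way
# `C_FAKEFORK=1 sh -x /etc/init.d/celeryd start` does at the shell prompt:
env = dict(os.environ, C_FAKEFORK="1")
out = subprocess.check_output(["sh", "-c", "echo $C_FAKEFORK"], env=env)
print(out.strip())  # the child process saw the variable
```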

Good luck.

Source: Celery Docs
