What is the "Lookup timed out" error with Celery + RabbitMQ + Docker, and how can I fix it?

Asked 2025-04-14 17:02

I have a FastAPI application that also uses Celery and RabbitMQ, all running in Docker (via Compose). Here is my Compose configuration:

services:
  scraper:
    build:
      context: fastapi-scrapers
      dockerfile: Dockerfile
    cpus: "0.7"
    # ports:
    #   - '8000:8000'
    environment:
      - TIMEOUT="120"
      - WEB_CONCURRENCY=2
    networks:
      - scrape-net
    volumes:
      - ../images/:/app/images/:rw
    extra_hosts:
      - "host.docker.internal:host-gateway"

  flower:
    image: mher/flower
    ports:
      - '5555:5555'
    environment:
      - CELERY_BROKER_URL=amqp://admin:pass@rabbitmq:5672/
      # - CELERY_BROKER_URL=redis://redis:6379/0
      - FLOWER_BASIC_AUTH=admin:pass
    depends_on:
      - scraper
    networks:
      - scrape-net

  rabbitmq:
    image: "rabbitmq:latest"
    # ports:
      # - '5672:5672'
      # - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    networks:
      - scrape-net
    extra_hosts:
      - "host.docker.internal:host-gateway"

networks:
  scrape-net:
    driver: bridge
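One hardening step worth considering (a sketch, not necessarily the cause here): have the scraper wait until RabbitMQ reports healthy, so the worker does not race the broker at startup. Assuming the Compose file above, the two services could be amended like this:

```yaml
  scraper:
    # ...existing settings from above...
    depends_on:
      rabbitmq:
        condition: service_healthy

  rabbitmq:
    # ...existing settings from above...
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
```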

Here is the Dockerfile for the FastAPI application:

FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY ./app /code/app

# CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
CMD ["bash", "-c", "celery -A app.celery.tasks worker --loglevel=info --concurrency=8 -E -P eventlet & uvicorn app.main:app --host 0.0.0.0 --port 8000"]
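A side note on that CMD: backgrounding the worker with `&` means the container only supervises uvicorn, so a crashed worker goes unnoticed. One common alternative (a sketch, assuming the same image and Compose file from the question) is to run the worker as a second Compose service that overrides the command:

```yaml
  worker:
    build:
      context: fastapi-scrapers
      dockerfile: Dockerfile
    command: celery -A app.celery.tasks worker --loglevel=info --concurrency=8 -E -P eventlet
    networks:
      - scrape-net
```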

Here is the Celery code in the application:

from celery import Celery

celery_app = Celery('tasks', broker='amqp://admin:pass@rabbitmq:5672/')

celery_app.conf.update(
    CELERY_RESULT_EXPIRES=3600,
    CELERY_AMQP_TASK_RESULT_EXPIRES=3600
)

Logs from the FastAPI application:

site-scraper-1  | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
site-scraper-1  | INFO:     Started parent process [8]
site-scraper-1  | INFO:     Started server process [11]
site-scraper-1  | INFO:     Waiting for application startup.
site-scraper-1  | INFO:     Application startup complete.
site-scraper-1  | INFO:     Started server process [10]
site-scraper-1  | INFO:     Waiting for application startup.
site-scraper-1  | INFO:     Application startup complete.
site-scraper-1  | /usr/local/lib/python3.9/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
site-scraper-1  | absolutely not recommended!
site-scraper-1  | 
site-scraper-1  | Please specify a different user using the --uid option.
site-scraper-1  | 
site-scraper-1  | User information: uid=0 euid=0 gid=0 egid=0
site-scraper-1  | 
site-scraper-1  |   warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
site-scraper-1  |  
site-scraper-1  |  -------------- celery@d4dec9482220 v5.3.6 (emerald-rush)
site-scraper-1  | --- ***** ----- 
site-scraper-1  | -- ******* ---- Linux-5.10.0-26-amd64-x86_64-with-glibc2.36 2024-03-10 22:26:52
site-scraper-1  | - *** --- * --- 
site-scraper-1  | - ** ---------- [config]
site-scraper-1  | - ** ---------- .> app:         tasks:0x7fd3a198f2e0
site-scraper-1  | - ** ---------- .> transport:   amqp://admin:**@rabbitmq:5672//
site-scraper-1  | - ** ---------- .> results:     disabled://
site-scraper-1  | - *** --- * --- .> concurrency: 8 (eventlet)
site-scraper-1  | -- ******* ---- .> task events: ON
site-scraper-1  | --- ***** ----- 
site-scraper-1  |  -------------- [queues]
site-scraper-1  |                 .> celery           exchange=celery(direct) key=celery
site-scraper-1  |                 
site-scraper-1  | 
site-scraper-1  | [tasks]
site-scraper-1  |   . app.celery.tasks.start_scrape
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:26:52,909: WARNING/MainProcess] /usr/local/lib/python3.9/site-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
site-scraper-1  | whether broker connection retries are made during startup in Celery 6.0 and above.
site-scraper-1  | If you wish to retain the existing behavior for retrying connections on startup,
site-scraper-1  | you should set broker_connection_retry_on_startup to True.
site-scraper-1  |   warnings.warn(
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:27:13,338: ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbitmq:5672//: [Errno -3] Lookup timed out.
site-scraper-1  | Trying again in 2.00 seconds... (1/100)
site-scraper-1  | 
site-scraper-1  | [2024-03-10 22:27:35,769: ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbitmq:5672//: [Errno -3] Lookup timed out.
site-scraper-1  | Trying again in 4.00 seconds... (2/100)

This setup ran fine for months, but now the FastAPI application is failing. The RabbitMQ logs look fine: the user is created and the RabbitMQ instance starts normally. The Flower container connects to the RabbitMQ container without issues; only the FastAPI container has the problem.

I tried using localhost instead of rabbitmq in the AMQP URL, without success. host.docker.internal did not work either. I also updated the Python app's dependencies to the latest versions, which changed nothing. I tried Redis as the message broker, but got the same timeout error, so I suspect the problem lies in my Python application.
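Worth noting: `[Errno -3] Lookup timed out` comes from `getaddrinfo`, i.e. the container cannot resolve the hostname at all (a DNS failure), rather than a refused connection. A quick stdlib check like the sketch below, run from inside the failing container (the hostname is the Compose service name from the question), can narrow this down:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if this environment's DNS can resolve `host`."""
    try:
        socket.getaddrinfo(host, 5672)
        return True
    except socket.gaierror:
        return False

# Inside the scraper container, the Compose service name should resolve.
print(can_resolve("rabbitmq"))
```

If this returns False inside the container, the problem is Docker's embedded DNS (or the container's resolver), not Celery itself.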

1 Answer


It turned out this problem was fixed in a newer version of the Python Docker image, so I updated:

FROM python:3.9

to

FROM python:latest

and now everything works again.
