Scrapyrt Docker deployment to Heroku not working as expected

Published 2024-06-06 08:31:05


I'm fairly new to Docker and Heroku, and I can't get my Docker container running on Heroku. I know the scope may seem broad, but I'll do my best to explain myself (bear with me), and you'll see that my problem comes down to a single-line statement.

What I want to do

Host a simple RESTful scraper using scrapyrt (scrapyrt already has a Docker image on Docker Hub, here)

What I have done

1. Created a Dockerfile

   //Dockerfile

   FROM scrapinghub/scrapyrt as sunflower
   COPY . /scrapyrt/project
   EXPOSE $PORT
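A side note on that last line: as far as I understand, `EXPOSE` only documents a port and does not change which port scrapyrt actually listens on, while a Heroku dyno assigns a random `$PORT` at run time. A hypothetical variant of the Dockerfile that resets the base image's ENTRYPOINT and starts scrapyrt on `$PORT` (shell-form `CMD` so the variable expands; the `-p` flag appears in scrapyrt's own usage line) might look like this — a sketch only, assuming the ENTRYPOINT can simply be reset:

```dockerfile
FROM scrapinghub/scrapyrt as sunflower
COPY . /scrapyrt/project
# Reset the base image's ENTRYPOINT ["scrapyrt", "-i", "0.0.0.0"]
ENTRYPOINT []
# Shell form so $PORT (set by the Heroku dyno at run time) is expanded
CMD scrapyrt -i 0.0.0.0 -p $PORT
```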

2. Built and tested my scraper

//On Terminal

$ sudo docker build -t scrapyrt .
$ sudo docker run -d --name sunflower -p 9080:9080 scrapyrt 

I go to http://localhost:9080/crawl.json?start_requests=true&spider_name=onlineStores and the scraper is working. Everything looks okay.

3. Now hosting on Heroku

//On Terminal

$ heroku create sunflower-spiders
$ sudo heroku container:login
$ sudo docker build -t registry.heroku.com/sunflower-spiders/web .
$ sudo docker push registry.heroku.com/sunflower-spiders/web
$ sudo heroku container:release --app sunflower-spiders web
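As an aside (this is an assumption about the Heroku CLI's container plugin, not part of my original steps), I believe the manual build/tag/push sequence above can be collapsed into the registry helper commands, which build the Dockerfile in the current directory and push it as the `web` process:

```
$ heroku container:push web --app sunflower-spiders
$ heroku container:release web --app sunflower-spiders
```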

I go to https://sunflower-spiders.herokuapp.com and the app has crashed; the error page says to check heroku logs --tail.

//heroku logs --tail

2020-04-13T09:10:53.977093+00:00 heroku[web.1]: State changed from crashed to starting
2020-04-13T09:11:05.171071+00:00 heroku[web.1]: State changed from starting to crashed
2020-04-13T09:11:05.033342+00:00 app[web.1]: usage: scrapyrt [-h] [-p PORT] [-i IP] [--project PROJECT] [-s name=value]
2020-04-13T09:11:05.033383+00:00 app[web.1]: [-S project.settings]
2020-04-13T09:11:05.033391+00:00 app[web.1]: scrapyrt: error: unrecognized arguments: scrapyrt
2020-04-13T10:35:31.260864+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=spider-sunflower.herokuapp.com request_id=bb5dfdd9-3100-420a-bcfd-22c0318f4b9a fwd="102.167.137.10" dyno= connect= service= status=503 bytes= protocol=https
2020-04-13T10:35:45.535340+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/crawl.json" host=spider-sunflower.herokuapp.com request_id=64ea25be-c596-456c-838f-33c80440de5b fwd="102.167.137.10" dyno= connect= service= status=503 bytes= protocol=https
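For what it's worth, the `unrecognized arguments: scrapyrt` line reads as if an extra `scrapyrt` token is being appended after the image's ENTRYPOINT — Docker concatenates the ENTRYPOINT with whatever CMD or run-time arguments follow it. A minimal shell sketch of how the effective command line would come out (I'm not sure where the stray token comes from, which is part of my question):

```shell
# The base image sets ENTRYPOINT ["scrapyrt", "-i", "0.0.0.0"]. Any CMD or
# run-time argument is appended after it, so a stray "scrapyrt" argument
# produces a command line that scrapyrt's argparse rejects.
entrypoint="scrapyrt -i 0.0.0.0"
extra_arg="scrapyrt"
effective="$entrypoint $extra_arg"
echo "$effective"   # scrapyrt -i 0.0.0.0 scrapyrt
```

That concatenated line matches the argparse error in the logs: `scrapyrt: error: unrecognized arguments: scrapyrt`.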

Am I missing something?

Source code

Extra

#SCRAPINGHUB/SCRAPYRT DOCKERFILE

# To build:
# > sudo docker build -t scrapyrt .
#
# to start as daemon with port 9080 of api exposed as 9080 on host
# and host's directory ${PROJECT_DIR} mounted as /scrapyrt/project
#
# > sudo docker run -p 9080:9080 -tid -v ${PROJECT_DIR}:/scrapyrt/project scrapyrt
#

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update && \
    apt-get install -y python python-dev  \
    libffi-dev libxml2-dev libxslt1-dev zlib1g-dev libssl-dev wget

RUN mkdir -p /scrapyrt/src /scrapyrt/project
RUN mkdir -p /var/log/scrapyrt

RUN wget -O /tmp/get-pip.py "https://bootstrap.pypa.io/get-pip.py" && \
    python /tmp/get-pip.py "pip==9.0.1" && \
    rm /tmp/get-pip.py 

ADD . /scrapyrt/src
RUN pip install /scrapyrt/src

WORKDIR /scrapyrt/project

ENTRYPOINT ["scrapyrt", "-i", "0.0.0.0"]

EXPOSE 9080

Error log on Heroku

