Loading scraped data into a Heroku PG database with a Scrapy crawler


I'm new to Heroku Postgres. What I've done is write a Scrapy crawler that runs without any errors. The problem is that I want to get all of the scraped data into my Heroku Postgres database. To do that, I roughly followed this tutorial.

When I run the crawler on my local machine with scrapy crawl spidername, it completes fine, but the scraped data is never inserted and no table is created in the Heroku database. I don't even get any error in my local terminal. Here is my code...

settings.py

BOT_NAME = 'crawlerconnectdatabase'

SPIDER_MODULES = ['crawlerconnectdatabase.spiders']
NEWSPIDER_MODULE = 'crawlerconnectdatabase.spiders'

DATABASE = {
    'drivername': 'postgres',
    'host': 'ec2-54-235-250-41.compute-1.amazonaws.com',
    'port': '5432',
    'username': 'dtxwjcycsaweyu',
    'password': '***',
    'database': 'ddcir2p1u2vk07',
}
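
Note that 'drivername': 'postgres' only works on older SQLAlchemy releases; SQLAlchemy 1.4 removed the postgres dialect alias, so on current versions the dict would need to spell the dialect out. A minimal sketch:

DATABASE = {
    'drivername': 'postgresql',  # SQLAlchemy 1.4+ no longer accepts 'postgres' here
    'host': 'ec2-54-235-250-41.compute-1.amazonaws.com',
    'port': '5432',
    'username': 'dtxwjcycsaweyu',
    'password': '***',
    'database': 'ddcir2p1u2vk07',
}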

items.py

from scrapy.item import Item, Field

class CrawlerconnectdatabaseItem(Item):
    name = Field()
    url = Field()
    title = Field()
    link = Field()
    page_title = Field()
    desc_link = Field()
    body = Field()
    news_headline = Field()
    pass
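
The spider that fills this item is linked at the end of the question. Purely as an illustration of how such an item gets populated (a sketch, not the asker's spider), a callback might look like:

from scrapy import Spider
from crawlerconnectdatabase.items import CrawlerconnectdatabaseItem

class ExampleSpider(Spider):
    # Illustrative spider only; the real one is linked below.
    name = 'spidername'
    start_urls = ['https://example.com']

    def parse(self, response):
        item = CrawlerconnectdatabaseItem()
        item['body'] = response.text  # the only field the Deals model below maps
        yield item

One caveat worth knowing: the Deals model below only maps body, and Deals(**item) raises a TypeError for any other key, so an item that also sets name, url, etc. would break the pipeline.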

models.py

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.engine.url import URL
import settings

DeclarativeBase = declarative_base()


def db_connect():

    return create_engine(URL(**settings.DATABASE))


def create_deals_table(engine):

    DeclarativeBase.metadata.create_all(engine)


class Deals(DeclarativeBase):
    """SQLAlchemy deals model"""
    __tablename__ = "news_data"

    id = Column(Integer, primary_key=True)
    body = Column('body', String)
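
Because the crawl reports nothing at all, it helps to test the database layer in isolation. A minimal sketch of a standalone check (assuming it is run from the package directory so that import settings inside models.py resolves, and using the SQLAlchemy 1.x API the code above targets):

from models import db_connect, create_deals_table

engine = db_connect()
create_deals_table(engine)  # should create the news_data table on Heroku
with engine.connect() as conn:
    print(conn.execute("SELECT 1").scalar())  # prints 1 if the connection is live

If this creates the table and prints 1, the credentials and models are fine and the problem lies on the Scrapy side.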

pipelines.py

from sqlalchemy.orm import sessionmaker
from models import Deals, db_connect, create_deals_table

class CrawlerconnectdatabasePipeline(object):

    def __init__(self):
        engine = db_connect()
        create_deals_table(engine)
        self.Session = sessionmaker(bind=engine)

    def process_item(self, item, spider):
        session = self.Session()
        deal = Deals(**item)

        try:
            session.add(deal)
            session.commit()
        except:
            session.rollback()
            raise
        finally:
            session.close()

        return item
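
The pipeline can also be driven by hand to rule out the insert path. A sketch under the same assumption (run from the package directory, matching the flat from models import ... style above):

from pipelines import CrawlerconnectdatabasePipeline
from items import CrawlerconnectdatabaseItem

pipeline = CrawlerconnectdatabasePipeline()  # connects and creates the table
item = CrawlerconnectdatabaseItem(body='manual test row')
pipeline.process_item(item, spider=None)  # spider is unused here; should insert one row

If a row shows up in news_data, the code itself works and the crawl is simply never invoking the pipeline, which is exactly what the answer below addresses.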

spider

You can find the code for the Scrapy spider here.

1 Answer


You need to add this to your settings.py file:

ITEM_PIPELINES = {
    'crawlerconnectdatabase.pipelines.CrawlerconnectdatabasePipeline': 300,
}
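
Scrapy only runs the pipelines listed in ITEM_PIPELINES; without this setting, process_item is never called, the crawl finishes cleanly, and that is why no error appears and no table is created. Once the setting is in place, the startup log of scrapy crawl spidername should contain a line along the lines of:

[scrapy.middleware] INFO: Enabled item pipelines:
['crawlerconnectdatabase.pipelines.CrawlerconnectdatabasePipeline']

and the news_data table should appear in the Heroku database once the first item is scraped.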
