scrapy: passing arguments to the crawler programmatically


I'm working on a Scrapy crawler. I have a Python module that fetches URLs from a database, and it should configure Scrapy to launch one spider per URL. Since I'm launching from my own script, I don't know how to pass arguments the way the -a command-line switch does, so that each run receives a different URL.
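
As I understand it, -a just turns key=value into a keyword argument for the spider's __init__, so the programmatic equivalent I'm aiming for should look roughly like this (MySpider is a placeholder; CrawlerProcess.crawl() does forward extra keyword arguments to the spider):

    from scrapy.crawler import CrawlerProcess

    process = CrawlerProcess(settings)
    # extra keyword arguments are handed to the spider, like -a on the command line
    process.crawl(MySpider, url='http://example.com/some-page')
    process.start()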

Here is the offending caller's code:

    import os

    import _mysql
    from scrapy.crawler import CrawlerProcess
    from scrapy.settings import Settings

    from webscraper.spiders import ImageSpider  # adjust to wherever ImageSpider lives

    def scrape_next_url():
        # host, username, password and database_name are module-level config values
        conn = _mysql.connect(host, username, password, database_name)

        # grab the next unprocessed entry from the queue
        conn.query("select min(sortorder) from url_queue where processed = false for update")
        query_result = conn.store_result()
        url_index = query_result.fetch_row()[0][0]

        conn.query("select url from url_queue where sortorder = " + str(url_index))
        query_result = conn.store_result()
        url_at_index = query_result.fetch_row()[0][0]

        # mark the row as taken before crawling it
        conn.query("update url_queue set processed = true where sortorder = " + str(url_index))
        conn.commit()
        conn.close()

        # load the project settings explicitly, since this does not run under the scrapy command
        settings = Settings()
        os.environ['SCRAPY_SETTINGS_MODULE'] = 'webscraper.settings'
        settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
        settings.setmodule(settings_module_path, priority='project')

        process = CrawlerProcess(settings)
        ImageSpider.start_urls.append(url_at_index)
        process.crawl(ImageSpider)
        process.start()
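
(Note that process.start() runs the Twisted reactor, and a reactor cannot be restarted in the same process, so calling scrape_next_url() once per url in a loop fails on the second call. The usual pattern is to schedule every crawl and then start once; a sketch, with fetch_all_urls() as a hypothetical helper:)

    process = CrawlerProcess(settings)
    for url in fetch_all_urls():  # hypothetical helper returning every queued url
        process.crawl(ImageSpider, url=url)
    process.start()  # blocks until all scheduled crawls have finished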

Help!

Note: I came across this question (Scrapy: Pass arguments to cmdline.execute()), but I would like to do it programmatically if possible.
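
(For contrast, the route from that question goes through the command-line machinery; roughly, with the spider name assumed:)

    from scrapy import cmdline

    # same effect as typing the command in a shell, including -a argument parsing
    cmdline.execute(['scrapy', 'crawl', 'image_spider', '-a', 'url=' + url_at_index])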

Edit:

Following your suggestion, the spider now receives the url through __init__ and strips it before seeding start_urls. A reconstruction of its shape (only the __init__ behaviour is confirmed by the rest of this post; the class name comes from the caller):

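    import scrapy

    class ImageSpider(scrapy.Spider):
        name = 'image_spider'  # assumed

        def __init__(self, url=None, *args, **kwargs):
            super(ImageSpider, self).__init__(*args, **kwargs)
            # url arrives here from process.crawl(ImageSpider, url=url_at_index)
            self.start_urls = [url.strip()]

        def parse(self, response):
            pass  # parse logic omitted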

The caller:

    process = CrawlerProcess(settings)
    process.crawl(ImageSpider, url=url_at_index)
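
(If the launcher really ended there, nothing would run at all, because process.crawl() only schedules the spider. The log below shows the engine coming up, so presumably it also calls:)

    process.start()  # starts the reactor and blocks until the crawl finishes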

I know the argument is being passed to __init__, because without it the url.strip() call fails. But the result is that the spider runs and yet crawls nothing:

(webcrawler) faisca:webscraper dlsa$ python scraper_launcher.py 
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: webscraper)
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'webscraper', 'NEWSPIDER_MODULE': 'webscraper.spiders', 'SPIDER_MODULES': ['webscraper.spiders']}
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.memusage.MemoryUsage']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled item pipelines:
['webscraper.pipelines.WebscraperPipeline']
2017-07-25 00:42:16 [scrapy.core.engine] INFO: Spider opened
2017-07-25 00:42:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-25 00:42:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
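
The log ends right after the telnet line, with no 'Crawled' entries and no errors, which suggests no request was ever emitted from start_urls. One way to check is to override start_requests in the spider (standard Scrapy API; a sketch to drop into the class):

    def start_requests(self):
        self.logger.info('start_urls = %r', self.start_urls)
        for u in self.start_urls:
            yield scrapy.Request(u, callback=self.parse)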
