Accessing the items generated by my spider when running Scrapy from a script



I am running a Scrapy spider from a Python script. I want to access the items the spider generates from within my script, but I can't figure out how to do it.

The script runs fine: the spider is invoked and generates the expected items, but I don't know how to access those items from the script.

Here is the code of the script:

import json

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


class UASpider(scrapy.Spider):
    name = 'uaspider'
    start_urls = ['http://httpbin.org/user-agent']

    def parse(self, response):
        payload = json.loads(response.body.decode(response.encoding))
        yield {'ua': payload}

def main():
    process = CrawlerProcess(get_project_settings())
    process.crawl(UASpider)
    process.start()  # the script will block here until the crawling is finished

if __name__ == '__main__':
    main()

This is the part of the log that shows the spider working correctly and generating the item:

2020-02-18 20:44:10 [scrapy.core.engine] INFO: Spider opened
2020-02-18 20:44:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-18 20:44:10 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-18 20:44:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://httpbin.org/user-agent> (referer: None)
2020-02-18 20:44:10 [scrapy.core.scraper] DEBUG: Scraped from <200 http://httpbin.org/user-agent>
{'ua': {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'}}
2020-02-18 20:44:10 [scrapy.core.engine] INFO: Closing spider (finished)

Thanks a lot for your help.


One option I can think of is to create a pipeline that stores the items, and then access the items from that storage:

  • For this I would need to configure the pipeline in the script (not in the project settings).
  • Also, the items should ideally be stored in a variable rather than a file (I'm doing this for automated testing, so speed matters).


1 Answer

I got this working following @Gallaecio's suggestion, thanks!

This solution uses a pipeline that stores the value in a global variable. The settings are read from the Scrapy project, and the extra pipeline is added from the script, so the project-wide settings don't need to change.

Here is the code that makes it work:

import json

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

user_agent = ''

class UASpider(scrapy.Spider):
    name = 'uaspider'
    start_urls = ['http://httpbin.org/user-agent']

    def parse(self, response):
        payload = json.loads(response.body.decode(response.encoding))
        yield {'ua': payload}

class TempStoragePipeline(object):
    def process_item(self, item, spider):
        # Store the scraped value in the module-level variable so the
        # script can read it after the crawl finishes.
        global user_agent
        user_agent = item.get('ua').get('user-agent')
        return item

def main():
    settings = get_project_settings()
    # Register the extra pipeline from the script instead of the project
    # settings, so the project configuration stays untouched.
    settings.set('ITEM_PIPELINES', {
        '__main__.TempStoragePipeline': 100,
    })

    process = CrawlerProcess(settings)  # use the modified settings
    process.crawl(UASpider)
    process.start()  # the script will block here until the crawling is finished

if __name__ == '__main__':
    main()
    print(f'>>> {user_agent}')
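
A global variable works well for a single value. If you need every item the spider yields, another option (not part of the original answer, just a sketch under the same assumptions) is to connect a callback to Scrapy's item_scraped signal, which avoids both the custom pipeline and the global; the names items and collect_item are made up for the example, and UASpider is the spider defined above:

from scrapy import signals
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

items = []  # collects everything the spider yields

def collect_item(item, response, spider):
    # Called by Scrapy once for every scraped item.
    items.append(item)

def main():
    process = CrawlerProcess(get_project_settings())
    crawler = process.create_crawler(UASpider)
    crawler.signals.connect(collect_item, signal=signals.item_scraped)
    process.crawl(crawler)
    process.start()  # blocks until the crawl is finished
    print(items)

Since the items are only kept in memory, nothing is persisted to disk, which is exactly what you want for fast automated tests that read results from a variable.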
