I tried to run my Scrapy spider with
scrapy crawl myspider
but it raises an error. I noticed the error occurs because Scrapy is being run with Python 3.5 instead of Python 3.6:
(sport_databasevenv) futilestudio@DRUHY-ubuntu-s-1vcpu-1gb-fra1-01-1538126914964-s-2vcpu-2gb-fra1-:~/sport_database/scraping$ scrapy crawl op_index
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/usr/local/lib/python3.5/dist-packages/scrapy/cmdline.py", line 149, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/usr/local/lib/python3.5/dist-packages/scrapy/crawler.py", line 249, in __init__
    super(CrawlerProcess, self).__init__(settings)
But when I check, the activated venv uses Python 3.6:
(sport_databasevenv) futilestudio@DRUHY-ubuntu-s-1vcpu-1gb-fra1-01-1538126914964-s-2vcpu-2gb-fra1-:~/sport_database/scraping$ python
Python 3.6.5 (default, May 3 2018, 10:08:28)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
So where is the problem?
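A quick way to confirm the mismatch is to compare the interpreter the venv actually runs with the `scrapy` script the shell resolves (a minimal diagnostic sketch, not part of the original question):

```python
import shutil
import sys

# The interpreter executing this script -- with the venv activated,
# this should be the venv's Python 3.6.
print(sys.executable)
print("%d.%d" % sys.version_info[:2])

# The `scrapy` entry-point script the shell would run. The traceback's
# /usr/local/lib/python3.5/dist-packages paths suggest it resolves to a
# system-wide script whose shebang points at the system Python 3.5,
# not the venv's interpreter.
print(shutil.which("scrapy"))
```

If `shutil.which("scrapy")` prints a path outside the venv (e.g. `/usr/local/bin/scrapy`), installing Scrapy inside the activated venv (`pip install scrapy`) puts a `scrapy` script into the venv's `bin/`, which then shadows the global one and runs under Python 3.6.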