Many command-line tools provide their own usage guide. Run scrapy crawl without arguments, or pass the usual -h/--help flag:

$ scrapy crawl -h
Usage
=====
  scrapy crawl [options] <spider>

Run a spider

Options
=======
--help, -h              show this help message and exit
-a NAME=VALUE           set spider argument (may be repeated)
--output=FILE, -o FILE  dump scraped items into FILE (use - for stdout)
--output-format=FORMAT, -t FORMAT
                        format to use for dumping items with -o

Global Options
--------------
--logfile=FILE          log file. if omitted stderr will be used
--loglevel=LEVEL, -L LEVEL
                        log level (default: DEBUG)
--nolog                 disable logging completely
--profile=FILE          write python cProfile stats to FILE
--pidfile=FILE          write process ID to FILE
--set=NAME=VALUE, -s NAME=VALUE
                        set/override setting (may be repeated)
--pdb                   enable pdb on failure
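As the help text says, -a and -s may each be repeated: arguments passed with -a are handed to the spider's constructor, and -s overrides a setting for this run only. For instance (the spider name "reddit" and the subreddit argument are placeholders for illustration; LOG_LEVEL is a real Scrapy setting):

$ scrapy crawl reddit -a subreddit=programming -s LOG_LEVEL=INFO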
See the entry for -o:

--output=FILE, -o FILE  dump scraped items into FILE (use - for stdout)

See the entry for -t:

--output-format=FORMAT, -t FORMAT
                        format to use for dumping items with -o
Putting it together, -o reddit.csv -t csv means "dump the items scraped by the crawl command into the reddit.csv file, and use the csv format when writing to the file".
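For example, with a spider named reddit (the name is implied by the question's output file and used here as a placeholder):

$ scrapy crawl reddit -o reddit.csv -t csv

Per the -o entry above, you can also pass - as the file to dump items to stdout, e.g. -o - -t json. Note that recent Scrapy releases can usually infer the format from the -o file extension, so -t is often unnecessary there.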
You can also look at the "Using the scrapy tool" section of the official documentation. Usage and options can differ from command to command.
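If you want to try these commands end to end, here is a minimal, hypothetical spider sketch. The name "reddit" matches the example above, but the start URL and CSS selectors are illustrative placeholders, not a maintained Reddit scraper:

import scrapy

class RedditSpider(scrapy.Spider):
    # "scrapy crawl reddit" finds this spider by its name attribute.
    name = "reddit"
    start_urls = ["https://old.reddit.com/r/programming/"]  # placeholder URL

    def parse(self, response):
        # Each yielded dict becomes one row in reddit.csv when run with:
        #   scrapy crawl reddit -o reddit.csv -t csv
        for post in response.css("p.title a.title"):  # placeholder selectors
            yield {
                "title": post.css("::text").get(),
                "url": post.attrib.get("href"),
            }

Drop a file like this into a Scrapy project's spiders/ directory, and the crawl command above writes one CSV row per yielded item.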