I plan to scrape several pages in a row using a list of URLs, with the code below.
Is there a smart way to replace the manually inserted terms in desired_google_queries by referencing an extended list of queries (e.g. from a CSV or Excel file)?
from bs4 import BeautifulSoup
import urllib.request
import csv

desired_google_queries = ['Word', 'lifdsst', 'yvou', 'should', 'load']

for query in desired_google_queries:
    url = 'http://google.com/search?q=' + query
    req = urllib.request.Request(url, headers={'User-Agent': 'Magic Browser'})
    response = urllib.request.urlopen(req)
    html = response.read()
    soup = BeautifulSoup(html, 'html.parser')
    resultStats = soup.find(id="resultStats").string
    print(resultStats)

with open('queries.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['query', 'resultStats'])
    for query in desired_google_queries:
        ...
        spamwriter.writerow([query, resultStats])
You can put the scraping logic into a function and then call it for each query read from the .csv file.