Python and BeautifulSoup requests from a URL list

Posted 2024-04-26 22:40:34


I plan to scrape several pages in succession using a list of URLs, with the code below.

Is there a smart way to replace the manually inserted terms in desired_google_queries by referencing an extended list of queries (which could be a CSV or Excel file)?

from bs4 import BeautifulSoup
import urllib.request
import csv

desired_google_queries = ['Word' , 'lifdsst', 'yvou', 'should', 'load']

for query in desired_google_queries:

    url = 'http://google.com/search?q=' + query

    req = urllib.request.Request(url, headers={'User-Agent': "Magic Browser"})
    response = urllib.request.urlopen(req)
    html = response.read()

    soup = BeautifulSoup(html, 'html.parser')

    resultStats = soup.find(id="resultStats").string
    print(resultStats)

with open('queries.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['query', 'resultStats'])
    for query in desired_google_queries:
        ...
        spamwriter.writerow([query, resultStats])

Tags: csv, import, url, list, for, request, html, google
1 Answer
User
#1 · Posted 2024-04-26 22:40:34

You can put the scraping logic into a function and then call it for each query read from the .csv file:

from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import csv


def scrape_site(query):
    # URL-encode the query so that multi-word queries form a valid URL
    url = 'http://google.com/search?q=' + urllib.parse.quote_plus(query)

    req = urllib.request.Request(url, headers={'User-Agent': "Magic Browser"})
    response = urllib.request.urlopen(req)
    html = response.read()

    soup = BeautifulSoup(html, 'html.parser')

    resultStats = soup.find(id="resultStats").string
    return resultStats

#####################################################
# Read in queries from .csv to desired_google_queries
# (assumes one query per row in the first column of queries.csv)

with open('queries.csv', newline='') as infile:
    desired_google_queries = [row[0] for row in csv.reader(infile) if row]

# Write the results to a separate file so the input file
# is not overwritten while it is being read
with open('results.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(['query', 'resultStats'])

    for query in desired_google_queries:
        resultStats = scrape_site(query)
        spamwriter.writerow([query, resultStats])
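
The question also mentions Excel as a possible source. Here is a minimal sketch of reading the query list from a spreadsheet with pandas instead; it assumes the queries sit in the first column of a hypothetical queries.xlsx, and note that reading .xlsx files with pandas requires the openpyxl package:

import pandas as pd

# header=None treats the first row as data rather than column names;
# column 0 is assumed to hold the queries (hypothetical layout)
df = pd.read_excel('queries.xlsx', header=None)
desired_google_queries = df[0].dropna().astype(str).tolist()

for query in desired_google_queries:
    print(query, scrape_site(query))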
