Modifying scraper code to handle multiple URLs

Posted 2024-04-25 17:54:56


I'm trying to download all the images from several pages of the same website. I have code that scrapes all the images from a single page, but I can't find a simple way to repeat the process for several URLs.

import re
import requests
from bs4 import BeautifulSoup

site = 'SiteNameHere'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)

Tags: in, image, import, re, url, img, get, response
1 answer
User
#1 · Posted 2024-04-25 17:54:56

I would try adding/changing each URL request inside a for loop. See the example below (using the requests module and lxml):

import lxml.html
import requests

# List of CSID numbers
CSIDList = ['12', '132', '455']

for CSID in CSIDList:
    UrlCompleted = 'http://www.chemspider.com/ChemicalStructure.%s.html?rid' % CSID
    ChemSpiderPage = requests.get(UrlCompleted)
    html = lxml.html.fromstring(ChemSpiderPage.content)
    # XPath describing the location of the string (note the "text()" at the end)
    MolecularWeight = html.xpath('//*[@id="ctl00_ctl00_ContentSection_ContentPlaceHolder1_RecordViewDetails_rptDetailsView_ctl00_structureHead"]/ul[1]/li[2]/text()')

    try:
        print(CSID, MolecularWeight)
        print(MolecularWeight)
    except Exception:
        print('ERROR')
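
Applied to your own image scraper, a minimal sketch of the same pattern could look like the following (the entries in sites are placeholders; swap in your real page URLs):

import re
import requests
from bs4 import BeautifulSoup

# Hypothetical list of pages to scrape -- replace with your real URLs
sites = ['SitePageOneHere', 'SitePageTwoHere']

for site in sites:
    response = requests.get(site)
    soup = BeautifulSoup(response.text, 'html.parser')
    img_tags = soup.find_all('img')
    urls = [img['src'] for img in img_tags]

    for url in urls:
        filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
        if filename is None:        # skip sources that don't match the pattern
            continue
        if 'http' not in url:       # handle relative image paths, as in your original code
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        with open(filename.group(1), 'wb') as f:
            f.write(response.content)

Everything that was previously at module level simply moves inside the outer loop, so each page is fetched, parsed, and its images saved in turn.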
