Scraping paginated results by looping over the "page=" number in the URL


I am trying to scrape data from this web page, and from the 900+ pages that follow it: https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page=1&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True

Importantly, the scraper should not follow the pagination links but should instead iterate over the "page=" number in the URL. This is because the results are loaded dynamically into the original page, and the pagination links simply point back to that original page.
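
For reference, one way to express this, rewriting only the page= value while leaving the rest of the query string untouched, is with the standard library's urllib.parse. This is only a minimal sketch, and the helper name with_page is made up here for illustration:

from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

base_url = ("https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11"
            "&page=1&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True")

def with_page(url, page):
    # replace only the page= value; every other query parameter is kept as it is
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    query["page"] = [str(page)]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

print(with_page(base_url, 2))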

I have tried writing something that finds the final page by looping over the page number in the URL, using the "last" class of the pagination ul, but I am not sure how to target that specific part of the URL while keeping the search query the same for every request:

import requests
from bs4 import BeautifulSoup

url_pagination = "https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page=1&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True"

r = requests.get(url_pagination)
soup = BeautifulSoup(r.content, "html.parser")

page_url = "https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page={}" + "&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True"
# this is the line that raises the ValueError quoted below
last_page = soup.find('ul', class_='pagination').find('li', class_='last').a['href'].split('=')[1]
dept_page_url = [page_url.format(i) for i in range(1, int(last_page)+1)]

print(dept_page_url)

Ideally, I just want to scrape the names from the "secondaryTitle" class, plus the second, unnamed div in each row, which contains the date.

I keep getting an error: ValueError: invalid literal for int() with base 10: '2019-07-11&searchTerm'


Tags: data, https, url, webpage, search, page, pagination, last
2 Answers

You can try this script, but be aware that it goes from page 1 all the way to the last page, 966:

import requests
from bs4 import BeautifulSoup

next_page_url = 'https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page=1&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True'

# this goes to page '966'
while True:
    print('Scraping {} ...'.format(next_page_url))
    r = requests.get(next_page_url)

    soup = BeautifulSoup(r.content, "html.parser")
    for secondary_title, date in zip(soup.select('.secondaryTitle'), soup.select('.secondaryTitle + *')):
        print('{: >20} - {}'.format(date.get_text(strip=True), secondary_title.get_text(strip=True)))

    next_link = soup.select_one('a:has(span:contains(Next))')
    if next_link:
        next_page_url = 'https://hansard.parliament.uk' + next_link['href'] + '&partial=True'
    else:
        break

Prints:

Scraping https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page=1&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True ...
     17 January 2007 - Ian Pearson
    21 December 2017 - Baroness Vere of Norbiton
          2 May 2019 - Lord Parekh
     4 February 2013 - Baroness Hanham
    21 December 2017 - Baroness Walmsley
     9 February 2010 - Colin Challen
     6 February 2002 - Baroness Farrington of Ribbleton
       24 April 2007 - Barry Gardiner
     17 January 2007 - Rob Marris
        7 March 2002 - The Parliamentary Under-Secretary of State, Department for Environment, Food and Rural Affairs (Lord Whitty)
     27 October 1999 - Mr. Tom Brake  (Carshalton and Wallington)
     9 February 2004 - Baroness Miller of Chilthorne Domer
        7 March 2002 - The Secretary of State for Environment, Food and Rural Affairs (Margaret Beckett)
    27 February 2007 - 
      8 October 2008 - Baroness Andrews
       24 March 2011 - Lord Henley
    21 December 2017 - Lord Krebs
    21 December 2017 - Baroness Young of Old Scone
        16 June 2009 - Mark Lazarowicz
        14 July 2006 - Lord Rooker
Scraping https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&searchTerm=%22climate+change%22&startDate=1800-01-01&page=2&partial=True ...
     12 October 2006 - Lord Barker of Battle
     29 January 2009 - Lord Giddens


... and so on.

Your error comes from taking the wrong item out of your split. You want -1. Observe:

last_page = soup.find('ul', class_='pagination').find('li', class_='last').a['href']
print(last_page)
print(last_page.split('=')[1])
print(last_page.split('=')[-1])

Gives:

/search/Contributions?endDate=2019-07-11&searchTerm=%22climate+change%22&startDate=1800-01-01&page=966

Using index 1 on the split:

2019-07-11&searchTerm

Versus -1:

966
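
As an alternative to counting split indices, the page number can also be read out of the href by name with urllib.parse.parse_qs, which does not depend on the order of the query parameters. A small sketch, not part of the original answer:

from urllib.parse import urlsplit, parse_qs

href = "/search/Contributions?endDate=2019-07-11&searchTerm=%22climate+change%22&startDate=1800-01-01&page=966"
# parse the query string into a dict, then look up the page parameter directly
last_page = int(parse_qs(urlsplit(href).query)["page"][0])
print(last_page)  # 966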

To get the information you want from each page, I would use CSS selectors and zip, as in the other answer. Below are some other loop constructs, using a Session for efficiency given the number of requests.


You can make an initial request and extract the number of pages, then loop over those pages. Use a Session object for the efficiency of connection re-use:

import requests
from bs4 import BeautifulSoup as bs

def make_soup(s, page):
    page_url = "https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page={}&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True"
    r = s.get(page_url.format(page))
    soup = bs(r.content, 'lxml')
    return soup

with requests.Session() as s:
    soup = make_soup(s, 1)
    pages = int(soup.select_one('.last a')['href'].split('page=')[1])
    for page in range(2, pages + 1):
        soup = make_soup(s, page)
        #do something with soup 
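
For the "#do something with soup" placeholder, the same selector-and-zip extraction used in the first answer could be factored into a small helper. A sketch only; the name extract_rows is made up here:

def extract_rows(soup):
    # pair each secondaryTitle (the speaker's name) with the element that follows it,
    # which holds the date, using the same selectors as the first answer
    return [(date.get_text(strip=True), name.get_text(strip=True))
            for name, date in zip(soup.select('.secondaryTitle'),
                                  soup.select('.secondaryTitle + *'))]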

Or you can loop until the "last" class stops appearing:

import requests
from bs4 import BeautifulSoup as bs

present = True
page = 1
#results = {}

def make_soup(s, page):
    page_url = "https://hansard.parliament.uk/search/Contributions?endDate=2019-07-11&page={}&searchTerm=%22climate+change%22&startDate=1800-01-01&partial=True"
    r = s.get(page_url.format(page))
    soup = bs(r.content, 'lxml')
    return soup

with requests.Session() as s:
    while present:
        soup = make_soup(s, page)
        present = len(soup.select('.last')) > 0
        #results[page] = soup.select_one('.pagination-total').text
        #extract info
        page+=1
