How do I get the next page using BeautifulSoup?

Published 2024-04-24 08:41:57


I wrote code to extract all the products from a given URL, and it works fine. But some URLs span many pages, so I tried to collect all the next pages by finding the ul that holds the page URLs. The problem is that it only shows the first 3 pages and the last page.

The pagination ul:

<ul class="plp-pagination__wrapper">
    <li class="plp-pagination__nav disable">
        <a href="" rel="prev" class="plp-pagination__navpre">
            previous </a>
    </li>
    <li class="plp-pagination__nav active"><a class="plp-pagination__navpages" href="javascript:void(0);">1</a></li>
    <li class="plp-pagination__nav"><a class="plp-pagination__navpages" href="here is the page url">2</a></li>
    <li class="plp-pagination__nav"><a class="plp-pagination__navpages" href="here is the page url">3</a></li>
    <li class="plp-pagination__nav"><a class="plp-pagination__navpages" href="here is the page url">4</a></li>
    <li class="plp-pagination__nav"><a class="plp-pagination__navpages" href="here is the page url">5</a></li>
    <li class="plp-pagination__nav"><span class="plp-pagination__navplaceholder"></span></li>
    <li class="plp-pagination__nav"><a class="plp-pagination__navpages" href="here is the page url">54</a></li>
    <li class="plp-pagination__nav">
        <a class="plp-pagination__navnext" href="here is the page url" rel="next">
            next</a>
    </li>
</ul>

The scraping function:

def update():
    df = pd.DataFrame( columns=['poduct_name','image_url','price'])
    #list of required pages
    urls= ['1st page','2nd page','3rd page']

    for url in urls:
        page = requests.get(url)
        soup = BeautifulSoup(page.text)
        #get the list of pages in pagination ul   
        new_pages= soup.find('ul', attrs={'class':'plp-pagination__wrapper'})
        #check if there is pagination ul
        if(new_pages!=None):
            new_urls= new_pages.find_all('li', attrs={'class':'plp-pagination__navpages'})
            for x in new_urls: 
                 urls.append(x)
        product_div= soup.find_all('div', attrs={'class':'comp-productcard__wrap'})
        product_list=[]
        for x in product_div:
            poduct_name= x.find('p', attrs={'class':'comp-productcard__name'}).text.strip()
            product_price_p= x.find('p', attrs={'class':'comp-productcard__price'}).text
            product_img= x.img['src']
            product_list.append({'poduct_name':poduct_name,'image_url':product_img,'price':product_price})
            df = df.append(pd.DataFrame(product_list))
    return df

Tags: the, name, url, here, is, page, pagination, li
2 Answers

By the looks of it, the site is Carrefour. Here is roughly how I would do it (pseudocode).

Request the first page. After it loads, grab the anchor with the class plp-pagination__navnext and use that anchor's href as the URL of the next request. You don't start with a list of all the page URLs; after scraping each page, you scrape the URL of the next page and request it.

Pseudocode:

1. Load the first page
2. Scrape whatever you're looking to scrape
3. Get the href of the next-page element via the selector 'a.plp-pagination__navnext'
4. Load the next page (its URL is the href you just acquired)
5. Repeat from step 2
Stop when you reach the last page, i.e. when the next-page element's href is '' on Carrefour.
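A minimal sketch of that loop in Python (assuming the same requests/BeautifulSoup setup as the question; scrape_products is a hypothetical callback standing in for whatever you extract from each page):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def scrape_all_pages(first_url, scrape_products):
    # follow the rel="next" anchor until its href is empty (last page)
    url = first_url
    while url:
        page = requests.get(url)
        soup = BeautifulSoup(page.text, "lxml")
        scrape_products(soup)  # caller-supplied per-page scraping
        next_link = soup.find("a", attrs={"class": "plp-pagination__navnext"})
        href = next_link.get("href", "") if next_link else ""
        # urljoin copes with relative hrefs; an empty href ends the loop
        url = urljoin(url, href) if href else None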

You can work around this issue by adding the following script:

urls = []
home_page = requests.get("https://www.carrefourksa.com/mafsau/en/food-beverages/c/FKSA1000000?&qsort=relevance&pg")
home_soup = BeautifulSoup(home_page.content, "lxml")
# the last pagination link holds the number of the last page
page_nmb_find = home_soup.find_all("a", {"class": "plp-pagination__navpages"})
last_page = int(page_nmb_find[-1].get_text())

for nmb in range(0, last_page):
    urls.append(f"https://www.carrefourksa.com/mafsau/en/food-beverages/c/FKSA1000000?&qsort=relevance&pg={nmb}")

In summary, your code should look like this:

def update():
    df = pd.DataFrame(columns=['product_name', 'image_url', 'price'])
    # build the list of page URLs up front from the last pagination link
    urls = []
    home_page = requests.get("https://www.carrefourksa.com/mafsau/en/food-beverages/c/FKSA1000000?&qsort=relevance&pg")
    home_soup = BeautifulSoup(home_page.content, "lxml")
    page_nmb_find = home_soup.find_all("a", {"class": "plp-pagination__navpages"})
    last_page = int(page_nmb_find[-1].get_text())
    for nmb in range(0, last_page):
        urls.append(f"https://www.carrefourksa.com/mafsau/en/food-beverages/c/FKSA1000000?&qsort=relevance&pg={nmb}")

    for url in urls:
        page = requests.get(url)
        soup = BeautifulSoup(page.text, "lxml")
        # every page URL is already in urls, so there is no need to
        # re-scrape the pagination ul here
        product_div = soup.find_all('div', attrs={'class': 'comp-productcard__wrap'})
        product_list = []
        for x in product_div:
            product_name = x.find('p', attrs={'class': 'comp-productcard__name'}).text.strip()
            product_price_p = x.find('p', attrs={'class': 'comp-productcard__price'}).text
            product_img = x.img['src']
            product_list.append({'product_name': product_name, 'image_url': product_img, 'price': product_price_p})
        # concatenate once per page (DataFrame.append is deprecated, and
        # appending inside the inner loop duplicated rows)
        df = pd.concat([df, pd.DataFrame(product_list)], ignore_index=True)
    return df

(PS: product_price doesn't seem to exist, so I replaced it with product_price_p.)
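For completeness, a quick usage sketch (the imports match what the function relies on; the CSV filename is just an example):

import requests
import pandas as pd
from bs4 import BeautifulSoup

df = update()
print(df.head())                                  # inspect the first few rows
df.to_csv("carrefour_products.csv", index=False)  # example output filename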

Hope this helps!
