How do I find the number of the last page of a website with BeautifulSoup?

Posted 2024-04-28 23:24:39


I am scraping data from Flipkart and want to collect the name, price, and rating of every product, so I need to pull that information from all of the pages. This listing has 11 pages in total: https://www.flipkart.com/mobiles/mi~brand/pr?sid=tyy%2C4io&otracker=nmenu_sub_Electronics_0_Mi How can I loop through to the last page, i.e. page 11?


2 answers

The URLs for pages 1 through 11 follow this pattern:

https://www.flipkart.com/mobiles/mi~brand/pr?sid=tyy%2C4io&otracker=nmenu_sub_Electronics_0_Mi&page={n}

    where n is from 1 to 11

So you can write a loop where n runs from 1 to 11 and substitute the current value of n into the URL on each iteration.
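A minimal sketch of that idea, just building the eleven page URLs from the pattern above (the query string is copied from the question):

```python
base = ("https://www.flipkart.com/mobiles/mi~brand/pr"
        "?sid=tyy%2C4io&otracker=nmenu_sub_Electronics_0_Mi&page={n}")

# Pages are numbered 1 through 11, so range() needs an exclusive bound of 12.
urls = [base.format(n=n) for n in range(1, 12)]

print(len(urls))  # 11
print(urls[0])    # ends with &page=1
```

Each URL in the list can then be fetched and parsed in turn.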

from bs4 import BeautifulSoup
import requests
from itertools import zip_longest

BASE = ("https://www.flipkart.com/mobiles/mi~brand/pr"
        "?sid=tyy%2C4io&otracker=nmenu_sub_Electronics_0_Mi")


def max_page_num():
    """Read the pagination label (e.g. "Page 1 of 11") and return last page + 1."""
    r = requests.get(BASE)
    soup = BeautifulSoup(r.text, 'html.parser')
    last = 1  # fall back to a single page if the selector finds nothing
    # The pagination counter sat in a div with class '_2zg3yZ' at the time of
    # writing; Flipkart changes these generated class names regularly.
    for item in soup.find_all("div", {'class': '_2zg3yZ'}):
        # The label text ends with the total page count.
        last = int(list(item.strings)[0].split(" ")[-1])
    return last + 1  # exclusive upper bound for range()


def parse():
    names, prices, ratings = [], [], []
    with requests.Session() as req:
        for num in range(1, max_page_num()):
            print(f"Extracting Page# {num}")
            r = req.get(f"{BASE}&page={num}")
            soup = BeautifulSoup(r.text, 'html.parser')
            for name in soup.find_all("div", {'class': '_3wU53n'}):
                names.append(name.text)
            for price in soup.find_all("div", {'class': '_1vC4OE _2rQ-NK'}):
                prices.append(price.text[1:])  # drop the leading currency sign
            for rate in soup.find_all("div", {'class': 'hGSR34'}):
                ratings.append(rate.text)
    # The three lists can differ in length (not every product has a rating),
    # so zip_longest pads the shorter lists with None instead of truncating.
    for a, b, c in zip_longest(names, prices, ratings):
        print(f"Name: {a}, Price: {b}, Rate: {c}")


parse()
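The answer prints the results with `itertools.zip_longest` rather than `zip` because the scraped lists can end up with different lengths (some product cards have no rating); `zip` would silently truncate to the shortest list, while `zip_longest` pads the missing slots with `None`. A small illustration with hypothetical sample data:

```python
from itertools import zip_longest

names = ["Redmi Note 9", "Redmi 9A", "Mi 10"]  # hypothetical sample data
prices = ["11,999", "6,799"]                    # one price missing

for n, p in zip_longest(names, prices):
    print(n, p)
# The third row pairs "Mi 10" with None instead of being dropped.
```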
