
I identified the other pages with Chrome's Inspect tool. The requests are of type XHR, and the pages are distinguished by the numbers in the URL:
"https://us.pandora.net/en/charms/?sz=30&start=30&format=page-element" is the first page,
"https://us.pandora.net/en/charms/?sz=30&start=60&format=page-element" is the second page,
"https://us.pandora.net/en/charms/?sz=30&start=90&format=page-element" is the third page, and so on.

This continues all the way up to start=990.
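
For reference, a minimal sketch of how those paginated URLs could be built up front, assuming the step of 30 and the upper offset of 990 described above:

# Build the list of XHR URLs; the start offset steps by 30 up to 990
# (values taken from the question, so adjust them if the site changes).
base_url = "https://us.pandora.net/en/charms/?sz=30&start={}&format=page-element"
page_urls = [base_url.format(start) for start in range(0, 991, 30)]
print(len(page_urls))  # 34 URLs, covering start=0 through start=990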

Here is my code so far:

from urllib.request import urlopen
from bs4 import BeautifulSoup
url = "https://us.pandora.net/en/charms/?sz=30&start=60&format=page-element"
html = urlopen(url)

page_count = 0
while page_count < 0:
    url = "https://us.pandora.net/en/charms/?sz=30&start=%d&format=page-element" %(page_count)
    page_count += 30

html = urlopen(url)

My goal is to get all of the products that are on sale. Reading the source with Inspect, I found that the products on sale have two classes: "price-sales" and "price-standard".

Here I am trying to get all the products, using the code above to get around the infinite scroll, and collect every product that is on sale into one list.

def retrieve_products_sale():
    all_products = soup.find_all('li', class_='grid-tile')
    num_of_prods = []
    for items in all_products:
        if items == class_'price-standard':
            num_of_prods.append(items)
    print(num_of_prods)
if __name__ == '__main__':
    retrieve_products_sale()

I am not sure how to proceed from here.

Let me add: my end goal is to get all the products that are on sale into a list, including how many products there are and the percentage for each one.


2 Answers

You can create the while loop inside a function and use .select() instead of find_all() to avoid defining an extra loop just to filter out the items you want.

import requests
from bs4 import BeautifulSoup

url = "https://us.pandora.net/en/charms/?sz=30&start={}&format=page-element"

def fetch_items(link, page):
    # "page" is really the start offset; it advances by 30 per request
    while page <= 100:
        print("current page no: ", page)
        res = requests.get(link.format(page), headers={"User-Agent": "Mozilla/5.0"})
        soup = BeautifulSoup(res.text, "lxml")
        # pick up every "price-standard" element inside a grid tile
        for items in soup.select('.grid-tile .price-standard'):
            product_list.append(items)

        print(product_list)
        page += 30

if __name__ == '__main__':
    page = 0
    product_list = []  # module-level list that fetch_items() appends to
    fetch_items(url, page)

Maybe something like this:

from urllib.request import urlopen
from bs4 import BeautifulSoup    

def retrieve_products_sale(soup):
    all_products = soup.find_all('li', class_='grid-tile')
    num_of_prods = []
    for items in all_products:
        if items == class_'price-standard':
            num_of_prods.append(items)
    print(num_of_prods)

if __name__ == '__main__':
    page_count = 0        
    while page_count <= 990:
        url = "https://us.pandora.net/en/charms/?sz=30&start=%d&format=page-element" % page_count
        html = urlopen(url)
        soup = BeautifulSoup(html, "html.parser")
        retrieve_products_sale(soup)
        page_count += 30

If you need all of the data in one list, then use a list outside the function:

from urllib.request import urlopen
from bs4 import BeautifulSoup    

def retrieve_products_sale(soup):
    all_products = soup.find_all('li', class_='grid-tile')
    num_of_prods = []
    for items in all_products:
        if items == class_'price-standard':
            num_of_prods.append(items)
    #print(num_of_prods)
    return num_of_prods

if __name__ == '__main__':
    page_count = 0      
    all_results = []  
    while page_count <= 990:
        url = "https://us.pandora.net/en/charms/?sz=30&start=%d&format=page-element" % page_count
        html = urlopen(url)
        soup = BeautifulSoup(html, "html.parser")
        all_results += retrieve_products_sale(soup)
        page_count += 30

    print(all_results)

EDIT: I didn't know what you wanted to do with

 if items == class_'price-standard':

so I used

 for x in items.find_all(class_='price-standard'):

and it gives some results (though not for every page):

from urllib.request import urlopen
from bs4 import BeautifulSoup

def retrieve_products_sale(soup):
    all_products = soup.find_all('li', class_='grid-tile')
    num_of_prods = []
    for items in all_products:
        # keep only tiles that actually contain a "price-standard" element
        for x in items.find_all(class_='price-standard'):
            #print(x)
            num_of_prods.append(x)
    print(num_of_prods)

if __name__ == '__main__':
    page_count = 0
    while page_count <= 990:
        # step through the start offset 30 items at a time
        url = "https://us.pandora.net/en/charms/?sz=30&start=%d&format=page-element" % page_count
        html = urlopen(url)
        soup = BeautifulSoup(html, "html.parser")
        retrieve_products_sale(soup)
        page_count += 30
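
To get all the way to the asker's end goal (how many products are on sale and the percentage for each one), here is a minimal sketch, assuming each sale tile contains both a "price-sales" and a "price-standard" element whose text holds a plain dollar amount; the class names and the price format come from the question, not from the live markup, so they may need adjusting:

import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

def parse_price(text):
    # pull the first number such as 19.99 out of a price string like "$19.99"
    match = re.search(r"\d+(?:\.\d+)?", text)
    return float(match.group()) if match else None

def summarize_sales(soup, results):
    # for every grid tile that shows both a sale price and a standard price,
    # record the two prices and the discount percentage
    for tile in soup.find_all('li', class_='grid-tile'):
        sale = tile.find(class_='price-sales')        # assumed class name from the question
        standard = tile.find(class_='price-standard')
        if sale and standard:
            sale_price = parse_price(sale.get_text())
            standard_price = parse_price(standard.get_text())
            if sale_price and standard_price:
                discount = round(100 * (1 - sale_price / standard_price), 1)
                results.append((sale_price, standard_price, discount))

if __name__ == '__main__':
    results = []
    page_count = 0
    while page_count <= 990:
        url = "https://us.pandora.net/en/charms/?sz=30&start=%d&format=page-element" % page_count
        soup = BeautifulSoup(urlopen(url), "html.parser")
        summarize_sales(soup, results)
        page_count += 30
    print("products on sale:", len(results))
    for sale_price, standard_price, discount in results:
        print(sale_price, standard_price, "%s%% off" % discount)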
