How to scrape more URLs from every website

Posted 2024-04-27 00:06:26


I am trying to load all of the links found in a particular site's sitemap, then load each of those links and pull more data (stock, size, and ID). So far the code finds all of the links and converts them to .json, but when it gets to loading each page and grabbing the extra data, it only does so for the last link in the sitemap. I need it to do this for every link in the sitemap. If anyone can help me out, that would be awesome!

Thanks :)

import json

import requests
from bs4 import BeautifulSoup

UrlDB = []  # collects every product URL found in the sitemap

def check_endpoint():
    url = 'https://shopnicekicks.com/sitemap_products_1.xml'
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'lxml')
    for url in soup.find_all('loc'):  # load the sitemap and find all product links
        produrl = url.text
        UrlDB.append(produrl)
        endpoint = produrl + '.json'  # take a product link and convert it to .json
        JsonUrl = endpoint

    #load each product link and find variants.
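    # NOTE: this block sits outside the for loop above, so it runs only
    # once, with JsonUrl still set to the last entry from the sitemap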
    req = requests.get(JsonUrl)
    reqJson = json.loads(req.text) 
    CartLink = JsonUrl.split("/")[2]    
    CartLink = "https://{}".format(CartLink)

    for product in reqJson['product']['variants']:
        Variants = product['id']
        Size = product['title']
        Price = product['price']                   
        Stock = product['inventory_quantity']
        atclink = "Size = {}, Stock = {}, Link = {}, /cart/{}:1 ".format(Size, Stock, CartLink, Variants)
        print(atclink)  # print all variants


    return
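
As posted, the requests.get(JsonUrl) call is outside the for loop, so by the time it runs, JsonUrl only holds the last URL from the sitemap. Below is a minimal sketch of one way to restructure it, moving the fetch and the variant loop inside the sitemap loop; it assumes the same Shopify-style .json product endpoints and that every response contains a 'product' key:

import json

import requests
from bs4 import BeautifulSoup

def check_endpoint():
    url = 'https://shopnicekicks.com/sitemap_products_1.xml'
    soup = BeautifulSoup(requests.get(url).text, 'lxml')

    for loc in soup.find_all('loc'):
        json_url = loc.text + '.json'  # product endpoint (assumed Shopify layout)
        cart_link = "https://{}".format(json_url.split("/")[2])

        # fetch and parse each product *inside* the loop,
        # so every sitemap entry gets processed
        req_json = json.loads(requests.get(json_url).text)
        for product in req_json['product']['variants']:
            print("Size = {}, Stock = {}, Link = {}, /cart/{}:1".format(
                product['title'], product['inventory_quantity'],
                cart_link, product['id']))

With this structure the variants are printed for every product in the sitemap, not just the final one.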
