How to format scraper output

Posted 2024-05-19 03:05:19


I'm trying to pull prices off a website in order to build a scraper, and I wrote the program below. To get all the HTML I use BeautifulSoup with the default html.parser. Then I try to narrow the information down with a variable called generale, set to soup.findAll("span"). Next I need to clean up the list (I think) that this creates in order to get the prices, and that's where I'm stuck. Any suggestions? I'm not sure how to even approach the problem.

import smtplib
import time
from bs4 import BeautifulSoup as bs
import requests

URL = "https://www.allkeyshop.com/blog/buy-battlefield-5-cd-key-compare-prices/"
headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"}

def Check_page1():
    page = requests.get(URL, headers=headers)
    soup = bs(page.content, 'html.parser')
    generale = soup.findAll('span')
    price = ?  # <-- this is the part I can't work out
    print(price)
    print(generale)

print(Check_page1())
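To make it clearer what I mean by "cleaning up" generale: it is a list of Tag objects, and I imagine filtering it down to just the price spans, roughly like the sketch below (matching on a class containing "price" is only a guess, I haven't found the real class name):

# rough sketch only: keep spans whose class attribute looks price-related
# ("price" as a class substring is my guess, not something I verified on the page)
candidates = [s for s in generale if "price" in " ".join(s.get("class") or [])]
for s in candidates:
    print(s.get_text(strip=True))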

Tags: import, parser, url, bs, html, check, price, requests
2 Answers

There doesn't seem to be any <span class="price">. Here is what I did:

In [1]: import requests 
   ...:  
   ...: URL = "https://www.allkeyshop.com/blog/buy-battlefield-5-cd-key-compare-prices/" 
   ...:  
   ...: headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"} 

In [2]: page = requests.get(URL, headers=headers)                                                        
Out[2]: <Response [200]>

In [3]: import re                                                                                        

In [4]: re.findall(r'<span.*?</span>', page.text)

There are a lot of spans. To me, the ones below look the most like prices:

 '<span class="topclick-list-element-price">10.56&euro;</span>',
 '<span class="topclick-list-element-price">2.79&euro;</span>',
 '<span class="topclick-list-element-price">2.90&euro;</span>',
 '<span class="topclick-list-element-price">27.86&euro;</span>',
 '<span class="topclick-list-element-price">11.15&euro;</span>',
 '<span class="topclick-list-element-price">11.46&euro;</span>'

So I refined the regex:

In [7]: prices = [float(p) for p in re.findall(r'<span class="topclick-list-element-price">(.*)&euro;</span>', page.text)]

In [8]: print(prices)                                                                                    
[10.56, 2.79, 2.9, 27.86, 11.15, 11.46, 11.2, 18.67, 9.69, 24.25,
20.25, 19.59, 44.21, 28.3, 31.92, 41.39, 4.76, 24.57, 8.75, 28.62, 
27.14, 8.52, 31.95, 24.59, 27.93, 27.86, 5.5, 24.99, 37.99, 14.27, 
36.0, 8.75, 35.99, 37.34, 23.4, 22.98, 31.95, 36.89, 25.57, 27.9, 
35.88, 41.39, 33.22, 42.29, 31.29, 42.29, 38.09, 33.89, 33.59, 28.83,
10.56, 2.79, 2.9, 27.86, 11.15, 11.46, 11.2, 18.67, 9.69, 24.25, 
20.25, 19.59, 44.21, 28.3, 31.92, 41.39, 4.76, 24.57, 8.75, 28.62, 
27.14, 8.52, 31.95, 24.59, 27.93, 27.86, 5.5, 24.99, 37.99, 14.27, 
36.0, 8.75, 35.99, 37.34, 23.4, 22.98, 31.95, 36.89, 25.57, 27.9, 
35.88, 41.39, 33.22, 42.29, 31.29, 42.29, 38.09, 33.89, 33.59, 28.83, 
24.25, 12.11, 28.84, 37.36, 23.71, 2.19, 2.99, 34.25, 11.38, 14.99, 
20.67, 4.99, 25.56, 1.81, 12.99, 19.73, 9.99, 9.99, 0.92, 11.99, 
27.93, 22.94, 8.46, 32.78, 40.03, 11.19, 12.45, 13.29, 13.9, 26.22, 
26.22, 23.34, 25.22, 32.78, 37.36, 21.5, 19.01, 26.53, 24.91, 17.96, 
35.4, 17.05, 21.56, 16.39, 35.4, 8.98, 65.54, 13.45, 15.73, 22.39, 
17.99, 40.17, 8.0, 11.34, 14.99, 17.99, 10.99, 24.99, 22.41, 17.99, 
40.17, 7.2, 49.99, 41.1, 39.85, 16.99, 19.99, 21.99, 10.99, 19.73, 
14.99, 22.39, 6.55, 32.98, 27.99, 29.89, 19.99, 29.99, 37.36, 19.99, 
35.49, 15.99, 21.99, 46.71, 15.72, 42.97, 18.68, 18.87, 15.72, 19.99,
 29.99, 9.99, 28.02, 35.99, 39.99, 15.72, 15.72, 9.33, 44.48, 47.99, 
43.99, 47.99, 38.8, 23.27, 20.69, 44.6, 41.97, 15.75, 44.49, 19.87, 
51.99, 36.89, 15.99, 39.99, 27.99, 11.58, 43.99, 41.1, 19.99, 43.64, 
19.99, 36.89, 25.69]
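If you would rather stay with BeautifulSoup as in your question instead of a regex, the same spans can be selected by class name. A minimal sketch, assuming the topclick-list-element-price class seen in the output above is what carries the prices:

import requests
from bs4 import BeautifulSoup as bs

URL = "https://www.allkeyshop.com/blog/buy-battlefield-5-cd-key-compare-prices/"
headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"}

page = requests.get(URL, headers=headers)
soup = bs(page.content, 'html.parser')

# select only the spans whose class matched the price entries shown above
spans = soup.find_all("span", class_="topclick-list-element-price")
# BeautifulSoup decodes &euro; to €, so strip it before converting to float
prices = [float(s.get_text(strip=True).rstrip("€")) for s in spans]
print(prices)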

When you look at the page's source code you can see that what you are looking for is a <span> with the class name price, and you can parse it like this:

import time

import requests
from bs4 import BeautifulSoup as bs

URL = "https://www.allkeyshop.com/blog/buy-battlefield-5-cd-key-compare-prices/"
headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"}

def CheckPage1():
    page = requests.get(URL, headers=headers)
    soup = bs(page.content, 'html.parser')

    # all spans with prices
    span_prices = soup.findAll("span", {"class": "price"})

    # to get all prices you need to extract text or content attribute
    for span in span_prices:
        price = span.text
        # remove whitespace and print price
        print(price.strip())

        # to get prices without money sign uncomment one of those lines
        # print(price.strip()[:-1])
        # print(price.strip().strip('€'))

CheckPage1()
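If the end goal is something like a price alert (the smtplib import in the original question hints at that), the cleaned-up text can be turned into numbers and compared against a threshold. A small sketch building on span_prices from above, assuming each price ends in a € sign:

def lowest_price(span_prices):
    # strip whitespace and the trailing currency sign, then convert to float
    values = [float(span.text.strip().rstrip("€")) for span in span_prices]
    return min(values) if values else None

# example usage: warn when the cheapest offer drops below an arbitrary threshold
# cheapest = lowest_price(span_prices)
# if cheapest is not None and cheapest < 10:
#     print(f"Price alert: {cheapest} €")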
