Parsing a web page with Beautiful Soup

Published 2024-05-13 13:19:57


I'm trying to scrape the following page with BeautifulSoup: https://www.racingpost.com. For example, I want to extract all the course names. A course name appears under this tag:

<span class="rh-cardsMatrix__courseName">Wincanton</span>

My code is below:

from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "https://www.racingpost.com"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, "html.parser")
pages = soup.find_all('span',{'class':'rh-cardsMatrix__courseName'})
for page in pages:
    print(page.text)

I get no output at all. I suspect something is going wrong in the parsing; I've already tried every parser BeautifulSoup supports. Can anyone advise me? Is this even possible with BeautifulSoup?


3 Answers

Thanks to mattbasta's answer, which pointed me in the right direction and solved my problem:

soup = BeautifulSoup(data, "html.parser")
pages = soup.find_all('span', {'class': 'rh-cardsMatrix__courseName'})


Looking at the raw source of https://www.racingpost.com, no element has the class name rh-cardsMatrix__courseName. Querying for it in the browser shows that it does exist once the page has been rendered. That means the elements with that class name are generated by JavaScript, and BeautifulSoup does not support JavaScript (it never runs it).
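As a quick sanity check (a minimal sketch against a hard-coded snippet, not the live page), the selector itself is fine whenever the tag is actually present in the HTML handed to BeautifulSoup:

```python
from bs4 import BeautifulSoup

# Static snippet containing the target tag: BeautifulSoup finds it without trouble,
# which confirms the issue is the JavaScript-rendered markup, not the selector.
html = '<div><span class="rh-cardsMatrix__courseName">Wincanton</span></div>'
soup = BeautifulSoup(html, "html.parser")
courses = [span.text for span in soup.find_all('span', {'class': 'rh-cardsMatrix__courseName'})]
print(courses)  # ['Wincanton']
```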

Instead, you want to find the endpoints the page calls to get the data those elements are built from (e.g., look for XHR requests in your browser's developer tools) and fetch the data you need from those endpoints directly.

The data you are looking for appears to be buried in a script block near the end of the raw HTML.

You can try the following:

import requests
from bs4 import BeautifulSoup
import json
import pandas as pd
from pandas import json_normalize

url = 'https://www.racingpost.com'
res = requests.get(url).text

raw = res.split('cardsMatrix":{"courses":')[1].split(',"date":"2020-03-06","heading":"Tomorrow\'s races"')[0]
data = json.loads(raw)
df = json_normalize(data)

Output:

id  abandoned   allWeather  surfaceType     colour  name    countryCode     meetingUrl  hashName    meetingTypeCode     races
0   1083    False   True    Polytrack   3   Chelmsford  GB  /racecards/1083/chelmsford-aw/2020-03-06    chelmsford-aw   Flat    [{'id': 753047, 'abandoned': False, 'result': ...
1   1212    False   False       4   Ffos Las    GB  /racecards/1212/ffos-las/2020-03-06     ffos-las    Jumps   [{'id': 750498, 'abandoned': False, 'result': ...
2   1138    False   True    Polytrack   11  Dundalk     IRE     /racecards/1138/dundalk-aw/2020-03-06   dundalk-aw  Flat    [{'id': 753023, 'abandoned': False, 'result': ...
3   513     False   True    Tapeta  5   Wolverhampton   GB  /racecards/513/wolverhampton-aw/2020-03-06  wolverhampton-aw    Flat    [{'id': 750658, 'abandoned': False, 'result': ...
4   565     False   False       0   Jebel Ali   UAE     /racecards/565/jebel-ali/2020-03-06     jebel-ali   Flat    [{'id': 753155, 'abandoned': False, 'result': ...
5   206     False   False       0   Deauville   FR  /racecards/206/deauville/2020-03-06     deauville   Flat    [{'id': 753186, 'abandoned': False, 'result': ...
6   54  True    False       1   Sandown     GB  /racecards/54/sandown/2020-03-06    sandown     Jumps   [{'id': 750510, 'abandoned': True, 'result': F...
7   30  True    False       2   Leicester   GB  /racecards/30/leicester/2020-03-06  leicester   Jumps   [{'id': 750501, 'abandoned': True, 'result': F...

Note: you have to inspect the raw string manually to pick split markers that slice res in the right places.
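Rather than hand-picking split markers, a regular expression anchored on the window.__PRELOADED_STATE assignment is less brittle. A sketch, run against a hard-coded stand-in for res:

```python
import json
import re

# Stand-in for the raw HTML/JS; in practice this would be requests.get(url).text.
sample = 'window.__PRELOADED_STATE = {"cardsMatrix": {"courses": [{"name": "Wincanton"}]}};\n'

# Capture everything between the assignment and the trailing semicolon.
match = re.search(r'window\.__PRELOADED_STATE\s*=\s*(\{.*\})\s*;', sample, re.DOTALL)
state = json.loads(match.group(1))
print([c["name"] for c in state["cardsMatrix"]["courses"]])  # ['Wincanton']
```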

Edit: a more robust solution.

To collect all the script blocks and parse the right one, try the following:

url = 'https://www.racingpost.com'
res = requests.get(url).content
soup = BeautifulSoup(res, "html.parser")

# salient data seems to be in 20th script block 
data = soup.find_all("script")[19].text
clean = data.split('window.__PRELOADED_STATE = ')[1].split(";\n")[0]
clean = json.loads(clean)
clean.keys()

Output:

dict_keys(['stories', 'bookmakers', 'panelTemplate', 'cardsMatrix', 'advertisement'])

Then retrieve the data stored under the cardsMatrix key:

parsed = json_normalize(clean["cardsMatrix"]).courses.values[0]
pd.DataFrame(parsed)

Output, again as above (but via the more robust approach):

id  abandoned   allWeather  surfaceType     colour  name    countryCode     meetingUrl  hashName    meetingTypeCode     races
0   1083    False   True    Polytrack   3   Chelmsford  GB  /racecards/1083/chelmsford-aw/2020-03-06    chelmsford-aw   Flat    [{'id': 753047, 'abandoned': False, 'result': ...
1   1212    False   False       4   Ffos Las    GB  /racecards/1212/ffos-las/2020-03-06     ffos-las    Jumps   [{'id': 750498, 'abandoned': False, 'result': ...
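From there, getting back to the original goal (all the course names) is one column away. A sketch using a hard-coded stand-in for the list of course dicts recovered above:

```python
import pandas as pd

# Stand-in for the course dicts pulled out of clean["cardsMatrix"]["courses"].
parsed = [
    {"id": 1083, "name": "Chelmsford", "countryCode": "GB"},
    {"id": 1212, "name": "Ffos Las", "countryCode": "GB"},
]
df = pd.DataFrame(parsed)
print(df["name"].tolist())  # ['Chelmsford', 'Ffos Las']
```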
