How do I scrape multiple pages of a website using Python and BeautifulSoup4?

Posted 2024-06-16 11:17:06


I want to scrape the Traveloka.com website and have successfully gotten customer review data, but only from a single page. I need the review data from every page (page 1, 2, 3, and so on). I'm using beautifulsoup4. I've tried modifying the code and looking at tutorials, but it still doesn't work. Please help me. My code is below.

# imports

from urllib.request import urlopen as uReq

from bs4 import BeautifulSoup as soup

# URL link

my_url = 'https://www.traveloka.com/id-id/hotel/indonesia/horison-ultima-bandung-2000000081026?spec=26-05-2019.27-05-2019.1.1.HOTEL.2000000081026.Horison%20Ultima%20Bandung.2&prevSearchId=1634474608622074440&loginPromo=1&contexts=%7B%7D'

# HTML parsing (the fetch lines below were missing from the paste;
# page_html must be read before it can be parsed)

uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")

# grab each customer review

containers = page_soup.findAll("div",{"class":"_2K0Zb _278Mz"})  #div reviews

# loop

for container in containers:

    username_container = container.findAll("div",{"class":"css-76zvg2 r-1inkyih r-b88u0q"}) # review (username)
    username = username_container[0].text

    tanggal_container = container.findAll("div",{"class":"css-76zvg2 r-1ud240a r-1b43r93 r-b88u0q r-1d4mawv r-tsynxw"}) # review (tanggal)
    tanggal = tanggal_container[0].text

    deskripsi_container = container.findAll("div",{"class":"css-1dbjc4n r-1wzrnnt"}) # review (deskripsi)
    deskripsi = deskripsi_container[0].text

    print("username : " + username)
    print("tanggal : " + tanggal)
    print("deskripsi : " + deskripsi)
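One common pattern for the multi-page part of the question is to build one URL per page and run the same parsing code on each. Below is a minimal sketch: the `page=` query parameter is a hypothetical placeholder (the question's URL does not show how Traveloka paginates reviews), and the CSS class names are copied from the code above. Note that Traveloka renders reviews with JavaScript, so a plain `urlopen()` may return HTML without them; in that case a browser-automation tool such as Selenium, or the site's internal review API, would be needed instead.

```python
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

def page_url(base_url, page_number):
    # Append a hypothetical page parameter; adjust once the real
    # pagination scheme of the site is known.
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}page={page_number}"

def parse_reviews(page_html):
    # Extract (username, tanggal, deskripsi) tuples from one page,
    # using the same class names as the question's code.
    page_soup = soup(page_html, "html.parser")
    results = []
    for container in page_soup.find_all("div", {"class": "_2K0Zb _278Mz"}):
        username = container.find_all("div", {"class": "css-76zvg2 r-1inkyih r-b88u0q"})
        tanggal = container.find_all("div", {"class": "css-76zvg2 r-1ud240a r-1b43r93 r-b88u0q r-1d4mawv r-tsynxw"})
        deskripsi = container.find_all("div", {"class": "css-1dbjc4n r-1wzrnnt"})
        if username and tanggal and deskripsi:
            results.append((username[0].text, tanggal[0].text, deskripsi[0].text))
    return results

def scrape_all_pages(base_url, last_page):
    # Fetch and parse each page in turn, collecting all reviews.
    all_reviews = []
    for n in range(1, last_page + 1):
        with uReq(page_url(base_url, n)) as client:
            page_html = client.read()
        all_reviews.extend(parse_reviews(page_html))
    return all_reviews
```

`parse_reviews()` can be tested offline by feeding it saved HTML, which also makes it easy to confirm the class names still match before running the full multi-page loop.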
