Scraping every element from a website with BeautifulSoup

Published 2024-03-29 06:53:27


I wrote some code to scrape a real-estate website. This is the link:

https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/

From this page I can only get each apartment's location, size, and price, but is it possible to write code that visits each apartment's own page and scrapes the values from there, since it contains much more information? Check this link:

https://www.nekretnine.rs/stambeni-objekti/stanovi/arena-bulevar-arsenija-carnojevica-97m-2-lode-energoprojekt/NkvJK0Ou5tV/

I have pasted my code below. I noticed that the URL changes when I click on a specific listing. For example:

arena-bulevar-arsenija-carnojevica-97m-2-lode-energoprojekt/NkvJK0Ou5tV/

I thought about creating a for loop, but there is no way to know how the URL changes, because it ends with some ID:

NkvJK0Ou5tV

Here is my code:

from bs4 import BeautifulSoup
import requests

website = "https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/"

html = requests.get(website).text
soup = BeautifulSoup(html, 'lxml')

lokacija = soup.find_all('p', class_='offer-location text-truncate')
ukupna_kvadratura = soup.find_all('p', class_='offer-price offer-price--invert')
ukupna_cena = soup.find_all('div', class_='d-flex justify-content-between w-100')
ukupni_opis = soup.find_all('div', class_='mt-1 mb-1 mt-lg-0 mb-lg-0 d-md-block offer-meta-info offer-adress')


for lok, kvadratura, cena_stana, sumarno in zip(lokacija, ukupna_kvadratura, ukupna_cena, ukupni_opis):

    lok = lok.text.split(',')[0]  # location

    kv = kvadratura.span.text.split(' ')[0]  # area
    jed = kvadratura.span.text.split(' ')[1]  # unit of measure

    cena = cena_stana.span.text  # price

    sumarno = sumarno.text

    datum = sumarno.split('|')[0].strip()
    status = sumarno.split('|')[1].strip()
    opis = sumarno.split('|')[2].strip()

    print(lok, kv, jed, cena, datum, status, opis)

2 Answers

You can get the `href` from each listing's div and then request that detail page.
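In other words, you don't have to guess the IDs at the end of the URLs: each listing on the list page already carries its detail-page `href`. A minimal sketch against a static HTML snippet (the `offer-body` class and the second href are illustrative assumptions; adapt the selector to the site's actual markup):

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# Static snippet standing in for the listing page; the real site's
# markup will differ, so adjust the selectors accordingly.
html = """
<div class="offer-body">
  <h2><a href="/stambeni-objekti/stanovi/arena-bulevar-arsenija-carnojevica-97m-2-lode-energoprojekt/NkvJK0Ou5tV/">Arena</a></h2>
</div>
<div class="offer-body">
  <h2><a href="/stambeni-objekti/stanovi/neki-drugi-stan/AbCdEf12345/">Drugi stan</a></h2>
</div>
"""

base = "https://www.nekretnine.rs"
soup = BeautifulSoup(html, "html.parser")

# Collect every listing's detail-page URL; the IDs are already in the hrefs
detail_urls = [urljoin(base, a["href"]) for a in soup.select("div.offer-body h2 a")]

for url in detail_urls:
    print(url)
    # detail = BeautifulSoup(requests.get(url).text, "lxml")
    # ...scrape the extra fields from the apartment page here...
```

The commented-out lines show where the per-apartment request would go; everything above them runs without touching the network.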

You can iterate over the links provided by the pagination at the bottom of the page:

from bs4 import BeautifulSoup as soup
import requests

base = 'https://www.nekretnine.rs'
d = soup(requests.get(base + '/stambeni-objekti/stanovi/lista/po-stranici/10/').text, 'html.parser')

def scrape_page(page):
    # One dict per listing card: title, location, and price
    return [{'title': i.h2.get_text(strip=True),
             'loc': i.p.get_text(strip=True),
             'price': i.find('p', {'class': 'offer-price'}).get_text(strip=True)}
            for i in page.find_all('div', {'class': 'row offer'})]

result = [scrape_page(d)]
# Follow the "next page" arrow until there is none left
while d.find('a', {'class': 'pagination-arrow arrow-right'}):
    next_href = d.find('a', {'class': 'pagination-arrow arrow-right'})['href']
    d = soup(requests.get(base + next_href).text, 'html.parser')
    result.append(scrape_page(d))
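One design note on the loop above: instead of pasting the host onto the relative href by hand, `urllib.parse.urljoin` from the standard library resolves an href against a base URL and also copes with hrefs that are already absolute. A small self-contained illustration (the example paths are made up):

```python
from urllib.parse import urljoin

base = "https://www.nekretnine.rs/stambeni-objekti/stanovi/lista/po-stranici/10/"

# A site-relative href is resolved against the scheme and host of the base URL
nxt = urljoin(base, "/stambeni-objekti/stanovi/lista/po-stranici/10/stranica/2/")
print(nxt)

# An already-absolute href passes through unchanged
print(urljoin(base, "https://www.nekretnine.rs/stambeni-objekti/"))
```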
