Scraping with BeautifulSoup when the data is split across pages

I'm trying to scrape part of a website so I can move the data into Excel, where it is easier to work with.

The website is this link.

My code works fine for the first page of data, but as you can see the listing spans several pages, which are reached by appending &page=<page number> to the address. I figured I could loop my code and append the elements to a pandas DataFrame; however, I can't find out how to detect the last page.

Is this the way to do it when the data is split across several pages? Thanks for your help.

import requests
import pandas as pd
from bs4 import BeautifulSoup

pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas
pd.options.display.float_format = '{:,.2f}'.format

url = "https://www.boursorama.com/bourse/produits-de-bourse/levier/warrants/resultats?\
warrant_filter%5Bnature%5D=1&\
warrant_filter%5BunderlyingType%5D=&\
warrant_filter%5BunderlyingName%5D=TESLA&\
warrant_filter%5Bmaturity%5D=0&\
warrant_filter%5BdeltaMin%5D=&\
warrant_filter%5BdeltaMax%5D=&\
warrant_filter%5Bissuer%5D=&\
warrant_filter%5Bsearch%5D="

def parse_html_table(table):
    n_columns = 0
    n_rows = 0
    column_names = []

    # Find the number of rows and columns,
    # and pick up the column titles if present
    for row in table.find_all('tr'):

        # Determine the number of rows in the table
        td_tags = row.find_all('td')
        if len(td_tags) > 0:
            n_rows += 1
            if n_columns == 0:
                # Set the number of columns for our table
                n_columns = len(td_tags)

        # Handle column names if we find them
        th_tags = row.find_all('th')
        if len(th_tags) > 0 and len(column_names) == 0:
            for th in th_tags:
                column_names.append(th.get_text())

    # Safeguard on column titles
    if len(column_names) > 0 and len(column_names) != n_columns:
        raise Exception("Column titles do not match the number of columns")

    columns = column_names if len(column_names) > 0 else range(n_columns)
    df = pd.DataFrame(columns=columns, index=range(n_rows))
    row_marker = 0
    for row in table.find_all('tr'):
        column_marker = 0
        columns = row.find_all('td')
        for column in columns:
            df.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        if len(columns) > 0:
            row_marker += 1

    # Convert to float where possible
    for col in df:
        try:
            df[col] = df[col].astype(float)
        except ValueError:
            pass

    return df


response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')

table = soup.find_all('table')[0]
df = parse_html_table(table)
df = df.replace({'\n': ''}, regex=True)

2 Answers

Usually I would fetch the last page number first and then get all the pages, but this site wouldn't let me get the last page, so I checked it by hand before writing this. pandas.read_html makes the rest very simple:

import pandas as pd
import requests

url = "https://www.boursorama.com/bourse/produits-de-bourse/levier/warrants/resultats?\
warrant_filter%5Bnature%5D=1&\
warrant_filter%5BunderlyingType%5D=&\
warrant_filter%5BunderlyingName%5D=TESLA&\
warrant_filter%5Bmaturity%5D=0&\
warrant_filter%5BdeltaMin%5D=&\
warrant_filter%5BdeltaMax%5D=&\
warrant_filter%5Bissuer%5D=&\
warrant_filter%5Bsearch%5D="

frames = []
# 19 pages at the time of writing (the last page was checked by hand)
for i in range(1, 20):
    # prepend '&' so page becomes its own query parameter
    r = requests.get(url + '&page={}'.format(i))
    df_list = pd.read_html(r.text)  # one DataFrame per <table> on the page
    frames.append(df_list[0])
res = pd.concat(frames, ignore_index=True)
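
If you'd rather not hard-code the page count, one alternative is to keep requesting pages until a page comes back without a results table; pd.read_html raises ValueError when it finds no tables. A minimal sketch of that approach, reusing the url defined above and assuming the site simply stops returning the table past the last page:

import itertools

import pandas as pd
import requests

frames = []
for page in itertools.count(1):          # pages appear to be 1-indexed
    r = requests.get(url + '&page={}'.format(page))
    try:
        df_list = pd.read_html(r.text)
    except ValueError:                   # raised when no <table> is found
        break
    if df_list[0].empty:                 # some sites return an empty table instead
        break
    frames.append(df_list[0])
res = pd.concat(frames, ignore_index=True)

One caveat: some sites keep serving the last page's content for out-of-range page numbers; if that happens here, comparing each new page to the previous one (for example with df.equals(frames[-1])) is a safer stop condition.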

Why not grab the last pagination link (either >> or, in your example URL, the 8) and extract the last page number from its href attribute? Like this:

# soup is the BeautifulSoup object built in the question's code
pagination_links = soup.find_all("a", {"class": "c-pagination__link"})
last_page = int(pagination_links[-1]['href'].split('page=')[-1])
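
Putting that together with the download loop from the first answer, a sketch (assuming the pagination links appear on the first results page and that each href ends in page=<n>, as in the snippet above):

import pandas as pd
import requests
from bs4 import BeautifulSoup

r = requests.get(url)                  # url as defined in the question
soup = BeautifulSoup(r.text, 'lxml')

# The last pagination link (">>" or the highest page number)
# carries the last page number in its href
pagination_links = soup.find_all("a", {"class": "c-pagination__link"})
last_page = int(pagination_links[-1]['href'].split('page=')[-1])

frames = [pd.read_html(r.text)[0]]     # page 1 is already downloaded
for page in range(2, last_page + 1):
    r = requests.get(url + '&page={}'.format(page))
    frames.append(pd.read_html(r.text)[0])
res = pd.concat(frames, ignore_index=True)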
