How do I process multiple URLs with BeautifulSoup and turn the data into a DataFrame?

Published 2024-05-23 18:10:35


I have a list of URLs that I want to scrape data from. I can do the following for a single URL:

url_list = ['https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/4722/',
            'https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/6318/']


from bs4 import BeautifulSoup
import requests
import pandas as pd

url = "https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/4479/"
page = requests.get(url)

soup = BeautifulSoup(page.text, "html.parser")

# Each course page has a striped description list of <dt>/<dd> pairs
info = soup.find_all("dl", {'class': 'c-description-list c-description-list--striped'})

comp_info = pd.DataFrame()
cleaned_id_text = []            # field labels from the <dt> tags
for i in info[0].find_all('dt'):
    cleaned_id_text.append(i.text)
cleaned_id__attrb_text = []     # field values from the <dd> tags
for i in info[0].find_all('dd'):
    cleaned_id__attrb_text.append(i.text)


df = pd.DataFrame([cleaned_id__attrb_text], columns=cleaned_id_text)

But I don't know how to do this for several URLs and append the data to the DataFrame. Each URL describes one course, so I want to build a single DataFrame containing the data from all URLs. It would also be great to have each URL as a separate column in the DataFrame.


1 Answer
import requests
from bs4 import BeautifulSoup
import pandas as pd


numbers = [4722, 6318]


def main(url):
    with requests.Session() as req:
        for num in numbers:
            r = req.get(url.format(num))
            soup = BeautifulSoup(r.content, 'html.parser')
            target = soup.find(
                "dl", class_="c-description-list c-description-list--striped")
            names = [item.text for item in target.find_all("dt")]
            data = [item.get_text(strip=True) for item in target.find_all("dd")]
            df = pd.DataFrame([data], columns=names)
            # mode="a" appends one row (plus a header line) per URL
            df.to_csv("data.csv", index=False, mode="a")


main("https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/{}/")
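Note that `mode="a"` above re-writes the header line on every append, so the CSV ends up with a header row between every data row. One way to avoid that is to write the header only when the file does not exist yet — a minimal sketch, where `append_row` is a hypothetical helper, not part of the answer's code:

```python
import os

import pandas as pd


def append_row(df, path="data.csv"):
    # Write the header only on the first call, while the file does not
    # exist yet; later calls append data rows only.
    df.to_csv(path, index=False, mode="a", header=not os.path.exists(path))
```

Each scraped row can then be passed to `append_row` in place of the `df.to_csv(...)` call, and the resulting CSV has a single header line.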

Update per the user's request:

import requests
from bs4 import BeautifulSoup
import pandas as pd


def main(urls):
    with requests.Session() as req:
        allin = []
        for url in urls:
            r = req.get(url)
            soup = BeautifulSoup(r.content, 'html.parser')
            target = soup.find(
                "dl", class_="c-description-list c-description-list--striped")
            names = [item.text for item in target.find_all("dt")]
            names.append("url")
            data = [item.get_text(strip=True) for item in target.find_all("dd")]
            data.append(url)
            allin.append(data)
        # One row per URL; the column names come from the last page scraped
        df = pd.DataFrame(allin, columns=names)
        df.to_csv("data.csv", index=False)


urls = ['https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/4722/',
        'https://www2.daad.de/deutschland/studienangebote/international-programmes/en/detail/6318/']
main(urls)
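The update above assumes every page exposes the same `<dt>` labels in the same order; if the pages differ, the positional lists silently misalign. A more robust pattern is to build one dict per page (label → value) and let pandas align columns by label, filling missing fields with NaN. A minimal sketch, using hypothetical page data in place of real scraping:

```python
import pandas as pd


def combine_pages(pages):
    # `pages` is a list of dicts, one per URL, mapping each <dt> label
    # to its <dd> value. Building the frame from dicts aligns columns
    # by label; fields a page lacks become NaN.
    return pd.DataFrame(pages)


# Hypothetical rows standing in for two scraped pages:
pages = [
    {"Degree": "MSc", "Duration": "4 semesters", "url": "https://example.org/4722/"},
    {"Degree": "MA", "Language": "English", "url": "https://example.org/6318/"},
]
df = combine_pages(pages)
```

In the scraping loop, each dict would be built with `dict(zip(names, data))` before appending, so pages with extra or missing fields still land in the right columns.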
