Python: getting a list of the csv files in a public GitHub repository

Posted 2024-04-19 08:51:54


I'm trying to pull some csv files from a public repository using Python. I already have the code to process the data once I have the file URLs. For GitHub, is there something like ls? I didn't see anything in GitHub's API, and it seems doable with PyCurl, but then I'd have to parse the HTML myself. Is there any pre-built way to do this?


Tags: file, csv, data, method, code, github, api, url
1 answer

User
#1 · Posted 2024-04-19 08:51:54

A BeautifulSoup solution (quick-and-dirty, and probably quite inefficient):

# Import the required packages: 
from bs4 import BeautifulSoup
import requests
import pandas as pd
import re 

# Store the url as a string scalar: url => str
url = "https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports"

# Issue request: r => requests.models.Response
r = requests.get(url)

# Extract text: html_doc => str
html_doc = r.text

# Parse the HTML (naming the parser explicitly avoids a bs4 warning): soup => bs4.BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

# Find all 'a' tags (which define hyperlinks): a_tags => bs4.element.ResultSet
a_tags = soup.find_all('a')

# Build raw-content urls for links ending in .csv, skipping 'a' tags
# that have no href attribute (link.get('href') can return None): urls => list
urls = ['https://raw.githubusercontent.com' + re.sub('/blob', '', link.get('href'))
        for link in a_tags if link.get('href') and link.get('href').endswith('.csv')]

# Derive a name for each Data Frame from its file name, minus the path and extension: df_list_names => list
df_list_names = [url.split('/')[-1].replace('.csv', '') for url in urls]

# Read each csv into a Data Frame: df_list => list
df_list = [pd.read_csv(url, sep=',') for url in urls]

# Name the dataframes in the list, coerce to a dictionary: df_dict => dict
df_dict = dict(zip(df_list_names, df_list))
