BeautifulSoup/Python - converting an HTML table to CSV and getting one column's href

Published 2024-06-07 08:33:03


I am scraping an HTML table with the following code:

import csv
import urllib2
from bs4 import BeautifulSoup

with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    for i in range(39):
        url = "file:///C:/projects/HTML/Export.htm".format(i)
        u = urllib2.urlopen(url)
        try:
            html = u.read()
        finally:
            u.close()
        soup = BeautifulSoup(html)
        for tr in soup.find_all('tr')[2:]:
            tds = tr.find_all('td')
            row = [elem.text.encode('utf-8') for elem in tds]
            writer.writerow(row)

Everything works fine, but I am trying to grab the URL in column 9. Right now it gives me the text value instead of the URL.

Also, my HTML contains two tables. Is there any way to skip the first table and build the CSV file from the second table only?

Any help is very welcome, as I am new to Python and need this for a project where I am automating a daily conversion.

Thanks a lot!


2 Answers

You should access the href attribute of the a tag inside the ninth td tag (index 8):

import csv
import urllib2
from bs4 import BeautifulSoup

records = []

def my_parse(html):
    soup = BeautifulSoup(html)
    table2 = soup.find_all('table')[1]  # skip the first table, use the second
    for tr in table2.find_all('tr')[2:]:
        tds = tr.find_all('td')
        url = tds[8].a.get('href')  # href of the link in column 9
        records.append([elem.text.encode('utf-8') for elem in tds])
        # perhaps you want to update one of the elements of this last
        # record with the found url now?

for index in range(39):
    url = get_url(index)  # where is the formatting in your example happening?
    response = urllib2.urlopen(url)
    try:
        html = response.read()
    finally:
        response.close()  # always release the connection
    my_parse(html)

# It's more efficient to write only once
with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(records)

I took the liberty of defining a get_url function based on the index, because your example rereads the same file on every iteration, which I suspect is not what you actually want. I'll leave its implementation to you. I also simplified the resource handling so the response is always closed.
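For reference, a minimal get_url could look like the sketch below. The Export{n}.htm naming scheme is purely an assumption, since the question never shows how the 39 files are actually named:

```python
# Hypothetical helper: assumes the exported pages are named
# Export0.htm ... Export38.htm -- adjust to your real naming scheme.
def get_url(index):
    return "file:///C:/projects/HTML/Export{0}.htm".format(index)

print(get_url(0))  # file:///C:/projects/HTML/Export0.htm
```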

I also showed how to access the second table on that page.
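As a quick way to convince yourself that a link's href and its text are different things, here is a dependency-free sketch that does the same cell-link extraction with the stdlib html.parser instead of BeautifulSoup (Python 3 module name; the sample HTML is invented):

```python
from html.parser import HTMLParser

# Tiny stdlib-only parser that collects the href of every link found
# inside a table cell -- the same information tds[8].a.get('href')
# pulls out with BeautifulSoup.
class CellLinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True
        elif tag == 'a' and self.in_td:
            # attrs is a list of (name, value) pairs
            self.hrefs.append(dict(attrs).get('href'))

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

sample = ('<table><tr><td>plain text</td>'
          '<td><a href="http://example.com/1">link text</a></td></tr></table>')
collector = CellLinkCollector()
collector.feed(sample)
print(collector.hrefs)  # ['http://example.com/1']
```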

Got it working completely with the following code:

import csv
import urllib2
from bs4 import BeautifulSoup

records = []

# Grab the second table from the HTML
def my_parse(html):
    soup = BeautifulSoup(html)
    table2 = soup.find_all('table')[1]
    for tr in table2.find_all('tr')[2:]:
        tds = tr.find_all('td')
        url = tds[8].a.get('href')
        # Replace the link with its href so the CSV gets the URL
        # instead of the link text
        tds[8].a.replaceWith(url)
        records.append([elem.text.encode('utf-8') for elem in tds])

# Read each HTML file into memory
for index in range(39):
    url = "file:///C:/projects/HTML/Export.htm".format(index)
    response = urllib2.urlopen(url)
    try:
        html = response.read()
    finally:
        response.close()
    my_parse(html)

# Write the CSV file once at the end
with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(records)
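The collect-then-write pattern used at the end (building records in memory and calling writerows once) can be tried in isolation with an in-memory buffer; the rows below are invented sample data:

```python
import csv
import io

# Invented sample rows standing in for the scraped `records` list.
records = [
    ['Widget', '3', 'http://example.com/widget'],
    ['Gadget', '7', 'http://example.com/gadget'],
]

buf = io.StringIO()        # in-memory stand-in for listing.csv
writer = csv.writer(buf)
writer.writerows(records)  # one call writes every buffered row

print(buf.getvalue())
```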

Thank you so much for your help!
