Filtering list-element data from HTML pages with BeautifulSoup

-2 votes
2 answers
894 views
Asked 2025-05-10 10:07

I am trying to collect data from multiple HTML pages, specifically the data inside list elements, and store it in a dictionary for later use. The extraction itself gives me the results I expect, but when I put the data into the dictionary, each entry overwrites the previous one instead of being added as a new entry. Can anyone tell me where I am going wrong?

Current code

from BeautifulSoup import BeautifulSoup
import requests
import re

person_dict = {}

.....
<snip>
<snip>
.....

    soup = BeautifulSoup(response.text)

    div = soup.find('div', {'id': 'object-a'})
    ul = div.find('ul', {'id': 'object-a-1'})
    li_a = ul.findAll('a', {'class': 'title'})
    li_p = ul.findAll('p', {'class': 'url word'})
    li_po = ul.findAll('p')

    for a in li_a:
        nametemp = a.text
        name = (nametemp.split(' - ')[0])
        person_dict.update({'Name': name})     #I attempted updating
    for lip in li_p:
        person_dict['url'] = lip.text          #I attempted adding directly

    for email in li_po:   
        reg_emails = re.compile('[a-zA-Z0-9.]*' + '@')        
        person_dict['email'] = reg_emails.findall(email.text)

print person_dict # results in 1 entry being returned
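The overwriting you describe can be reproduced in isolation: a plain dict holds exactly one value per key, so assigning to the same key on every loop iteration replaces the previous value. A minimal sketch, independent of BeautifulSoup:

```python
person_dict = {}

# Each iteration writes to the same key, so only the last value survives.
for name in ['Person1', 'Person2', 'Person3']:
    person_dict.update({'Name': name})

print(person_dict)  # {'Name': 'Person3'}
```

Both `update({'Name': ...})` and `person_dict['Name'] = ...` behave this way; the fix is to create a new container (a new dict, or a list entry) per list item, as the answers below do.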

Test data

<div id="object-a">
    <ul id="object-a-1">
            <li>
              <a href="www.url.com/person" class="title">Person1</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 1, some foobar possibly an email@address.com &nbsp;...</p>
            </li>


            <li>
              <a href="www.url.com/person" class="title">Person2</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 2, some foobar possibly an email@address.com &nbsp;...</p>
            </li>


            <li>
              <a href="www.url.com/person" class="title">Person3</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 3, some foobar, possibly an email@address.com &nbsp;...</p>
            </li>
    </ul>
</div>


2 Answers

0

You may be going about this the wrong way. Try something like this:

from BeautifulSoup import BeautifulSoup
import re

text = open('soup.html') # reading the sample HTML from a local file instead of requests
soup = BeautifulSoup(text)
list_items = soup.findAll('li')

people = []

for item in list_items:
    name = item.find('a', {'class': 'title'}).text
    url = item.find('p', {'class': 'url word'}).text
    email_text = item.findAll('p')[1].text
    match = re.search(r'[\w\.-]+@[\w\.-]+', email_text)
    email = match.group(0) if match else None  # guard against paragraphs with no email

    person = {'name': name, 'url': url, 'email': email}
    people.append(person)

print people
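If you later need to look entries up by name, the list built above can be re-keyed into a dict with a comprehension. A sketch, assuming names are unique; the `people` entries here are hypothetical stand-ins for the scraped results:

```python
# Hypothetical scraped results standing in for the 'people' list above.
people = [
    {'name': 'Person1', 'url': 'www.url.com/Person1', 'email': 'email@address.com'},
    {'name': 'Person2', 'url': 'www.url.com/Person1', 'email': 'email@address.com'},
]

# Re-key the list by name (assumes names are unique; duplicates would overwrite).
by_name = {p['name']: p for p in people}
print(by_name['Person2']['url'])  # www.url.com/Person1
```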
1

Whether you need a dictionary at all is up to you. If you do use one, though, it is probably better to build a separate dictionary for each list item rather than putting all the entries into a single dictionary.

I suggest storing all the entries in a list. The code below shows both suggestions: storing each item's fields as a tuple, and storing them as a dictionary.

If you only want to display the information or write it to a file, tuples are the faster option.

# Two possible ways of storing your data: a list of tuples, or a list of dictionaries
entries_tuples = []             
entries_dictionary = []

soup = BeautifulSoup(text)

div = soup.find('div', {'id': 'object-a'})
ul = div.find('ul', {'id': 'object-a-1'})

for li in ul.findAll('li'):
    title = li.find('a', {'class': 'title'})
    url_href = title.get('href')
    person = title.text
    url_word = li.find('p', {'class': 'url word'}).text
    emails = re.findall(r'\s+(\S+@\S+)(?:\s+|\Z)', li.findAll('p')[1].text, re.M)       # allow for multiple emails

    entries_tuples.append((url_href, person, url_word, emails))
    entries_dictionary.append({'url_href' : url_href, 'person' : person, 'url_word' : url_word, 'emails' : emails})

for url_href, person, url_word, emails in entries_tuples:
    print '{:25} {:10} {:25} {}'.format(url_href, person, url_word, emails)

print

for entry in entries_dictionary:
    print '{:25} {:10} {:25} {}'.format(entry['url_href'], entry['person'], entry['url_word'], entry['emails'])

For your sample HTML, this displays:

www.url.com/person        Person1    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person2    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person3    www.url.com/Person1       [u'email@address.com', u'email@address.com']

www.url.com/person        Person1    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person2    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person3    www.url.com/Person1       [u'email@address.com', u'email@address.com']

Note that extracting email addresses from free text is a whole problem in itself. The solution above may match strings that are not actually well-formed email addresses, but that is acceptable here.
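As a slightly stricter (though still far from RFC-compliant) alternative, the pattern can require word characters before the `@` and at least one dot-separated domain label after it. This is only a sketch; robust email validation is considerably more involved:

```python
import re

# Stricter than '[\w.-]+@[\w.-]+': the domain must contain at least one dot,
# e.g. 'user@host' alone is rejected while 'user@host.com' matches.
EMAIL_RE = re.compile(r'\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b')

text = 'Person 1, some foobar possibly an email@address.com ...'
print(EMAIL_RE.findall(text))  # ['email@address.com']
```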
